diff --git a/.gitattributes b/.gitattributes index 59c55597a8f5dcb868b7305a2aa78c4f5cd5263f..0a429f798076fa50108596e78ade5e465d97de4c 100644 --- a/.gitattributes +++ b/.gitattributes @@ -1252,3 +1252,11 @@ data/2025/2504_07xxx/2504.07943/9854c588-fbd0-47a1-b560-4e8c5b07fb00_origin.pdf data/2025/2504_07xxx/2504.07956/233f7388-cf46-41c3-99bf-1eb30e12bcd2_origin.pdf filter=lfs diff=lfs merge=lfs -text data/2025/2504_08xxx/2504.08837/8cb49279-0a74-44c0-aaf5-baf8779e12d9_origin.pdf filter=lfs diff=lfs merge=lfs -text data/2025/2504_13xxx/2504.13914/ff11ce5d-6bb3-4214-9c75-cd867f0e0926_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2504_07xxx/2504.07448/45bd5bd8-55af-45e5-b183-b2d70c8be5c1_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2504_07xxx/2504.07532/a56e2d1f-04ce-46c0-9b86-0a610ecd5033_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2504_07xxx/2504.07709/a6a116d9-c584-4299-91c7-a46bfdb58f50_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2504_07xxx/2504.07745/dc771de3-3dba-4b91-9d66-c6d31ae45ee8_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2504_07xxx/2504.07839/e60cb9ee-e216-46b4-a879-cab7695d37bd_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2504_07xxx/2504.07866/db4d652b-f5d0-4008-97fa-5ff3dca4208f_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2504_10xxx/2504.10514/3e20df2e-9239-4987-81d7-686c92a800c4_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2504_11xxx/2504.11468/29f6f006-1646-44f2-b6fb-f930a57c3738_origin.pdf filter=lfs diff=lfs merge=lfs -text diff --git a/data/2025/2504_07xxx/2504.07448/45bd5bd8-55af-45e5-b183-b2d70c8be5c1_content_list.json b/data/2025/2504_07xxx/2504.07448/45bd5bd8-55af-45e5-b183-b2d70c8be5c1_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..741da0dc14506bea5411d8d6b8988b38cf1fdcc4 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/45bd5bd8-55af-45e5-b183-b2d70c8be5c1_content_list.json @@ -0,0 +1,2688 @@ +[ + 
{ + "type": "text", + "text": "LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation", + "text_level": 1, + "bbox": [ + 171, + 98, + 823, + 140 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Juzheng Zhang $^{1}$ , Jiacheng You $^{2}$ , Ashwinee Panda $^{1}$ , Tom Goldstein $^{1}$", + "bbox": [ + 181, + 165, + 750, + 181 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{1}$ University of Maryland $^{2}$ Tsinghua University", + "bbox": [ + 181, + 181, + 542, + 198 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 457, + 233, + 539, + 250 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Low-Rank Adaptation (LoRA) has emerged as a popular parameter-efficient fine-tuning (PEFT) method for Large Language Models (LLMs), yet it still incurs notable overhead and suffers from parameter interference in multi-task scenarios. We propose LoRA with Reduced Interference (LoRI), a simple yet effective approach that freezes the projection matrices $A$ as random projections and sparsifies the matrices $B$ using task-specific masks. This design substantially reduces the number of trainable parameters while maintaining strong task performance. Moreover, LoRI minimizes cross-task interference in adapter merging by leveraging the orthogonality between adapter subspaces, and supports continual learning by using sparsity to mitigate catastrophic forgetting. Extensive experiments across natural language understanding, mathematical reasoning, code generation, and safety alignment tasks demonstrate that LoRI outperforms full fine-tuning and existing PEFT methods, while using up to $95\\%$ fewer trainable parameters than LoRA. In multi-task experiments, LoRI enables effective adapter merging and continual learning with reduced cross-task interference. 
Code is available at: https://github.com/juzhengz/LoRI.", + "bbox": [ + 228, + 265, + 767, + 501 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 171, + 527, + 318, + 544 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Large language models (LLMs) (Brown et al., 2020; Touvron et al., 2023; Chowdhery et al., 2023) have transformed deep learning, showcasing remarkable capabilities across various domains. However, their deployment remains computationally demanding, particularly when fine-tuning is required to adapt to downstream tasks or align with human preferences. To mitigate the high resource costs, researchers have developed a range of parameter-efficient fine-tuning (PEFT) techniques. Among these techniques, LoRA (Hu et al., 2021) has gained widespread adoption due to its compelling balance of performance and efficiency. Nevertheless, LoRA still introduces notable memory overhead, particularly in large-scale models. Consequently, recent research has focused on further optimizing LoRA by reducing the number of trainable parameters without compromising performance (Kopiczko et al., 2023; Ding et al., 2023; Zhang et al., 2023b).", + "bbox": [ + 169, + 559, + 826, + 715 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Recent studies (Yu et al., 2024; Panda et al., 2024) have shown that delta parameters – the differences between fine-tuned and pretrained model weights – exhibit significant redundancy. Furthermore, previous works (Zhang et al., 2023b; Zhu et al., 2024) have observed that freezing matrices $A$ in LoRA often achieves comparable performance to training them. Motivated by these findings, we propose LoRA with Reduced Interference (LoRI). LoRI keeps matrices $A$ fixed as random projections, while training matrices $B$ using task-specific sparse masks. 
To retain the most critical elements of $B$ , LoRI performs a calibration process to extract sparse masks by selecting the highest-magnitude elements across all layers and projections. As shown in Figure 1(a), LoRI maintains performance even with $90\\%$ sparsity in $B$ while keeping $A$ frozen. This demonstrates that adaptation does not require updating $A$ , and that $B$ has considerable redundancy. By applying more constrained updates than LoRA, LoRI significantly reduces the number of trainable parameters while better preserving the pretrained model's knowledge during adaptation.", + "bbox": [ + 169, + 719, + 826, + 902 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 31, + 517, + 47 + ], + "page_idx": 0 + }, + { + "type": "aside_text", + "text": "arXiv:2504.07448v2 [cs.LG] 2 Aug 2025", + "bbox": [ + 22, + 281, + 60, + 715 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "Correspondence to: juzheng@umd.edu.", + "bbox": [ + 197, + 910, + 447, + 924 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "1", + "bbox": [ + 493, + 948, + 503, + 959 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/007808f5857139c08bd5f92f5d6236e77444fe95cba69227193b1d3c7308caee.jpg", + "image_caption": [ + "Figure 1: (a) Varying sparsity ratios in matrices $B$ while freezing $A$ . Performance remains stable even at $90\\%$ sparsity in matrices $B$ . (b) Merging three adapters via weighted averaging. LoRA suffers degradation due to parameter interference, while LoRI preserves task performance. (c) Continual learning from Safety to NLU. LoRA suffers from catastrophic forgetting, while LoRI retains safety alignment. Results for NLU are averaged over eight tasks. GSM8K accuracy (Math), HumanEval pass@10 (Code), and HEx-PHI refusal rate (Safety) are reported individually. Base model: Llama-3-8B, rank $r = 32$ ." 
+ ], + "image_footnote": [], + "bbox": [ + 187, + 104, + 816, + 251 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Multi-task learning is essential for enabling versatile models with multi-task capabilities, which is traditionally performed via joint training on a combination of task-specific datasets (Caruana, 1997; Sener & Koltun, 2018). However, training large models on this data mixture is prohibitively expensive in terms of time and compute. Model merging is a training-free alternative for building powerful models by combining existing ones (Ilharco et al., 2022; Yadav et al., 2023; Yu et al., 2024). This approach is well-suited for merging LoRA adapters, enabling multi-task capabilities within a single model during inference (Wang et al., 2024a; Prabhakar et al., 2024; Stoica et al., 2024). However, as shown in Figure 1(b), directly merging heterogeneous LoRAs often results in parameter interference, leading to degraded performance compared to single-task LoRAs. Additionally, many existing merging methods require trial-and-error to identify the optimal method for a specific combination of tasks. LoRI addresses these challenges by using fixed, randomly initialized projection $A$ , which maps task-specific adapters into approximately orthogonal subspaces. This reduces interference when merging multiple adapters. In addition, LoRI enables adapter merging without manual selection of merging methods.", + "bbox": [ + 169, + 367, + 826, + 578 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Beyond multi-tasking, safety-critical scenarios require that each newly introduced adapter enhances model capabilities while preserving the safety alignment of the pretrained base model (Qi et al., 2023). LoRI provides a lightweight continual learning approach for adapting models while preserving safety, where training is performed sequentially across tasks (Lopez-Paz & Ranzato, 2017; Wu et al., 2022; Ouyang et al., 2022). 
The strategy involves first fine-tuning an adapter on safety data to establish alignment, followed by separate adaptation to each downstream task. However, as illustrated in Figure 1(c), continual learning often leads to catastrophic forgetting (Li & Hoiem, 2017; Dong et al., 2023; Luo et al., 2023), wherein the adaptation to new tasks substantially compromises previously acquired knowledge. LoRI mitigates forgetting by leveraging the sparsity of projection $B$ through task-specific masks. This isolation of parameter updates across tasks facilitates continual learning with minimal interference, preserving both safety and task effectiveness.", + "bbox": [ + 169, + 583, + 823, + 750 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To evaluate the effectiveness of LoRI, we conduct extensive experiments across a diverse suite of benchmarks spanning natural language understanding (NLU), mathematical reasoning, code generation, and safety alignment tasks. Using Llama-3-8B and Mistral-7B as base models, our results show that LoRI achieves performance comparable to - or better than - full fine-tuning (FFT), LoRA, and other PEFT methods, while using up to $95\\%$ fewer trainable parameters than LoRA. Notably, LoRI with $90\\%$ sparsity in $B$ surpasses LoRA by $17.3\\%$ on HumanEval with Llama-3. Beyond single-task adaptation, we evaluate LoRI in multi-task settings, including adapter merging and continual learning scenarios. Concatenated merging of LoRI adapters consistently outperforms LoRA adapters overall, closely matching the performance of single-task LoRA baseline. 
In continual learning, LoRI significantly outperforms LoRA in mitigating catastrophic forgetting of safety alignment, while maintaining strong performance on downstream tasks.", + "bbox": [ + 169, + 757, + 826, + 925 + ], + "page_idx": 1 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 493, + 946, + 504, + 959 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/99c21a09e320e0a352dbdfe22541f16a85c0b86983910e3f93e2beb03b3a36e4.jpg", + "image_caption": [ + "(a) LoRI method." + ], + "image_footnote": [], + "bbox": [ + 181, + 102, + 380, + 224 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/2c220aa1e804e1e987d6e39cec73c1e11728da9de51a27e30e19b0b8fd4b34a9.jpg", + "image_caption": [ + "(b) LoRI merging." + ], + "image_footnote": [], + "bbox": [ + 395, + 101, + 599, + 224 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/621a497f2b234a3394733086b28a12dee2b8e030d8bf96f6caeb066368484c15.jpg", + "image_caption": [ + "(c) LoRI continual learning.", + "Figure 2: Overview of the proposed LoRI method. (a) LoRI freezes the projection matrices $A_{t}$ and sparsely updates $B_{t}$ using task-specific masks $M_{t}$ . (b) LoRI enables adapter merging of multiple task-specific adapters with reduced parameter interference. (c) LoRI builds safety adapters by continual learning with reduced catastrophic forgetting." + ], + "image_footnote": [], + "bbox": [ + 607, + 101, + 823, + 226 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2 Method", + "text_level": 1, + "bbox": [ + 171, + 334, + 277, + 349 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.1 Freezing Low-Rank Projections with Sparse Masking", + "text_level": 1, + "bbox": [ + 169, + 368, + 609, + 383 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Freezing Projection $A$ . 
LoRA (Hu et al., 2021) fine-tunes a weight update matrix as a product of two low-rank matrices to adapt LLMs to new tasks. Formally, for a specific task $t$ , given a pretrained weight matrix $W_0 \\in \\mathbb{R}^{d_{\\mathrm{in}} \\times d_{\\mathrm{out}}}$ , the weight update $\\Delta_t \\in \\mathbb{R}^{d_{\\mathrm{in}} \\times d_{\\mathrm{out}}}$ is constrained to a low-rank decomposition:", + "bbox": [ + 169, + 393, + 823, + 455 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nh = x W _ {0} + x \\Delta_ {t} = x W _ {0} + x A _ {t} B _ {t}. \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 380, + 462, + 823, + 479 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $A_{t} \\in \\mathbb{R}^{d_{\\mathrm{in}} \\times r}$ , $B_{t} \\in \\mathbb{R}^{r \\times d_{\\mathrm{out}}}$ , and $r \\ll \\min\\{d_{\\mathrm{in}}, d_{\\mathrm{out}}\\}$ . We denote $\\Delta_t$ as the LoRA adapter for task $t$ . In practice, LoRA adapters are typically applied to multiple projection matrices (e.g., $W_q, W_v$ ) within each transformer layer.", + "bbox": [ + 169, + 488, + 823, + 532 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Typically, the low-rank projection matrices $A_{t}$ and the low-rank expansion matrices $B_{t}$ are updated via gradient descent. Matrices $A_{t}$ are usually initialized with Kaiming Uniform distribution (He et al., 2015), while matrices $B_{t}$ are initialized to zero, ensuring that $\\Delta_{t} = 0$ at the start of training. However, in LoRI, we fix $A_{t}$ as random projections, meaning that the model only learns how to combine the fixed subspace via $B_{t}$ . By freezing $A_{t}$ , we eliminate the need to store their gradients and optimizer states, thereby reducing memory consumption. 
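As a minimal sketch of this setup (NumPy, with hypothetical layer sizes; not the paper's implementation), Eq. (1) with a frozen Kaiming-uniform $A_t$ and a trainable $B_t$ initialized to zero looks like:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 48, 8   # hypothetical sizes; r << min(d_in, d_out)

W0 = rng.normal(size=(d_in, d_out))        # pretrained weight, frozen
# A_t is fixed at initialization as a Kaiming-uniform random projection and
# is never updated, so it needs no gradients or optimizer state.
bound = np.sqrt(6.0 / d_in)
A = rng.uniform(-bound, bound, size=(d_in, r))
B = np.zeros((r, d_out))                   # trainable; starts at zero

def lori_forward(x):
    # Eq. (1): h = x W0 + x A_t B_t, with only B_t trainable.
    return x @ W0 + x @ A @ B

x = rng.normal(size=(4, d_in))
# With B = 0, the adapter contributes nothing, so h equals the base output.
assert np.allclose(lori_forward(x), x @ W0)
```

Because only the masked entries of $B$ are ever trained, the optimizer state scales with the surviving entries of $B$ rather than with both factors.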
During inference, similar to LoRA, LoRI merges the low-rank updates by adding $A_{t}B_{t}$ to $W_{0}$ , ensuring no additional inference latency compared to full fine-tuning.", + "bbox": [ + 169, + 537, + 826, + 652 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Sparse Masking for Projection $B$ . LoRI freezes matrices $A_{t}$ and selectively updates only the most relevant parameters in $B_{t}$ for each task, as illustrated in Figure 2(a). For task $t$ , it first extracts sparse masks $M_{t}$ through a calibration process, then applies the masks to constrain training to a limited subset of parameters in $B_{t}$ . During mask calibration, LoRI updates $B_{t}$ without masking using a calibration dataset $\mathcal{D}_t^C$ , sampled from the adaptation dataset $\mathcal{D}_t$ . After this phase, LoRI collects all $B_{t}$ matrices from the model across layers and projections. Then it computes a global threshold $\tau_t$ , defined as the $s\%$ quantile of the absolute values of all elements from these matrices, where $s$ is the sparsity ratio. For each matrix $B_{t}$ , the corresponding sparse mask $M_{t}$ is computed as:", + "bbox": [ + 169, + 667, + 826, + 796 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nM_{t} = \mathbb{I}\left(\left|B_{t}\right| \geq \tau_{t}\right), \quad \text{where} \quad \tau_{t} = \operatorname{Quantile}_{s}\left(\bigcup \left|B_{t}\right|\right). \tag{2}\n$$\n", + "text_format": "latex", + "bbox": [ + 295, + 805, + 823, + 830 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Here, $\mathbb{I}(\cdot)$ denotes the indicator function applied element-wise. This ensures that only the top- $(1 - s)\%$ of parameters (by magnitude) across all layers and projections are retained. The masks can also be derived using gradient-based measures such as the Fisher information matrix (Guo et al., 2023; Iurada et al., 2025) or SNIP score (Lee et al., 2018). 
However, these methods capture local sensitivity at a specific training step, whereas magnitude reflects cumulative importance over the entire fine-tuning process.", + "bbox": [ + 169, + 839, + 826, + 925 + ], + "page_idx": 2 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 493, + 948, + 503, + 959 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "It is well established that the importance of projection matrices varies significantly across different layers and projections (Zhang et al., 2023a;d; Kopiczko et al., 2023). Our masking strategy enables global comparison of parameters and facilitates effective allocation of the parameter budget determined by the sparsity ratio. Notably, the masks for each task $t$ are calibrated only once and can be reused as needed.", + "bbox": [ + 169, + 103, + 823, + 174 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "After mask calibration, LoRI resets $B_{t}$ to zero and trains on the adaptation dataset $\\mathcal{D}_t$ , with updates restricted to the masked parameters. The LoRI adapter is expressed as $\\Delta_t = A_t(B_t \\odot M_t)$ . The algorithm of LoRI is detailed in Appendix B. In practice, the sparsity ratio $s$ can reach up to 90%, meaning that only a small fraction of parameters in matrices $B_{t}$ are updated, while the majority remain unchanged. This selective adaptation enables the model to focus on modifying the most critical parameters needed for specific tasks, while preserving the foundational knowledge encoded in the pretrained base model. 
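The calibration step in Eq. (2) reduces to one global quantile over the magnitudes of every $B_t$ matrix. A minimal sketch (NumPy, with hypothetical shapes and random stand-in values) is:

```python
import numpy as np

def calibrate_masks(B_mats, sparsity=0.9):
    """Global magnitude masking (sketch of Eq. 2): keep the top-(1 - s)
    fraction of entries, compared jointly across all layers/projections."""
    all_mags = np.concatenate([np.abs(B).ravel() for B in B_mats])
    tau = np.quantile(all_mags, sparsity)        # global threshold tau_t
    return [np.abs(B) >= tau for B in B_mats]    # boolean masks M_t

rng = np.random.default_rng(0)
# Stand-ins for calibrated B matrices from four layers (hypothetical shapes).
B_mats = [rng.normal(size=(8, 32)) for _ in range(4)]
masks = calibrate_masks(B_mats, sparsity=0.9)

kept = sum(m.sum() for m in masks) / sum(m.size for m in masks)
# Roughly 10% of entries survive across all layers combined, but the kept
# fraction per layer can vary: the budget is allocated globally.
assert abs(kept - 0.1) < 0.02
```

Because the threshold is global, layers whose $B_t$ entries are uniformly small can end up almost entirely masked, while more important layers retain a larger share of the budget.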
In the limiting case of a single task and zero sparsity, our method reduces to LoRA-FA (Zhang et al., 2023b), which has been shown to perform competitively with standard LoRA.", + "bbox": [ + 169, + 180, + 826, + 308 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "2.2 Reducing Interference in Adapter Merging via Orthogonality", + "text_level": 1, + "bbox": [ + 169, + 323, + 668, + 339 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Orthogonality of LoRI Adapters. A central challenge in adapter merging is parameter interference, where combining multiple adapters leads to degraded performance due to conflicting parameter updates. Given a set of trained LoRI adapters $\{\Delta_1,\Delta_2,\dots ,\Delta_T\}$ , the goal is to construct a unified model that combines knowledge from all tasks with minimal interference, as illustrated in Figure 2(b). Formally, we define the excess loss due to parameter interference for a specific task $t$ as:", + "bbox": [ + 169, + 349, + 823, + 434 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{I}_{t} = \mathcal{L}_{t}\left(W_{\text{merge}}\right) - \mathcal{L}_{t}\left(W_{0} + \alpha_{t} \Delta_{t}\right), \tag{3}\n$$\n", + "text_format": "latex", + "bbox": [ + 370, + 438, + 823, + 455 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $W_{\mathrm{merge}}$ is the merged model, $W_0$ is the pretrained weight matrix, $\Delta_t$ is the LoRI adapter for task $t$ , $\alpha_t \in \mathbb{R}$ is a scalar weight, and $\mathcal{L}_t$ is the loss function for task $t$ . A high $\mathcal{I}_t$ indicates significant interference.", + "bbox": [ + 169, + 459, + 823, + 503 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "LoRI mitigates this interference by leveraging approximate orthogonality, achieved by freezing the projection matrices $A_{t}$ as independent random matrices. 
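A quick numerical illustration of this claim (NumPy; the dimensions and the random stand-ins for trained sparse $B$ matrices are assumptions of the demo, not the paper's setup):

```python
import numpy as np

d_in, d_out, r = 1024, 1024, 32   # hypothetical dims with r << d_in

def lori_adapter(seed):
    # Each task draws its own Kaiming-uniform A; B is a random stand-in
    # for a trained matrix at 90% sparsity, purely for illustration.
    rng = np.random.default_rng(seed)
    bound = np.sqrt(6.0 / d_in)
    A = rng.uniform(-bound, bound, size=(d_in, r))
    B = rng.normal(size=(r, d_out)) * (rng.random((r, d_out)) < 0.1)
    return A @ B

D_s, D_t = lori_adapter(1), lori_adapter(2)
# Normalized Frobenius inner product between adapters of distinct tasks.
cos = abs(np.sum(D_s * D_t)) / (np.linalg.norm(D_s) * np.linalg.norm(D_t))
assert cos < 0.05   # near-orthogonal at realistic widths
```

At transformer-scale widths the normalized inner product is typically well below 1%, consistent with the asymptotic statement below.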
This design leads to the following property, whose proof is provided in Appendix C:", + "bbox": [ + 169, + 508, + 823, + 551 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Property 1. Let $A_s, A_t \in \mathbb{R}^{d_{in} \times r}$ be independent random matrices with i.i.d. entries drawn from a Kaiming Uniform distribution for distinct tasks $s \neq t$ . Let their corresponding LoRI adapters be $\Delta_s = A_s(B_s \odot M_s)$ and $\Delta_t = A_t(B_t \odot M_t)$ , where the trained matrices $(B_s \odot M_s)$ and $(B_t \odot M_t)$ have finite Frobenius norms. Under the condition that $r \ll d_{in}$ , as the input dimension $d_{in} \to \infty$ , the adapters are approximately orthogonal:", + "bbox": [ + 169, + 556, + 823, + 628 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\left\langle \Delta_{s}, \Delta_{t} \right\rangle_{F} \rightarrow 0 \quad \text{in probability}. \tag{4}\n$$\n", + "text_format": "latex", + "bbox": [ + 393, + 632, + 823, + 650 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We describe two merging methods: concatenated merging (weighted averaging) and linear merging (Task Arithmetic) (Ilharco et al., 2022), both of which exploit the approximate orthogonality of LoRIs.", + "bbox": [ + 169, + 661, + 823, + 705 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Concatenated Merging (Weighted Averaging). This method constructs the merged model by creating a weighted sum of individual task adapters. 
This is achieved by concatenating the weighted $A$ and masked $B$ matrices:", + "bbox": [ + 169, + 719, + 823, + 762 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nA^{\prime} = \left[ \alpha_{1} A_{1} \;\; \alpha_{2} A_{2} \;\; \dots \;\; \alpha_{T} A_{T} \right], \quad B^{\prime} = \left[ \left(B_{1} \odot M_{1}\right)^{\top}, \dots , \left(B_{T} \odot M_{T}\right)^{\top} \right]^{\top}, \tag{5}\n$$\n", + "text_format": "latex", + "bbox": [ + 241, + 768, + 823, + 797 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\alpha_{t} \in \mathbb{R}$ are scalar weights (e.g., uniform or task-prioritized). The final merged model is then formed by adding their product to the base model weights:", + "bbox": [ + 169, + 801, + 823, + 832 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nW_{\text{merge}} = W_{0} + A^{\prime} B^{\prime} = W_{0} + \sum_{t=1}^{T} \alpha_{t} A_{t}\left(B_{t} \odot M_{t}\right) = W_{0} + \sum_{t=1}^{T} \alpha_{t} \Delta_{t}. \tag{6}\n$$\n", + "text_format": "latex", + "bbox": [ + 263, + 837, + 823, + 876 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "By summing approximately orthogonal adapters, we ensure that the updates for each task occupy largely disjoint subspaces, thereby reducing interference (Ilharco et al., 2022; Ortiz-Jimenez et al., 2023; Xiong et al., 2024).", + "bbox": [ + 169, + 881, + 823, + 925 + ], + "page_idx": 3 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 493, + 946, + 504, + 959 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The reduction in interference can be explained by a theoretical sketch based on two key assumptions. 
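Before unpacking those assumptions, the algebraic difference between the two merging schemes can be checked directly. In the sketch below (NumPy, with hypothetical shapes and random stand-in factors), concatenated merging reproduces the weighted sum of adapters exactly, while summing the factors first picks up cross-terms:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, T = 64, 48, 4, 3   # hypothetical sizes, three tasks

# Stand-ins for per-task LoRI factors: fixed random A_t, sparse-trained B_t.
As = [rng.normal(size=(d_in, r)) for _ in range(T)]
Bs = [rng.normal(size=(r, d_out)) * (rng.random((r, d_out)) < 0.1)
      for _ in range(T)]
alphas = [1.0 / T] * T             # uniform merging weights

# Concatenated merging (Eqs. 5-6): block-concatenate, then multiply.
A_cat = np.concatenate([a * A for a, A in zip(alphas, As)], axis=1)
B_cat = np.concatenate(Bs, axis=0)
delta_concat = A_cat @ B_cat

# Identical to the weighted sum of individual adapters: no cross-terms.
delta_sum = sum(a * A @ B for a, A, B in zip(alphas, As, Bs))
assert np.allclose(delta_concat, delta_sum)

# Linear merging (Eq. 8): sum factors first, introducing A_s B_t cross-terms.
delta_linear = (sum(a * A for a, A in zip(alphas, As))
                @ sum(a * B for a, B in zip(alphas, Bs)))
assert not np.allclose(delta_linear, delta_sum)
```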
The first is the local linearity of the loss landscape (Li et al., 2018), which allows for a first-order Taylor approximation. The second is the gradient alignment assumption, formally expressed as $\nabla \mathcal{L}_t(W_0 + \alpha_t\Delta_t)\propto \Delta_t$ . This posits that at a task's solution, the direction of steepest descent is primarily aligned with the adapter updates already made for that task. Under these assumptions, the excess loss $\mathcal{I}_t$ is approximately the inner product of the gradient and the updates from the other tasks:", + "bbox": [ + 169, + 103, + 826, + 203 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\mathcal{I}_{t} \approx \left\langle \nabla \mathcal{L}_{t}\left(W_{0} + \alpha_{t} \Delta_{t}\right), \sum_{s \neq t} \alpha_{s} \Delta_{s} \right\rangle_{F} \propto \sum_{s \neq t} \alpha_{s} \left\langle \Delta_{t}, \Delta_{s} \right\rangle_{F}. \tag{7}\n$$\n", + "text_format": "latex", + "bbox": [ + 302, + 210, + 823, + 253 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Since Property 1 establishes that $\langle \Delta_t, \Delta_s \rangle_F \to 0$ for $s \neq t$ , the total interference loss becomes negligible: $\mathcal{I}_t \approx 0$ . This heuristic argument provides strong intuition for why concatenated merging is effective, which is then validated by our empirical results.", + "bbox": [ + 169, + 258, + 823, + 305 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Linear Merging (Task Arithmetic). 
Alternatively, the merged model can be formed by summing the $A_{t}$ and masked $B_{t}$ matrices independently before multiplication:", + "bbox": [ + 169, + 319, + 823, + 349 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nW_{\text{merge}} = W_{0} + \left(\sum_{t=1}^{T} \alpha_{t} A_{t}\right) \left(\sum_{t=1}^{T} \alpha_{t} \left(B_{t} \odot M_{t}\right)\right) = W_{0} + \sum_{s=1}^{T} \sum_{t=1}^{T} \alpha_{s} \alpha_{t} A_{s}\left(B_{t} \odot M_{t}\right). \tag{8}\n$$\n", + "text_format": "latex", + "bbox": [ + 209, + 354, + 823, + 397 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "While concatenated merging directly sums approximately orthogonal adapters, this linear merging approach introduces problematic cross-terms $\alpha_{s}\alpha_{t}A_{s}(B_{t}\odot M_{t})$ for $s\neq t$ . These terms cause interference because components like $\{A_s(B_t\odot M_t)\}_{t = 1}^T$ for a fixed $s$ are generally not mutually orthogonal. As a result, concatenated merging offers a cleaner and empirically more effective strategy for combining LoRI adapters.", + "bbox": [ + 169, + 404, + 823, + 479 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "2.3 Reducing Interference in Continual Learning via Sparsity", + "text_level": 1, + "bbox": [ + 169, + 494, + 642, + 510 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Safety-Preserving Adapters. For safety-critical applications, ensuring that new task adaptations do not compromise established safety behaviors is crucial. Therefore, each newly introduced adapter must preserve the base model's safety alignment. A straightforward approach to achieve this is to merge a safety LoRI adapter into the deployed model during every inference. However, as we will show in Section 3.4, this method may be insufficient for scenarios that demand strong safety guarantees. 
In such cases, as illustrated in Figure 2(c), a more reliable solution is to adopt a two-phase continual learning process for each LoRI adapter to reinforce safety:", + "bbox": [ + 169, + 520, + 826, + 633 + ], + "page_idx": 4 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. Safety Alignment Phase: Train a LoRI adapter on a curated safety dataset $\\mathcal{D}_{\\text{safety}}$ , yielding $\\Delta_{\\text{safety}} = A(B_{\\text{safety}} \\odot M_{\\text{safety}})$ .", + "2. Task Adaptation Phase: Fine-tune $\\Delta_{\\mathrm{safety}}$ on each task adaptation dataset $D_t, t = 1, 2, \\ldots, T$ , reusing the calibrated task-specific masks $M_t$ , resulting in safety-preserving adapters $\\Delta_t = A(B_t \\odot M_t)$ ." + ], + "bbox": [ + 189, + 643, + 823, + 728 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "This method does not require recalibrating masks for each task or performing multiple rounds of continual learning. Notably, we do not enforce non-overlapping masks $M_t \\cap M_{\\text{safety}} = \\emptyset$ . Enforcing such a constraint would require recalibrating masks after the safety alignment phase due to the reduced parameter space, and could potentially degrade performance on downstream tasks. The expected overlap between sparse masks with $90\\%$ sparsity is theoretically $1\\%$ . Empirically, we find that this expectation holds: the average overlap between task-specific masks is indeed $\\sim 1\\%$ , without explicitly enforcing non-overlap. This slight overlap allows important parameters to be shared across tasks, potentially enabling positive knowledge transfer.", + "bbox": [ + 169, + 737, + 826, + 867 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Catastrophic Forgetting. Continual learning models are vulnerable to catastrophic forgetting (Li & Hoiem, 2017; Dong et al., 2023; Luo et al., 2023), where updates for new tasks can overwrite and degrade previously learned knowledge. 
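The roughly 1% overlap figure quoted above follows from treating independently calibrated masks as random: two masks that each keep a fraction $(1-s)$ of entries coincide on a $(1-s)^2$ fraction in expectation. A quick simulation (hypothetical mask shape) bears this out:

```python
import numpy as np

rng = np.random.default_rng(0)
s = 0.9                      # sparsity ratio: each mask keeps 10% of entries
shape = (32, 4096)           # hypothetical B matrix shape

# Model two independently calibrated masks as random 10%-density masks.
m_safety = rng.random(shape) < (1 - s)
m_task = rng.random(shape) < (1 - s)

overlap = np.mean(m_safety & m_task)   # fraction updated by both tasks
assert abs(overlap - (1 - s) ** 2) < 0.005   # ~1%, matching (1 - s)^2
```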
Despite the slight overlap between", + "bbox": [ + 169, + 881, + 823, + 926 + ], + "page_idx": 4 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 493, + 946, + 504, + 959 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "task-specific masks, the sparsity in $B_{t}$ induced by $M_{t}$ enables LoRI to facilitate isolated parameter updates for safety alignment and task adaptation. As a result, LoRI minimizes cross-task interference and mitigates catastrophic forgetting in safety alignment.", + "bbox": [ + 169, + 103, + 823, + 148 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3 Experiments", + "text_level": 1, + "bbox": [ + 171, + 171, + 318, + 188 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3.1 Experimental Setup", + "text_level": 1, + "bbox": [ + 171, + 205, + 362, + 222 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Datasets. We conduct a series of experiments to evaluate LoRI's effectiveness on single-task and multi-task settings, including adapter merging and continual learning. We focus on four capabilities: (i) Natural Language Understanding (NLU): LoRI is trained on the aggregation of eight NLU datasets (Hu et al., 2023), including BoolQ (Clark et al., 2019), PIQA (Bisk et al., 2020), SocialIQA (Sap et al., 2019), ARC-Challenge (Clark et al., 2018), ARC-Easy (Clark et al., 2018), OpenBookQA (Mihaylov et al., 2018), HellaSwag (Zellers et al., 2019), and Winogrande (Sakaguchi et al., 2021). We evaluate accuracy on the individual test split for each dataset. (ii) Mathematical Reasoning (Math): LoRI is trained on the GSM8K (Cobbe et al., 2021) training split and evaluated on the GSM8K test split. (iii) Code Generation (Code): LoRI is trained on CodeAlpaca (Chaudhary, 2023) and evaluated using pass@1, pass@5, and pass@10 on HumanEval (Chen et al., 2021). 
(iv) Safety Alignment (Safety): LoRI is trained on Saferpaca (Bianchi et al., 2023), which extends Alpaca-Cleaned (Taori et al., 2023) with 2,000 safety instructions. Safety performance is assessed by measuring the refusal rate on harmful queries from HEx-PHI (Qi et al., 2023).", + "bbox": [ + 169, + 234, + 826, + 429 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Baselines. In single-task experiments, we compare LoRI with full fine-tuning (FFT), LoRA (Hu et al., 2021), and DoRA (Liu et al., 2024). Results for additional PEFT baselines, including VeRA (Kopiczko et al., 2023), IA3 (Liu et al., 2022), LoRA-FA (Zhang et al., 2023b), AdaLoRA (Zhang et al., 2023d), rsLoRA (Kalajdzievski, 2023), PiSSA (Meng et al., 2024), and LoRA+ (Hayou et al., 2024), are available in Appendix E.1. In merging experiments, we compare LoRI merging with several LoRA merging methods, including concatenated merging, linear merging (Ilharco et al., 2022), magnitude pruning, TIES-Merging (Yadav et al., 2023), and DARE (Yu et al., 2024). Magnitude pruning, TIES, and DARE are pruning-based approaches that apply sparsification to the $A$ and $B$ matrices before merging, based on a specified density. Magnitude pruning removes low-magnitude parameters; TIES-Merging further merges weights with consistent signs; and DARE performs random pruning followed by rescaling. For fair comparison, all baseline results are reproduced using a consistent experimental setup.", + "bbox": [ + 169, + 449, + 826, + 632 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Implementation Details. We use Llama-3-8B (Grattafiori et al., 2024) and Mistral-7B (Jiang et al., 2023) as base models. We conduct all experiments on 8 NVIDIA A5000 GPUs. To explore the impact of sparsity, we provide two variants of LoRI: LoRI-D, which uses dense $B$ matrices, and LoRI-S, which applies $90\\%$ sparsity to $B$ . Sparsity is implemented by masking the gradients of $B$ during backpropagation. 
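One way to realize this gradient masking (a framework-agnostic sketch with plain-NumPy SGD and hypothetical shapes, not the paper's actual implementation) is to zero the gradient of $B$ outside the calibrated mask at every optimizer step:

```python
import numpy as np

rng = np.random.default_rng(0)
r, d_out, lr = 8, 32, 0.1

B = np.zeros((r, d_out))                 # trainable factor, starts at zero
mask = rng.random((r, d_out)) < 0.1      # stand-in for calibrated mask M_t

def sgd_step(B, grad, mask, lr):
    # Masked update: gradients outside the mask are zeroed, so only the
    # selected ~10% of entries ever move away from zero.
    return B - lr * (grad * mask)

for _ in range(5):
    grad = rng.normal(size=(r, d_out))   # stand-in gradient for the demo
    B = sgd_step(B, grad, mask, lr)

assert np.all(B[~mask] == 0)             # entries outside the mask never move
```

In a deep-learning framework, the same effect is typically obtained by multiplying `B.grad` by the mask after backward and before the optimizer step.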
For optimal performance, we use the entire adaptation dataset as the calibration dataset for each task. Ablation results for calibration are presented in Section 3.5. For consistency, we use the same hyperparameters for PEFT baselines as for LoRI-D. For all adapter merging experiments, uniform weights $\\alpha_{t}$ are employed across all adapters. The weights $\\alpha_{t}$ are treated as hyperparameters, and their ablation study is detailed in Section 3.5. Detailed hyperparameter settings are provided in Appendix D.", + "bbox": [ + 169, + 650, + 826, + 805 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3.2 Single-Task Performance", + "text_level": 1, + "bbox": [ + 171, + 825, + 401, + 842 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 1 presents single-task performance on eight NLU benchmarks, while Table 2 reports single-task performance on the math, code, and safety benchmarks. Results for additional PEFT baselines are available in Appendix E.1. The rank for our experiments is set to $r = 32$ . We observed stable performance across different ranks, with additional results for $r = 64$ provided in Appendix E.2.", + "bbox": [ + 169, + 853, + 823, + 926 + ], + "page_idx": 5 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 493, + 948, + 504, + 959 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/dea191fa48023f37272a1a49db3c1212719759aabc6031117e5a8d2063f6b2fd.jpg", + "table_caption": [ + "Table 1: Performance comparison of different adaptation methods on eight NLU benchmarks using Llama-3 and Mistral with $r = 32$ . **Bold** indicates the best-performing method, and **underline** indicates the second-best." + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Method</td><td># Params (%)</td><td>BoolQ</td><td>PIQA</td><td>SIQA</td><td>ARC-c</td><td>ARC-e</td><td>OBQA</td><td>HellaS</td><td>WinoG</td><td>Avg.</td></tr>
<tr><td colspan=\"11\">Llama-3-8B</td></tr>
<tr><td>FFT</td><td>8.03G (100%)</td><td>73.8</td><td>86.8</td><td>77.6</td><td>76.7</td><td>87.6</td><td>84.1</td><td>93.2</td><td>85.1</td><td>83.1</td></tr>
<tr><td>LoRA</td><td>84M (1.03%)</td><td>76.3</td><td>89.8</td><td>82.7</td><td>83.4</td><td>91.7</td><td>88.4</td><td>95.8</td><td>88.7</td><td>87.1</td></tr>
<tr><td>DoRA</td><td>85M (1.05%)</td><td>75.9</td><td>89.8</td><td>82.7</td><td>83.5</td><td>93.2</td><td>87.9</td><td>95.3</td><td>88.2</td><td>87.1</td></tr>
<tr><td>LoRI-D</td><td>44M (0.54%)</td><td>76.4</td><td>89.0</td><td>82.7</td><td>84.2</td><td>93.6</td><td>88.5</td><td>95.9</td><td>87.9</td><td>87.3</td></tr>
<tr><td>LoRI-S</td><td>4.4M (0.05%)</td><td>75.2</td><td>89.2</td><td>82.8</td><td>83.8</td><td>92.6</td><td>88.4</td><td>95.2</td><td>87.5</td><td>86.8</td></tr>
<tr><td colspan=\"11\">Mistral-7B</td></tr>
<tr><td>FFT</td><td>7.24G (100%)</td><td>74.1</td><td>84.6</td><td>78.0</td><td>79.3</td><td>90.5</td><td>88.4</td><td>94.4</td><td>83.5</td><td>84.1</td></tr>
<tr><td>LoRA</td><td>84M (1.15%)</td><td>75.2</td><td>90.1</td><td>82.9</td><td>82.9</td><td>92.0</td><td>88.7</td><td>95.1</td><td>88.1</td><td>86.9</td></tr>
<tr><td>DoRA</td><td>85M (1.16%)</td><td>75.8</td><td>90.4</td><td>82.9</td><td>83.3</td><td>92.6</td><td>90.6</td><td>96.3</td><td>87.9</td><td>87.5</td></tr>
<tr><td>LoRI-D</td><td>44M (0.60%)</td><td>75.9</td><td>90.6</td><td>83.0</td><td>83.6</td><td>91.9</td><td>88.4</td><td>95.9</td><td>87.4</td><td>87.1</td></tr>
<tr><td>LoRI-S</td><td>4.4M (0.06%)</td><td>74.0</td><td>90.1</td><td>82.6</td><td>82.6</td><td>91.5</td><td>90.8</td><td>95.5</td><td>87.5</td><td>86.8</td></tr>
</table>
", + "bbox": [ + 174, + 154, + 823, + 320 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/574b386d7041c8bb5c61c6f9568f32dc489aad22da229933747d96a0c22a481e.jpg", + "table_caption": [ + "Table 2: Performance comparison of different adaptation methods on GSM8K (math), HumanEval (code), and HEx-PHI (safety) benchmarks using Llama-3 and Mistral with $r = 32$ . Bold indicates the best-performing method, and underline indicates the second-best." + ], + "table_footnote": [], + "table_body": "
<table><tr><td rowspan=\"2\">Method</td><td rowspan=\"2\"># Params (%)</td><td rowspan=\"2\">GSM8K</td><td colspan=\"3\">HumanEval</td><td rowspan=\"2\">HEx-PHI</td></tr>
<tr><td>Pass@1</td><td>Pass@5</td><td>Pass@10</td></tr>
<tr><td colspan=\"7\">Llama-3-8B</td></tr>
<tr><td>FFT</td><td>8.03G (100%)</td><td>58.8</td><td>30.5</td><td>39.3</td><td>41.7</td><td>94.8</td></tr>
<tr><td>LoRA</td><td>84M (1.03%)</td><td>64.4</td><td>34.7</td><td>46.4</td><td>50.8</td><td>91.6</td></tr>
<tr><td>DoRA</td><td>85M (1.05%)</td><td>65.4</td><td>33.1</td><td>44.0</td><td>48.6</td><td>93.6</td></tr>
<tr><td>LoRI-D</td><td>44M (0.54%)</td><td>63.2</td><td>43.2</td><td>57.6</td><td>63.2</td><td>92.8</td></tr>
<tr><td>LoRI-S</td><td>4.4M (0.05%)</td><td>62.7</td><td>41.3</td><td>54.4</td><td>59.6</td><td>93.8</td></tr>
<tr><td colspan=\"7\">Mistral-7B</td></tr>
<tr><td>FFT</td><td>7.24G (100%)</td><td>55.5</td><td>29.1</td><td>38.5</td><td>40.4</td><td>94.1</td></tr>
<tr><td>LoRA</td><td>84M (1.15%)</td><td>57.8</td><td>33.8</td><td>42.4</td><td>45.3</td><td>91.9</td></tr>
<tr><td>DoRA</td><td>85M (1.16%)</td><td>57.5</td><td>33.7</td><td>42.6</td><td>46.8</td><td>95.3</td></tr>
<tr><td>LoRI-D</td><td>44M (0.60%)</td><td>58.0</td><td>33.8</td><td>42.0</td><td>45.1</td><td>94.7</td></tr>
<tr><td>LoRI-S</td><td>4.4M (0.06%)</td><td>57.1</td><td>33.7</td><td>43.6</td><td>48.1</td><td>95.9</td></tr>
</table>
", + "bbox": [ + 233, + 409, + 767, + 604 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "While full fine-tuning (FFT) updates all model parameters, LoRA and DoRA reduce the number of trainable parameters to approximately $1\\%$ . LoRI-D further reduces this to about $0.5\\%$ by freezing matrices $A$ , and LoRI-S pushes this reduction to $0.05\\%$ by applying $90\\%$ sparsity to matrices $B$ , achieving a $95\\%$ reduction in trainable parameters compared to LoRA. Despite tuning fewer parameters, LoRI-D and LoRI-S achieve performance comparable to - and even better than - LoRA and DoRA on NLU, math, code, and safety tasks. LoRI-D generally outperforms LoRI-S slightly, due to the extremely limited parameter budget in LoRI-S. Remarkably, LoRI-D and LoRI-S consistently outperform FFT, LoRA, and DoRA on code generation tasks. On HumanEval with Llama-3, LoRI-D achieves a pass@10 score of $63.2\\%$ , outperforming LoRA by $24.4\\%$ . LoRI-S achieves $59.6\\%$ pass@10, exceeding LoRA by $17.3\\%$ .", + "bbox": [ + 169, + 651, + 826, + 805 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "The strong performance of LoRI-D suggests that effective adaptation can be achieved without updating $A$ , while the strong performance of LoRI-S indicates that $B$ contains substantial parameter redundancy. LoRI's performance gains are attributed to the principled use of sparsity, which serves as a strong regularizer during adaptation. Additionally, LoRI preserves latent task-specific knowledge embedded in the pretrained model. This supports the view that supervised fine-tuning (SFT) primarily unlocks capabilities already present in pretrained models, rather than introducing new ones, which is consistent with findings from Liu et al. (2024); Yu et al. 
(2024).", + "bbox": [ + 169, + 811, + 826, + 925 + ], + "page_idx": 6 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 491, + 946, + 504, + 959 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/6f8e6b8bbcc547d245480d4bd82a15987de386e346f1158ef17702d74fdc3063.jpg", + "table_caption": [ + "Table 3: Comparison of merging methods for combining four adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Llama-3-8B, rank $r = 32$ . **Bold** indicates the best-performing method, and **underline** indicates the second-best." + ], + "table_footnote": [], + "table_body": "
<table><tr><td rowspan=\"2\">Merging</td><td rowspan=\"2\">Adaptation</td><td rowspan=\"2\">NLU</td><td rowspan=\"2\">GSM8K</td><td colspan=\"3\">HumanEval</td><td rowspan=\"2\">HEx-PHI</td></tr>
<tr><td>Pass@1</td><td>Pass@5</td><td>Pass@10</td></tr>
<tr><td>Single-Task</td><td>LoRI-D</td><td>87.3</td><td>63.2</td><td>43.2</td><td>57.6</td><td>63.2</td><td>92.8</td></tr>
<tr><td>Concat</td><td>LoRA</td><td>85.0</td><td>57.8</td><td>13.0</td><td>20.0</td><td>22.3</td><td>84.4</td></tr>
<tr><td>Linear</td><td>LoRA</td><td>84.8</td><td>54.1</td><td>14.2</td><td>20.8</td><td>23.3</td><td>79.4</td></tr>
<tr><td>Magnitude</td><td>LoRA</td><td>81.9</td><td>50.3</td><td>24.1</td><td>36.7</td><td>42.4</td><td>74.4</td></tr>
<tr><td>TIES</td><td>LoRA</td><td>72.6</td><td>24.0</td><td>32.5</td><td>46.3</td><td>51.7</td><td>77.8</td></tr>
<tr><td>DARE</td><td>LoRA</td><td>79.1</td><td>48.9</td><td>34.1</td><td>48.7</td><td>53.5</td><td>74.1</td></tr>
<tr><td>Concat</td><td>LoRI-D</td><td>83.2</td><td>55.8</td><td>40.5</td><td>56.9</td><td>62.2</td><td>86.6</td></tr>
<tr><td>Linear</td><td>LoRI-D</td><td>82.5</td><td>53.8</td><td>40.9</td><td>54.9</td><td>60.3</td><td>85.9</td></tr>
<tr><td>Concat</td><td>LoRI-S</td><td>81.2</td><td>45.2</td><td>34.3</td><td>48.7</td><td>54.0</td><td>84.7</td></tr>
<tr><td>Linear</td><td>LoRI-S</td><td>79.1</td><td>41.3</td><td>23.2</td><td>36.6</td><td>42.3</td><td>78.8</td></tr>
</table>
", + "bbox": [ + 202, + 183, + 797, + 353 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "3.3 Adapter Merging", + "text_level": 1, + "bbox": [ + 171, + 380, + 344, + 398 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We consider four heterogeneous tasks for LoRA and LoRI merging: NLU, math, code, and safety. This setting is generally more challenging than merging homogeneous adapters, such as merging multiple NLU adapters. Table 3 presents results for merging LoRAs and LoRIs on these four tasks. For LoRI, we apply concatenated and linear merging to the LoRI-D and LoRI-S variants. Pruning-based methods such as magnitude pruning, TIES, and DARE are not applied to LoRI, since these methods will prune the $A$ matrices as LoRI already sparsifies $B$ , resulting in an inconsistent pruning scheme across $A$ and $B$ . Additional results, including experiments on merging three adapters and evaluations of pruning-based methods on LoRI, are provided in Appendix E.4 and E.5.", + "bbox": [ + 169, + 407, + 826, + 536 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "As shown in Table 3, directly merging LoRAs results in substantial performance degradation, particularly for code generation and safety alignment. Although pruning-based methods (e.g., DARE, TIES) improve code performance, they often compromise accuracy on other tasks. In contrast, LoRI achieves consistently strong performance across all tasks.", + "bbox": [ + 169, + 540, + 823, + 598 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Concatenated merging with LoRI-D achieves the best overall performance, closely matching the single-task baseline, which indicates minimal interference between LoRI adapters. For instance, it achieves $62.2\\%$ pass@10 on HumanEval and an $86.6\\%$ refusal rate on HExPHI. Despite using only $5\\%$ of the parameters of LoRA, LoRI-S retains competitive performance. 
Notably, on code and safety tasks, concatenated merging with LoRI-S outperforms all LoRA merging methods.", + "bbox": [ + 169, + 603, + 823, + 688 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Linear merging with LoRI also performs competitively, though it lags slightly behind concatenated merging due to cross-term interactions that introduce some interference. LoRI eliminates the need for manual selection of merging methods: simple concatenated merging yields strong results. The choice between LoRI-D and LoRI-S can then be guided by the desired trade-off between performance and parameter efficiency. We also note an important trade-off between code generation performance and other domains during adapter merging, a phenomenon further explored in Section 3.5.", + "bbox": [ + 169, + 693, + 826, + 792 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "3.4 Continual Learning", + "text_level": 1, + "bbox": [ + 171, + 811, + 362, + 829 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "While merging adapters enables multi-task capabilities, it falls short of providing robust safety alignment in scenarios that demand strong safety guarantees. As shown in Table 3, the highest refusal rate on HEx-PHI achieved through LoRA or LoRI merging is $86.6\\%$ . 
To address this limitation, we adopt a two-phase training process: first, a safety adapter is trained on the safety alignment dataset Saferpaca; then, it is individually adapted to each downstream task, including NLU, math, and code.", + "bbox": [ + 169, + 839, + 826, + 925 + ], + "page_idx": 7 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 493, + 948, + 504, + 959 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/8c1d20c92d0e7590d20654db0d23eee565a021dbcb006488d103caa7576dd0a8.jpg", + "image_caption": [ + "Figure 3: Continual learning results from safety to NLU, math, and code domains. Results for NLU are averaged over eight tasks. GSM8K accuracy, HumanEval pass@10, and HEx-PHI refusal rate are reported individually. Base model: Llama-3-8B, rank $r = 32$ ." + ], + "image_footnote": [], + "bbox": [ + 181, + 104, + 823, + 252 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/162d9ff1efefe62f414fe64facb19cba51d7cd7f30e0907041057071f5acf292.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 200, + 321, + 485, + 455 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/a9587bb9a047f741a1aad793265a30edeb10f5c174f974a01bc4155d2c385d2f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 504, + 318, + 795, + 455 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/4736b8e087c9df69fffd2d504fa1bf7f7e710aab4210389b186572a533c25260.jpg", + "image_caption": [ + "(a) Effect of calibration steps.", + "(c) Effect of mask granularities.", + "Figure 4: Ablation studies across different settings. Base model: Llama-3-8B, rank $r = 32$ . Additional ablation studies are provided in Appendix F." 
+ ], + "image_footnote": [], + "bbox": [ + 189, + 479, + 475, + 613 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/31335e88f00e33e29f1b10efff9ce994dbee1c672b25d3686fd244d2b5189c0e.jpg", + "image_caption": [ + "(b) Sparsity ratios across layers and projections.", + "(d) Effect of merging weights." + ], + "image_footnote": [], + "bbox": [ + 519, + 479, + 808, + 613 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Figure 3 presents results from these continual learning experiments. LoRA exhibits severe catastrophic forgetting on safety alignment – particularly in the safety $\\rightarrow$ NLU experiment – likely due to the large size of the NLU training split ( $\\sim 170\\mathrm{k}$ examples). Among all methods, LoRI-S achieves the best preservation of safety alignment, even outperforming single-task LoRI-D. This is due to its $90\\%$ sparsity in the $B$ matrices, which enables isolated parameter updates between the initial safety alignment and subsequent task adaptations. LoRI-D also shows some resistance to forgetting, benefiting from frozen $A$ matrices. For task adaptation, LoRI-D generally outperforms LoRI-S, as the latter's aggressive sparsity limits its adaptation capacity. Overall, LoRI offers a lightweight and effective approach to building safety adapters that preserve alignment while supporting adaptation to downstream tasks.", + "bbox": [ + 169, + 698, + 826, + 851 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "3.5 Ablation Studies", + "text_level": 1, + "bbox": [ + 171, + 869, + 339, + 883 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Calibration Steps. Calibration steps refer to the number of update steps used to generate sparse masks for each task. 
Figure 4(a) shows how performance of LoRI-S changes with", + "bbox": [ + 169, + 895, + 823, + 925 + ], + "page_idx": 8 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 493, + 948, + 503, + 958 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "different numbers of calibration steps on math and code tasks. We observe that performance generally improves as the number of calibration steps increases. Since the masks only need to be calibrated once per task and can be reused, we use the entire adaptation dataset as the calibration dataset to achieve the best performance.", + "bbox": [ + 169, + 103, + 823, + 161 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Sparsity Ratio. We use model-wise masks in our experiments that retain the highest-magnitude parameters across all layers and projections. Figure 4(b) presents the sparsity ratios of different projection types (e.g., up, down, key, value) across layers under a $90\\%$ sparsity on GSM8K. We observe that feedforward (FFN) projections tend to retain more parameters (i.e., lower sparsity) than self-attention projections, indicating they are more critical for adaptation. Additionally, the top layers are less sparse than lower layers, suggesting that the top layers play a more important role in adaptation.", + "bbox": [ + 169, + 175, + 826, + 276 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Mask Granularity. We compare five levels of mask granularity under $90\\%$ sparsity on GSM8K, as shown in Figure 4(c). We compare module-wise, projection-wise, layer-wise, and matrix-wise masking against our model-wise masking, where parameters are selected within progressively smaller scopes. We find that coarse-grained masking (e.g., model-wise) yields the best performance, while fine-grained masking (e.g., matrix-wise) results in degradation. 
This suggests that global magnitude-based selection enables better parameter allocation, as the importance of projection matrices varies across the model.", + "bbox": [ + 169, + 287, + 826, + 388 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Merging Weights. We adopt uniform weights across all adapters for adapter merging, rather than task-specific weights, as we do not wish to prioritize any individual task. Figure 4(d) shows the effect of different merging weights (0.2, 0.3, 0.4) for concatenated merging with LoRI-S. We observe that LoRI is moderately sensitive to merging weights, with a noticeable trade-off between performance on code tasks and other domains. We adopt 0.3 for all adapters in LoRI-S merging, as it offers a balanced performance across domains.", + "bbox": [ + 169, + 401, + 826, + 488 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "4 Conclusion", + "text_level": 1, + "bbox": [ + 171, + 506, + 308, + 523 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "In this work, we introduced LoRI, a simple yet effective approach to parameter-efficient fine-tuning (PEFT) that substantially reduces trainable parameters while minimizing cross-task interference. By freezing the projection matrices $A$ as random projections and sparsifying $B$ using task-specific masks, LoRI achieves strong single-task performance across diverse domains – including natural language understanding, mathematical reasoning, code generation, and safety alignment – while reducing trainable parameters by up to $95\\%$ compared to LoRA. Furthermore, LoRI enables training-free adapter merging with minimal performance degradation, and supports continual learning with significantly reduced catastrophic forgetting. It also provides a lightweight approach to building safety adapters that preserve the safety alignment of the base model.", + "bbox": [ + 169, + 537, + 826, + 679 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Future Work. 
We identify several promising avenues for extending this work. While LoRI currently leverages unstructured magnitude-based sparsity, future research can explore structured sparsity patterns – such as block sparsity, head pruning, or group-wise masking – which may offer better hardware compatibility. Additionally, although this study focuses on LLMs, the core design of LoRI is modality-agnostic. Extending LoRI to diffusion and vision-language models for multi-modal generation is a promising direction.", + "bbox": [ + 169, + 691, + 826, + 779 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Acknowledgements", + "text_level": 1, + "bbox": [ + 171, + 797, + 357, + 816 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "This material is based upon work partially supported by the NSF Grant No. 2229885 (NSF Institute for Trustworthy AI in Law and Society, TRAILS). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.", + "bbox": [ + 169, + 829, + 825, + 887 + ], + "page_idx": 9 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 488, + 946, + 509, + 960 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 173, + 102, + 274, + 117 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Federico Bianchi, Mirac Suzgun, Giuseppe Attanasio, Paul Röttger, Dan Jurafsky, Tatsunori Hashimoto, and James Zou. Safety-tuned llamas: Lessons from improving the safety of large language models that follow instructions. arXiv preprint arXiv:2309.07875, 2023.", + "Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. Piqa: Reasoning about physical commonsense in natural language. 
In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 7432-7439, 2020.", + "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33: 1877-1901, 2020.", + "Rich Caruana. Multitask learning. Machine learning, 28:41-75, 1997.", + "Sahil Chaudhary. Code alpaca: An instruction-following llama model for code generation, 2023.", + "Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.", + "Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1-113, 2023.", + "Alexandra Chronopoulou, Matthew E Peters, Alexander Fraser, and Jesse Dodge. *Adaptersoup: Weight averaging to improve generalization of pretrained language models.* arXiv preprint arXiv:2302.07027, 2023.", + "Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. Boolq: Exploring the surprising difficulty of natural yes/no questions. arXiv preprint arXiv:1905.10044, 2019.", + "Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018.", + "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. 
arXiv preprint arXiv:2110.14168, 2021.", + "Ning Ding, Xingtai Lv, Qiaosen Wang, Yulin Chen, Bowen Zhou, Zhiyuan Liu, and Maosong Sun. Sparse low-rank adaptation of pre-trained language models. arXiv preprint arXiv:2311.11696, 2023.", + "Guanting Dong, Hongyi Yuan, Keming Lu, Chengpeng Li, Mingfeng Xue, Dayiheng Liu, Wei Wang, Zheng Yuan, Chang Zhou, and Jingren Zhou. How abilities in large language models are affected by supervised fine-tuning data composition. arXiv preprint arXiv:2310.05492, 2023.", + "Ziqi Gao, Qichao Wang, Aochuan Chen, Zijing Liu, Bingzhe Wu, Liang Chen, and Jia Li. Parameter-efficient fine-tuning with discrete fourier transform. arXiv preprint arXiv:2405.03003, 2024.", + "Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.", + "Han Guo, Philip Greengard, Eric P Xing, and Yoon Kim. Lq-lora: Low-rank plus quantized matrix decomposition for efficient language model finetuning. arXiv preprint arXiv:2311.12023, 2023." + ], + "bbox": [ + 171, + 125, + 825, + 924 + ], + "page_idx": 10 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 488, + 946, + 506, + 959 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Soufiane Hayou, Nikhil Ghosh, and Bin Yu. Lora+: Efficient low rank adaptation of large models. arXiv preprint arXiv:2402.12354, 2024.", + "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 
1026-1034, 2015.", + "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In International conference on machine learning, pp. 2790-2799. PMLR, 2019.", + "Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.", + "Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, and Roy Ka-Wei Lee. Llm-adapters: An adapter family for parameter-efficient fine-tuning of large language models. arXiv preprint arXiv:2304.01933, 2023.", + "Chengsong Huang, Qian Liu, Bill Yuchen Lin, Tianyu Pang, Chao Du, and Min Lin. Lorahub: Efficient cross-task generalization via dynamic lora composition. arXiv preprint arXiv:2307.13269, 2023.", + "Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. Editing models with task arithmetic. arXiv preprint arXiv:2212.04089, 2022.", + "Leonardo Iurada, Marco Ciccone, and Tatiana Tommasi. Efficient model editing with task-localized sparse fine-tuning. arXiv preprint arXiv:2504.02620, 2025.", + "Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023.", + "Damjan Kalajdzievski. A rank stabilization scaling factor for fine-tuning with lora. arXiv preprint arXiv:2312.03732, 2023.", + "Tatsuya Konishi, Mori Kurokawa, Chihiro Ono, Zixuan Ke, Gyuhak Kim, and Bing Liu. Parameter-level soft-masking for continual learning. In International Conference on Machine Learning, pp. 17492-17505. PMLR, 2023.", + "Dawid J Kopiczko, Tijmen Blankevoort, and Yuki M Asano. 
Vera: Vector-based random matrix adaptation. arXiv preprint arXiv:2310.11454, 2023.", + "Namhoon Lee, Thalaiyasingam Ajanthan, and Philip HS Torr. Snip: Single-shot network pruning based on connection sensitivity. arXiv preprint arXiv:1810.02340, 2018.", + "Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691, 2021.", + "Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. Advances in neural information processing systems, 31, 2018.", + "Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190, 2021.", + "Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12):2935-2947, 2017.", + "Zujie Liang, Feng Wei, Yin Jie, Yuxi Qian, Zhenghong Hao, and Bing Han. Prompts can play lottery tickets well: Achieving lifelong information extraction via lottery prompt tuning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 277-292, 2023." + ], + "bbox": [ + 171, + 102, + 825, + 925 + ], + "page_idx": 11 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 488, + 946, + 509, + 959 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin A Raffel. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. Advances in Neural Information Processing Systems, 35:1950-1965, 2022.", + "Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, and Min-Hung Chen. 
Dora: Weight-decomposed low-rank adaptation. In *Forty-first International Conference on Machine Learning*, 2024.", + "Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. arXiv preprint arXiv:2110.07602, 2021.", + "David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. Advances in neural information processing systems, 30, 2017.", + "Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, and Yue Zhang. An empirical study of catastrophic forgetting in large language models during continual fine-tuning. arXiv preprint arXiv:2308.08747, 2023.", + "Arun Mallya and Svetlana Lazebnik. Packnet: Adding multiple tasks to a single network by iterative pruning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 7765-7773, 2018.", + "Michael S Matena and Colin A Raffel. Merging models with fisher-weighted averaging. Advances in Neural Information Processing Systems, 35:17703-17716, 2022.", + "Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In *Psychology of learning and motivation*, volume 24, pp. 109-165. Elsevier, 1989.", + "Fanxu Meng, Zhaohui Wang, and Muhan Zhang. Pissa: Principal singular values and singular vectors adaptation of large language models. Advances in Neural Information Processing Systems, 37:121038-121072, 2024.", + "Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. arXiv preprint arXiv:1809.02789, 2018.", + "Guillermo Ortiz-Jimenez, Alessandro Favero, and Pascal Frossard. Task arithmetic in the tangent space: Improved editing of pre-trained models. 
Advances in Neural Information Processing Systems, 36:66727-66754, 2023.", + "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744, 2022.", + "Ashwinee Panda, Berivan Isik, Xiangyu Qi, Sanmi Koyejo, Tsachy Weissman, and Prateek Mittal. Lottery ticket adaptation: Mitigating destructive interference in llms. arXiv preprint arXiv:2406.16797, 2024.", + "Jonas Pfeiffer, Aishwarya Kamath, Andreas Rückle, Kyunghyun Cho, and Iryna Gurevych. Adapterfusion: Non-destructive task composition for transfer learning. arXiv preprint arXiv:2005.00247, 2020.", + "Akshara Prabhakar, Yuanzhi Li, Karthik Narasimhan, Sham Kakade, Eran Malach, and Samy Jelassi. Lora soups: Merging loras for practical skill composition tasks. arXiv preprint arXiv:2410.13025, 2024.", + "Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson. Fine-tuning aligned language models compromises safety, even when users do not intend to! arXiv preprint arXiv:2310.03693, 2023." + ], + "bbox": [ + 171, + 102, + 825, + 924 + ], + "page_idx": 12 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Vinay Venkatesh Ramasesh, Aitor Lewkowycz, and Ethan Dyer. Effect of scale on catastrophic forgetting in neural networks. In International Conference on Learning Representations, 2021.", + "David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy Lillicrap, and Gregory Wayne. Experience replay for continual learning. 
Advances in neural information processing systems, 32, 2019.", + "Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.", + "Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. Communications of the ACM, 64(9): 99-106, 2021.", + "Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. Socialiqa: Commonsense reasoning about social interactions. arXiv preprint arXiv:1904.09728, 2019.", + "Ozan Sener and Vladlen Koltun. Multi-task learning as multi-objective optimization. Advances in neural information processing systems, 31, 2018.", + "Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. Advances in neural information processing systems, 30, 2017.", + "George Stoica, Pratik Ramesh, Boglarka Ecsedi, Leshem Choshen, and Judy Hoffman. Model merging with svd to tie the knots. arXiv preprint arXiv:2410.19735, 2024.", + "Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Stanford alpaca: An instruction-following llama model, 2023.", + "Chunlin Tian, Zhan Shi, Zhijiang Guo, Li Li, and Cheng-Zhong Xu. Hydralora: An asymmetric lora architecture for efficient fine-tuning. Advances in Neural Information Processing Systems, 37:9565-9584, 2024.", + "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.", + "Hanqing Wang, Bowen Ping, Shuo Wang, Xu Han, Yun Chen, Zhiyuan Liu, and Maosong Sun. Lora-flow: Dynamic lora fusion for large language models in generative tasks. 
arXiv preprint arXiv:2402.11455, 2024a.", + "Liyuan Wang, Xingxing Zhang, Hang Su, and Jun Zhu. A comprehensive survey of continual learning: theory, method and application. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024b.", + "Xiao Wang, Tianze Chen, Qiming Ge, Han Xia, Rong Bao, Rui Zheng, Qi Zhang, Tao Gui, and Xuanjing Huang. Orthogonal subspace learning for language model continual learning. arXiv preprint arXiv:2310.14152, 2023.", + "Tongtong Wu, Massimo Caccia, Zhuang Li, Yuan Fang Li, Guilin Qi, and Gholamreza Haffari. Pretrained language model in continual learning: A comparative study. In International Conference on Learning Representations 2022. OpenReview, 2022.", + "Xun Wu, Shaohan Huang, and Furu Wei. Mixture of lora experts. arXiv preprint arXiv:2404.13628, 2024.", + "Feng Xiong, Runxi Cheng, Wang Chen, Zhanqiu Zhang, Yiwen Guo, Chun Yuan, and Ruifeng Xu. Multi-task model merging via adaptive weight disentanglement. arXiv preprint arXiv:2411.18729, 2024." + ], + "bbox": [ + 171, + 102, + 825, + 924 + ], + "page_idx": 13 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 488, + 946, + 509, + 959 + ], + "page_idx": 13 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Prateek Yadav, Derek Tam, Leshem Choshen, Colin A Raffel, and Mohit Bansal. Ties-merging: Resolving interference when merging models. Advances in Neural Information Processing Systems, 36:7093-7115, 2023.", + "Le Yu, Bowen Yu, Haiyang Yu, Fei Huang, and Yongbin Li. Language models are super mario: Absorbing abilities from homologous models as a free lunch. In Forty-first International Conference on Machine Learning, 2024.", + "Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? 
arXiv preprint arXiv:1905.07830, 2019.", + "Feiyu Zhang, Liangzhi Li, Junhao Chen, Zhouqiang Jiang, Bowen Wang, and Yiming Qian. Increlora: Incremental parameter allocation method for parameter-efficient fine-tuning. arXiv preprint arXiv:2308.12043, 2023a.", + "Longteng Zhang, Lin Zhang, Shaohuai Shi, Xiaowen Chu, and Bo Li. Lora-fa: Memory-efficient low-rank adaptation for large language models fine-tuning. arXiv preprint arXiv:2308.03303, 2023b.", + "Mingyang Zhang, Hao Chen, Chunhua Shen, Zhen Yang, Linlin Ou, Xinyi Yu, and Bohan Zhuang. Loraprune: Pruning meets low-rank parameter-efficient fine-tuning. arXiv preprint arXiv:2305.18403, 2023c.", + "Qingru Zhang, Minshuo Chen, Alexander Bukharin, Nikos Karampatziakis, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao. Adalora: Adaptive budget allocation for parameter-efficient fine-tuning. arXiv preprint arXiv:2303.10512, 2023d.", + "Hongyun Zhou, Xiangyu Lu, Wang Xu, Conghui Zhu, Tiejun Zhao, and Muyun Yang. Lora-drop: Efficient lora parameter pruning based on output evaluation. arXiv preprint arXiv:2402.07721, 2024.", + "Jiacheng Zhu, Kristjan Greenewald, Kimia Nadjahi, Haitz Sáez De Ocariz Borde, Rickard Brüel Gabrielsson, Leshem Choshen, Marzyeh Ghassemi, Mikhail Yurochkin, and Justin Solomon. Asymmetry in low-rank adapters of foundation models. arXiv preprint arXiv:2402.16842, 2024." + ], + "bbox": [ + 171, + 102, + 825, + 559 + ], + "page_idx": 14 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "A Related Works", + "text_level": 1, + "bbox": [ + 171, + 101, + 341, + 117 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Parameter-Efficient Fine-Tuning. 
Parameter-efficient fine-tuning (PEFT) methods for LLMs (Houlsby et al., 2019; Pfeiffer et al., 2020; Li & Liang, 2021; Lester et al., 2021; Liu et al., 2021; Hu et al., 2021) have received increasing attention in recent years. Among them, LoRA (Hu et al., 2021), which introduces trainable low-rank matrices, has become one of the most widely adopted PEFT methods due to its strong performance and efficiency. LoRI is motivated by reducing parameter redundancy in LoRA through an asymmetric design: we freeze the projection matrices $A$ and enforce sparsity on the matrices $B$ . Our work is closely related to several lines of research. In terms of parameter efficiency, our goal is shared by methods such as IA3 (Liu et al., 2022), VeRA (Kopiczko et al., 2023), and FourierFT (Gao et al., 2024). More specifically, our approach builds on the concept of asymmetric LoRA variants, which has been explored in works like LoRA-FA (Zhang et al., 2023b), AsymmetryLoRA (Zhu et al., 2024), and HydraLoRA (Tian et al., 2024). However, LoRI is distinct from these works by uniquely combining frozen $A$ with sparsely updated $B$ . This targeted, asymmetric pruning of only the $B$ matrices also differentiates our method from general LoRA pruning techniques like Loraprune (Zhang et al., 2023c), LoRADrop (Zhou et al., 2024), and SoRA (Ding et al., 2023), as well as SVD-based approaches such as AdaLoRA (Zhang et al., 2023d) and PiSSA (Meng et al., 2024).", + "bbox": [ + 169, + 133, + 826, + 372 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Model Merging. Achieving multi-task capabilities typically involves training on a mixture of diverse task datasets (Caruana, 1997; Sener & Koltun, 2018), which is often prohibitively expensive in time and compute. As an alternative, model merging has gained attention for combining multiple task-specific models into a single model (Matena & Raffel, 2022; Ilharco et al., 2022; Yadav et al., 2023; Yu et al., 2024). 
Fisher Merging (Matena & Raffel, 2022) uses weights from the Fisher information matrix to combine parameters, while Task Arithmetic (Ilharco et al., 2022) employs predefined scaling factors. TIES-Merging (Yadav et al., 2023) prunes low-magnitude parameters and merges those with consistent signs, and DARE (Yu et al., 2024) applies random pruning with rescaling. However, identifying the optimal merging method often requires trial and error. More recently, there has been growing interest in merging task-specific LoRA adapters (Chronopoulou et al., 2023; Huang et al., 2023; Wu et al., 2024; Wang et al., 2024a; Prabhakar et al., 2024; Stoica et al., 2024), often utilizing Mixture-of-Experts (MoE) architectures. Nonetheless, these methods typically require additional training to coordinate the adapters effectively. In contrast, LoRI eliminates the need for manual selection of merging methods or additional training. By ensuring approximate orthogonality between adapters, LoRI minimizes interference and preserves task-specific performance.", + "bbox": [ + 169, + 388, + 826, + 627 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Catastrophic Forgetting. Catastrophic forgetting is a fundamental challenge in continual learning (McCloskey & Cohen, 1989; Ramasesh et al., 2021; Liang et al., 2023; Wang et al., 2024b), where neural networks struggle to retain previously learned knowledge when adapting to new tasks. Wu et al. (2022) analyzed this phenomenon using layer-wise and task-wise probing to assess knowledge retention across tasks. Several studies (Dong et al., 2023; Luo et al., 2023) have empirically examined catastrophic forgetting in the continual fine-tuning of LLMs. To mitigate catastrophic forgetting, various approaches have been proposed. Rehearsal-based methods (Rolnick et al., 2019; Shin et al., 2017) store or generate past data to reinforce prior knowledge during training. 
Parameter isolation methods (Rusu et al., 2016; Mallya & Lazebnik, 2018; Konishi et al., 2023; Panda et al., 2024) allocate separate subnetworks or sparsely mask parameters for different tasks to prevent interference. Additionally, O-LoRA (Wang et al., 2023) learns tasks in distinct low-rank subspaces while ensuring orthogonality between them. LoRI falls under the category of parameter isolation methods, leveraging sparse task-specific masks to mitigate catastrophic forgetting during continual learning.", + "bbox": [ + 169, + 643, + 826, + 853 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "B Algorithm of LoRI", + "text_level": 1, + "bbox": [ + 171, + 875, + 375, + 893 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "The full procedure of LoRI is summarized in Algorithm 1.", + "bbox": [ + 171, + 907, + 591, + 925 + ], + "page_idx": 15 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 488, + 946, + 509, + 959 + ], + "page_idx": 15 + }, + { + "type": "code", + "sub_type": "algorithm", + "code_caption": [ + "Algorithm 1: LoRA with Reduced Interference (LoRI)" + ], + "code_body": "Require: Task $t$ , mask calibration dataset $\\mathcal{D}_t^C$ , adaptation dataset $\\mathcal{D}_t$ , sparsity ratio $s$ , model $f$ loss function $\\mathcal{L}_t$ , learning rate $\\eta_t$ \n1: for each layer $l = 1,\\ldots ,L$ do \n2: for each projection $m = 1,\\dots ,M$ do \n3: Initialize: $A_{t}^{(l,m)}\\in \\mathbb{R}^{d_{\\mathrm{in}}\\times r}\\leftarrow \\mathcal{U}(-\\sqrt{\\frac{3}{d_{\\mathrm{in}}}},\\sqrt{\\frac{3}{d_{\\mathrm{in}}}}),B_{t}^{(l,m)}\\in \\mathbb{R}^{r\\times d_{\\mathrm{out}}}\\leftarrow 0$ \n4: end for \n5: end for \n6: for each batch $(x,y)$ sampled from $\\mathcal{D}_t^C$ do ▷ Calibration steps \n7: for each $(l,m)$ do \n8: $B_{t}^{(l,m)}\\gets B_{t}^{(l,m)} - \\eta_{t}\\cdot 
\\nabla_{B_{t}^{(l,m)}}\\mathcal{L}_{t}(f(x,y;B_{t}^{(l,m)}))$ \n9: end for \n10: end for \n11: $\\tau_t\\gets \\mathrm{Quantile}_s\\left(\\bigcup_{l,m}|B_t^{(l,m)}|\\right)$ ▷ Compute global threshold $\\tau_t$ \n12: for each $(l,m)$ do \n13: $M_t^{(l,m)}\\gets \\mathbb{I}\\left(|B_t^{(l,m)}|\\geq \\tau_t\\right)$ ▷ Generate mask for top- $(1 - s)\\%$ entries \n14: $B_{t}^{(l,m)}\\gets 0$ ▷ Reset to zero before adaptation \n15: end for \n16: for each batch $(x,y)$ sampled from $\\mathcal{D}_t$ do ▷ Adaptation steps \n17: for each $(l,m)$ do \n18: $B_{t}^{(l,m)}\\gets B_{t}^{(l,m)} - \\eta_{t}\\cdot \\left(\\nabla_{B_{t}^{(l,m)}}\\mathcal{L}_{t}(f(x,y;B_{t}^{(l,m)}))\\odot M_{t}^{(l,m)}\\right)$ \n19: end for \n20: end for", + "bbox": [ + 173, + 140, + 825, + 510 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "C Proof of Property 1", + "text_level": 1, + "bbox": [ + 171, + 536, + 377, + 555 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Proof. Our goal is to show that the Frobenius inner product $\\langle \\Delta_s, \\Delta_t \\rangle_F$ converges to zero in probability. Let $\\tilde{B}_s = B_s \\odot M_s$ and $\\tilde{B}_t = B_t \\odot M_t$ . The inner product is given by:", + "bbox": [ + 169, + 569, + 823, + 602 + ], + "page_idx": 16 + }, + { + "type": "equation", + "text": "\n$$\n\\left\\langle \\Delta_ {s}, \\Delta_ {t} \\right\\rangle_ {F} = \\operatorname {T r} \\left(\\Delta_ {s} ^ {\\top} \\Delta_ {t}\\right) = \\operatorname {T r} \\left(\\tilde {B} _ {s} ^ {\\top} A _ {s} ^ {\\top} A _ {t} \\tilde {B} _ {t}\\right). 
\\tag {9}\n$$\n", + "text_format": "latex", + "bbox": [ + 349, + 604, + 823, + 626 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "We will prove this by showing that the random matrix $X = A_{s}^{\\top}A_{t}$ converges to the zero matrix in probability as $d_{\\mathrm{in}} \\to \\infty$ .", + "bbox": [ + 169, + 631, + 823, + 662 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Let $a_{s}^{k}, a_{t}^{l} \\in \\mathbb{R}^{d_{\\mathrm{in}}}$ be the $k$ -th and $l$ -th columns of $A_{s}$ and $A_{t}$ , respectively. The entries of these vectors are i.i.d. from a Kaiming Uniform distribution $U[-a, a]$ where $a = \\sqrt{3 / d_{\\mathrm{in}}}$ . This implies a mean of 0 and variance of $\\sigma^2 = a^2 / 3 = 1 / d_{\\mathrm{in}}$ . An entry of $X$ is the inner product $X_{kl} = (a_{s}^{k})^{\\top} a_{t}^{l} = \\sum_{i=1}^{d_{\\mathrm{in}}} (A_{s})_{ik} (A_{t})_{il}$ .", + "bbox": [ + 169, + 667, + 825, + 737 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Let $Z_{i} = (A_{s})_{ik}(A_{t})_{il}$ . The terms $Z_{i}$ are i.i.d. with $\\mathbb{E}[Z_i] = \\mathbb{E}[(A_s)_{ik}]\\mathbb{E}[(A_t)_{il}] = 0$ . Each term is bounded: $|Z_{i}| \\leq a^{2} = 3 / d_{\\mathrm{in}}$ . We apply Hoeffding's inequality to the sum $\\sum_{i=1}^{d_{\\mathrm{in}}} Z_{i}$ , where each term lies in $[-3 / d_{\\mathrm{in}}, 3 / d_{\\mathrm{in}}]$ :", + "bbox": [ + 169, + 742, + 825, + 792 + ], + "page_idx": 16 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbb {P} \\left(\\left| X _ {k l} \\right| \\geq t\\right) = \\mathbb {P} \\left(\\left| \\sum_ {i = 1} ^ {d _ {\\mathrm {i n}}} Z _ {i} \\right| \\geq t\\right) \\leq 2 \\exp \\left(\\frac {- 2 t ^ {2}}{\\sum_ {i = 1} ^ {d _ {\\mathrm {i n}}} (6 / d _ {\\mathrm {i n}}) ^ {2}}\\right) = 2 \\exp \\left(\\frac {- t ^ {2} d _ {\\mathrm {i n}}}{1 8}\\right). 
\\tag {10}\n$$\n", + "text_format": "latex", + "bbox": [ + 199, + 797, + 825, + 840 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "We now bound the probability that any of the $r^2$ entries of $X$ exceeds a threshold $t$ using the union bound:", + "bbox": [ + 169, + 852, + 823, + 882 + ], + "page_idx": 16 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbb {P} \\left(\\max _ {k, l} | X _ {k l} | \\geq t\\right) = \\mathbb {P} \\left(\\bigcup_ {k, l = 1} ^ {r} \\{| X _ {k l} | \\geq t \\}\\right) \\leq \\sum_ {k, l = 1} ^ {r} \\mathbb {P} \\left(| X _ {k l} | \\geq t\\right) \\leq 2 r ^ {2} \\exp \\left(\\frac {- t ^ {2} d _ {\\mathrm {i n}}}{1 8}\\right). \\tag {11}\n$$\n", + "text_format": "latex", + "bbox": [ + 181, + 885, + 825, + 928 + ], + "page_idx": 16 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 488, + 946, + 509, + 959 + ], + "page_idx": 16 + }, + { + "type": "table", + "img_path": "images/64c8ddd644dd9eebd26a8802b40d9d415be03562dcaf162028b63887cd978290.jpg", + "table_caption": [ + "Table 4: Hyperparameter settings for LoRI on NLU datasets." + ], + "table_footnote": [], + "table_body": "
Method | LoRI-D | LoRI-S | LoRI-D | LoRI-S | LoRI-D | LoRI-S | LoRI-D | LoRI-S
Base Model | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Mistral | Mistral | Mistral | Mistral
Rank r | 32 | 32 | 64 | 64 | 32 | 32 | 64 | 64
α | 64 | 64 | 128 | 128 | 64 | 64 | 128 | 128
Sparsity Ratio | 0 | 0.9 | 0 | 0.9 | 0 | 0.9 | 0 | 0.9
Learning Rate | 5e-5 | 5e-4 | 5e-5 | 1e-4 | 1e-5 | 1e-4 | 1e-5 | 1e-4
Dropout | 0.05
Optimizer | AdamW
Batch size | 32
Warmup Steps | 0
Epochs | 1
Where | q, k, v, o, gate, up, down
", + "bbox": [ + 176, + 126, + 823, + 290 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "We can now show that $\\| X \\|_F$ is small with high probability. Let the failure probability be $\\delta$ . By setting the bound from the previous step to $\\delta$ , we can solve for $t$ :", + "bbox": [ + 169, + 318, + 823, + 349 + ], + "page_idx": 17 + }, + { + "type": "equation", + "text": "\n$$\n\\delta = 2 r ^ {2} \\exp \\left(\\frac {- t ^ {2} d _ {\\mathrm {i n}}}{1 8}\\right) \\Longrightarrow t = \\sqrt {\\frac {1 8 \\log \\left(2 r ^ {2} / \\delta\\right)}{d _ {\\mathrm {i n}}}}. \\tag {12}\n$$\n", + "text_format": "latex", + "bbox": [ + 316, + 362, + 825, + 402 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "With probability at least $1 - \\delta$ , we have $\\max_{k,l} |X_{kl}| \\leq t$ . This allows us to bound the Frobenius norm of $X$ :", + "bbox": [ + 169, + 414, + 826, + 444 + ], + "page_idx": 17 + }, + { + "type": "equation", + "text": "\n$$\n\\left\\| X \\right\\| _ {F} ^ {2} = \\sum_ {k, l = 1} ^ {r} \\left| X _ {k l} \\right| ^ {2} \\leq r ^ {2} \\left(\\max _ {k, l} \\left| X _ {k l} \\right|\\right) ^ {2} \\leq r ^ {2} t ^ {2}. \\tag {13}\n$$\n", + "text_format": "latex", + "bbox": [ + 339, + 455, + 825, + 491 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Thus, with probability at least $1 - \\delta$ :", + "bbox": [ + 171, + 503, + 437, + 520 + ], + "page_idx": 17 + }, + { + "type": "equation", + "text": "\n$$\n\\| X \\| _ {F} \\leq r \\cdot t = r \\sqrt {\\frac {1 8 \\log (2 r ^ {2} / \\delta)}{d _ {\\mathrm {i n}}}} = O \\left(r \\sqrt {\\frac {\\log r}{d _ {\\mathrm {i n}}}}\\right). \\tag {14}\n$$\n", + "text_format": "latex", + "bbox": [ + 313, + 532, + 825, + 575 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Since $r \\ll d_{\\mathrm{in}}$ , the term $\\| X \\|_F \\to 0$ as $d_{\\mathrm{in}} \\to \\infty$ . 
This shows that $X$ converges to the zero matrix in probability.", + "bbox": [ + 169, + 588, + 823, + 618 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Finally, we bound the magnitude of the original inner product using the Cauchy-Schwarz inequality for the Frobenius inner product and the sub-multiplicative property of the Frobenius norm:", + "bbox": [ + 169, + 623, + 825, + 665 + ], + "page_idx": 17 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} \\left| \\left\\langle \\Delta_ {s}, \\Delta_ {t} \\right\\rangle_ {F} \\right| = \\left| \\operatorname {T r} \\left(\\tilde {B} _ {s} ^ {\\top} X \\tilde {B} _ {t}\\right) \\right| = \\left| \\left\\langle \\tilde {B} _ {s}, X \\tilde {B} _ {t} \\right\\rangle_ {F} \\right| \\\\ \\leq \\left\\| \\tilde {B} _ {s} \\right\\| _ {F} \\| X \\tilde {B} _ {t} \\| _ {F} \\tag {15} \\\\ \\leq \\| \\tilde {B} _ {s} \\| _ {F} \\| X \\| _ {F} \\| \\tilde {B} _ {t} \\| _ {F}. \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 346, + 670, + 823, + 729 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "The norms $\\| \\tilde{B}_s\\| _F$ and $\\| \\tilde{B}_t\\| _F$ are finite, as determined by the trained adapters. Since we have shown that $\\| X\\| _F\\to 0$ in probability, the entire expression must also converge to 0 in probability.", + "bbox": [ + 169, + 741, + 826, + 785 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "D Hyperparameter Settings", + "text_level": 1, + "bbox": [ + 171, + 810, + 437, + 829 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "We summarize the hyperparameter settings used for LoRI in Tables 4, 5, 6, and 7. 
These include settings for different tasks (NLU, math, code, safety), adapter variants (LoRI-D, LoRI-S), base models (Llama-3-8B and Mistral-7B), and ranks (32 and 64).", + "bbox": [ + 169, + 845, + 823, + 888 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "For the merging experiments, the hyperparameter settings for merging four adapters are provided in Tables 8 and 9, while those for merging three adapters are provided in Table 10.", + "bbox": [ + 169, + 895, + 825, + 925 + ], + "page_idx": 17 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "18", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 17 + }, + { + "type": "table", + "img_path": "images/019a56ebb137460d7b3baa0a71dcef549140a94813ddb15f6ec50420b41375a0.jpg", + "table_caption": [ + "Table 5: Hyperparameter settings for LoRI on the math dataset GSM8K." + ], + "table_footnote": [], + "table_body": "
Method | LoRI-D | LoRI-S | LoRI-D | LoRI-S | LoRI-D | LoRI-S | LoRI-D | LoRI-S
Base Model | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Mistral | Mistral | Mistral | Mistral
Rank r | 32 | 32 | 64 | 64 | 32 | 32 | 64 | 64
α | 64 | 64 | 128 | 128 | 64 | 64 | 32 | 64
Sparsity Ratio | 0 | 0.9 | 0 | 0.9 | 0 | 0.9 | 0 | 0.9
Learning Rate | 5e-5 | 5e-4 | 5e-5 | 1e-3 | 5e-5 | 5e-4 | 1e-4 | 5e-4
Dropout | 0.05
Optimizer | AdamW
Batch size | 32
Warmup Steps | 0
Epochs | 3
Where | q, k, v, o, gate, up, down
", + "bbox": [ + 176, + 128, + 823, + 292 + ], + "page_idx": 18 + }, + { + "type": "table", + "img_path": "images/83373ae876f24c07be676bc904237ed38a4c1ac6f91be338af486c4a228dd6ab.jpg", + "table_caption": [ + "Table 6: Hyperparameter settings for LoRI on the code dataset CodeAlpaca." + ], + "table_footnote": [], + "table_body": "
Method | LoRI-D | LoRI-S | LoRI-D | LoRI-S | LoRI-D | LoRI-S | LoRI-D | LoRI-S
Base Model | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Mistral | Mistral | Mistral | Mistral
Rank r | 32 | 32 | 64 | 64 | 32 | 32 | 64 | 64
α | 64 | 64 | 128 | 128 | 64 | 64 | 128 | 128
Sparsity Ratio | 0 | 0.9 | 0 | 0.9 | 0 | 0.9 | 0 | 0.9
Learning Rate | 5e-5 | 5e-4 | 1e-5 | 1e-4 | 5e-5 | 5e-4 | 1e-5 | 1e-4
Dropout | 0.05
Optimizer | AdamW
Batch size | 32
Warmup Steps | 0
Epochs | 2
Where | q, k, v, o, gate, up, down
", + "bbox": [ + 176, + 332, + 823, + 496 + ], + "page_idx": 18 + }, + { + "type": "table", + "img_path": "images/f880602a047217b3862f3cabe79e6da7bcf3dc974df10a60d32fcc512581142f.jpg", + "table_caption": [ + "Table 7: Hyperparameter settings for LoRI on the safety dataset Saferpaca." + ], + "table_footnote": [], + "table_body": "
Method | LoRI-D | LoRI-S | LoRI-D | LoRI-S | LoRI-D | LoRI-S | LoRI-D | LoRI-S
Base Model | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Mistral | Mistral | Mistral | Mistral
Rank r | 32 | 32 | 64 | 64 | 32 | 32 | 64 | 64
α | 64 | 64 | 128 | 128 | 64 | 64 | 128 | 128
Sparsity Ratio | 0 | 0.9 | 0 | 0.9 | 0 | 0.9 | 0 | 0.9
Learning Rate | 5e-5 | 5e-4 | 1e-5 | 1e-4 | 5e-5 | 5e-4 | 1e-5 | 1e-4
Dropout | 0.05
Optimizer | AdamW
Batch size | 32
Warmup Steps | 0
Epochs | 1
Where | q, k, v, o, gate, up, down
", + "bbox": [ + 176, + 535, + 823, + 698 + ], + "page_idx": 18 + }, + { + "type": "table", + "img_path": "images/c7ed5a53c1b7e2f2aed88b13b5470bca2d55f38fd8dc214c3eb9192c77c5cf11.jpg", + "table_caption": [ + "Table 8: Hyperparameter settings for merging four adapters using Llama-3-8B." + ], + "table_footnote": [], + "table_body": "
Adaptation Merging | LoRA Concat | LoRA Linear | LoRA Magnitude | LoRA TIES | LoRA DARE | LoRI-D Concat | LoRI-D Linear | LoRI-S Concat | LoRI-S Linear
Base Model | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3
Weights | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.3 | 0.3
Density | - | - | 0.3 | 0.7 | 0.7 | - | - | - | -
", + "bbox": [ + 176, + 737, + 823, + 806 + ], + "page_idx": 18 + }, + { + "type": "table", + "img_path": "images/bcecb925b480f17b7a5d22c03c23ec8dfce0886aed9b4aa7b0d70110ed4695d0.jpg", + "table_caption": [ + "Table 9: Hyperparameter settings for merging four adapters using Mistral-7B." + ], + "table_footnote": [], + "table_body": "
Adaptation Merging | LoRA Concat | LoRA Linear | LoRA Magnitude | LoRA TIES | LoRA DARE | LoRI-D Concat | LoRI-D Linear | LoRI-S Concat | LoRI-S Linear
Base Model | Mistral | Mistral | Mistral | Mistral | Mistral | Mistral | Mistral | Mistral | Mistral
Weights | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.3 | 0.3
Density | - | - | 0.3 | 0.7 | 0.7 | - | - | - | -
", + "bbox": [ + 176, + 845, + 823, + 917 + ], + "page_idx": 18 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "19", + "bbox": [ + 488, + 946, + 509, + 959 + ], + "page_idx": 18 + }, + { + "type": "table", + "img_path": "images/8e018c516640803a315887d386a51f0ed1a9aa1e20c0fafea96beb17d736aeb0.jpg", + "table_caption": [ + "Table 10: Hyperparameter settings for merging three adapters using Llama-3-8B." + ], + "table_footnote": [], + "table_body": "
Adaptation Merging | LoRA Concat | LoRA Linear | LoRA Magnitude | LoRA TIES | LoRA DARE | LoRI-D Concat | LoRI-D Linear | LoRI-S Concat | LoRI-S Linear
Base Model | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3
Weights | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.4 | 0.4
Density | - | - | 0.3 | 0.7 | 0.7 | - | - | - | -
", + "bbox": [ + 176, + 126, + 823, + 195 + ], + "page_idx": 19 + }, + { + "type": "table", + "img_path": "images/9c9dd3534fb8ab88ff1d79ab0f5a7a4b19e18e497f8aaf38ff907498b88bc0be.jpg", + "table_caption": [ + "Table 11: Performance comparison of different adaptation methods on eight NLU benchmarks using Llama-3 with $r = 32$ . **Bold** indicates the best-performing method, and **underline** indicates the second-best." + ], + "table_footnote": [], + "table_body": "
Method | # Params (%) | BoolQ | PIQA | SIQA | ARC-c | ARC-e | OBQA | HellaS | WinoG | Avg.
FFT | 8.03G (100%) | 73.8 | 86.8 | 77.6 | 76.7 | 87.6 | 84.1 | 93.2 | 85.1 | 83.1
LoRA | 84M (1.03%) | 76.3 | 89.8 | 82.7 | 83.4 | 91.7 | 88.4 | 95.8 | 88.7 | 87.1
VeRA | 1.38M (0.02%) | 64.4 | 81.8 | 62.6 | 67.3 | 85.7 | 60.9 | 78.5 | 56.9 | 69.8
IA3 | 1.70M (0.02%) | 68.6 | 84.8 | 74.5 | 77.6 | 89.4 | 75.7 | 90.6 | 75.0 | 79.5
LoRA-FA | 44M (0.54%) | 74.0 | 89.6 | 83.3 | 83.8 | 93.4 | 88.6 | 96.1 | 87.4 | 87.0
AdaLoRA | 84M (1.03%) | 75.6 | 89.2 | 82.4 | 83.1 | 91.0 | 87.8 | 94.4 | 87.6 | 86.4
rsLoRA | 84M (1.03%) | 72.8 | 84.8 | 78.8 | 76.0 | 87.0 | 85.0 | 91.0 | 82.8 | 82.3
PiSSA | 84M (1.03%) | 68.1 | 84.4 | 78.2 | 75.1 | 85.1 | 82.8 | 89.3 | 82.8 | 80.7
LoRA+ | 84M (1.03%) | 67.0 | 80.3 | 78.5 | 70.1 | 82.3 | 81.5 | 88.9 | 79.7 | 78.5
DoRA | 85M (1.05%) | 75.9 | 89.8 | 82.7 | 83.5 | 93.2 | 87.9 | 95.3 | 88.2 | 87.1
LoRI-D | 44M (0.54%) | 76.4 | 89.0 | 82.7 | 84.2 | 93.6 | 88.5 | 95.9 | 87.9 | 87.3
LoRI-S | 4.4M (0.05%) | 75.2 | 89.2 | 82.8 | 83.8 | 92.6 | 88.4 | 95.2 | 87.5 | 86.8
", + "bbox": [ + 176, + 272, + 823, + 428 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "E Additional Experimental Results", + "text_level": 1, + "bbox": [ + 171, + 463, + 501, + 479 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "E.1 Comparison with Additional PEFT Methods", + "text_level": 1, + "bbox": [ + 171, + 502, + 542, + 517 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "To provide a comprehensive benchmark, we evaluate LoRI against several widely adopted parameter-efficient fine-tuning (PEFT) methods, including VeRA (Kopiczko et al., 2023), IA3 (Liu et al., 2022), LoRA-FA (Zhang et al., 2023b), AdaLoRA (Zhang et al., 2023d), rsLoRA (Kalajdzievski, 2023), PiSSA (Meng et al., 2024), LoRA+ (Hayou et al., 2024), and DoRA (Liu et al., 2024). The results, presented in Tables 11 and 12, demonstrate that our proposed methods are highly effective.", + "bbox": [ + 169, + 534, + 823, + 619 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "LoRI-D, which uses 44M trainable parameters (0.54% of the full model and half of LoRA's), consistently achieves state-of-the-art performance, particularly on NLU and code generation benchmarks. LoRI-S, despite its aggressive sparsity (0.05% of the full model and 5% of LoRA's), remains highly competitive and often surpasses other PEFT methods. While VeRA and IA3 are more parameter-efficient, their performance is substantially lower than LoRI-S. Despite this efficiency, LoRI-D and LoRI-S deliver comparable – and often superior – performance across NLU, math, code, and safety domains. 
These results underscore two key insights: (1) effective adaptation does not require updating the projection matrices $A$ , as demonstrated by LoRI-D; and (2) the matrices $B$ contain significant redundancy that can be effectively pruned, as shown by LoRI-S.", + "bbox": [ + 169, + 625, + 826, + 765 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "E.2 Results with Rank $r = 64$", + "text_level": 1, + "bbox": [ + 171, + 794, + 403, + 808 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "We evaluate several adaptation methods using a higher adapter rank of $r = 64$ across a diverse set of tasks. This allows for more expressive adapter representations while still maintaining efficiency compared to full fine-tuning. Table 13 presents performance on eight natural language understanding (NLU) benchmarks, while Table 14 includes results on GSM8K (math), HumanEval (code), and HEx-PHI (safety). Across Llama-3 and Mistral models, LoRI-D and LoRI-S consistently perform competitively, often outperforming larger adapter methods like LoRA and DoRA, while using fewer parameters.", + "bbox": [ + 169, + 825, + 823, + 925 + ], + "page_idx": 19 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 19 + }, + { + "type": "page_number", + "text": "20", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 19 + }, + { + "type": "table", + "img_path": "images/374ef3c8e56f616defa3b1ca41b03317a863716e9f592e6052743ef31155dda5.jpg", + "table_caption": [ + "Table 12: Performance comparison of different adaptation methods on GSM8K (math), HumanEval (code), and HEx-PHI (safety) benchmarks using Llama-3 with $r = 32$ . **Bold** indicates the best-performing method, and **underline** indicates the second-best." + ], + "table_footnote": [], + "table_body": "
Method | # Params (%) | GSM8K | HumanEval | HEx-PHI
Pass@1 | Pass@5 | Pass@10
FFT | 8.03G (100%) | 58.8 | 30.5 | 39.3 | 41.7 | 94.8
LoRA | 84M (1.03%) | 64.4 | 34.7 | 46.4 | 50.8 | 91.6
VeRA | 1.38M (0.02%) | 30.6 | 32.4 | 45.1 | 50.9 | 74.7
IA3 | 1.70M (0.02%) | 48.0 | 32.7 | 45.6 | 51.5 | 85.4
LoRA-FA | 44M (0.54%) | 64.8 | 42.9 | 57.5 | 64.2 | 94.1
AdaLoRA | 84M (1.03%) | 63.3 | 33.5 | 45.0 | 49.4 | 91.9
rsLoRA | 84M (1.03%) | 61.3 | 28.4 | 35.5 | 38.3 | 98.1
PiSSA | 84M (1.03%) | 61.3 | 32.0 | 40.3 | 43.3 | 97.8
LoRA+ | 84M (1.03%) | 61.7 | 33.0 | 42.7 | 46.0 | 98.8
DoRA | 85M (1.05%) | 65.4 | 33.1 | 44.0 | 48.6 | 93.6
LoRI-D | 44M (0.54%) | 63.2 | 43.2 | 57.6 | 63.2 | 92.8
LoRI-S | 4.4M (0.05%) | 62.7 | 41.3 | 54.4 | 59.6 | 93.8
", + "bbox": [ + 223, + 167, + 777, + 358 + ], + "page_idx": 20 + }, + { + "type": "table", + "img_path": "images/51115363e44e71788c130af3a59a40679a0f7fc02dfb30aac2bca32a1d13f5b2.jpg", + "table_caption": [ + "Table 13: Performance comparison of different adaptation methods on eight natural language understanding (NLU) benchmarks using Llama-3 and Mistral with $r = 64$ . **Bold** indicates the best-performing method, and **underline** indicates the second-best." + ], + "table_footnote": [], + "table_body": "
Method | # Params (%) | BoolQ | PIQA | SIQA | ARC-c | ARC-e | OBQA | HellaS | WinoG | Avg.
Llama-3-8B
FFT | 8.03G (100%) | 73.8 | 86.8 | 77.6 | 76.7 | 87.6 | 84.1 | 93.2 | 85.1 | 83.1
LoRA | 168M (2.05%) | 75.2 | 89.0 | 81.2 | 82.3 | 92.4 | 89.1 | 95.3 | 88.2 | 86.6
DoRA | 169M (2.06%) | 76.4 | 89.0 | 82.0 | 82.6 | 92.3 | 87.5 | 95.1 | 87.3 | 86.5
LoRI-D | 88M (1.07%) | 75.8 | 90.4 | 82.7 | 83.3 | 92.6 | 88.6 | 95.9 | 87.4 | 87.1
LoRI-S | 8.8M (0.11%) | 76.5 | 90.2 | 81.9 | 83.5 | 93.8 | 87.5 | 96.2 | 87.2 | 87.1
Mistral-7B
FFT | 7.24G (100%) | 74.1 | 84.6 | 78.0 | 79.3 | 90.5 | 88.4 | 94.4 | 83.5 | 84.1
LoRA | 168M (2.26%) | 77.4 | 90.2 | 83.5 | 84.0 | 93.0 | 89.3 | 95.6 | 89.4 | 87.8
DoRA | 169M (2.28%) | 76.0 | 90.6 | 83.5 | 83.3 | 92.8 | 89.6 | 95.7 | 87.6 | 87.4
LoRI-D | 88M (1.18%) | 75.9 | 90.7 | 83.7 | 82.0 | 92.1 | 90.0 | 96.4 | 87.8 | 87.3
LoRI-S | 8.8M (0.12%) | 74.2 | 90.7 | 83.5 | 83.0 | 92.6 | 89.5 | 95.8 | 89.5 | 87.3
", + "bbox": [ + 173, + 445, + 823, + 612 + ], + "page_idx": 20 + }, + { + "type": "table", + "img_path": "images/04ba35cb4761e76d3a6c939ca6c0974b80c68b7418d58fb1f2388d3f92ce31bd.jpg", + "table_caption": [ + "Table 14: Performance comparison of different adaptation methods on GSM8K (math), HumanEval (code), and HEx-PHI (safety) benchmarks using Llama-3 and Mistral with $r = 64$ . **Bold** indicates the best-performing method, and **underline** indicates the second-best." + ], + "table_footnote": [], + "table_body": "
Method | # Params (%) | GSM8K | HumanEval | HEx-PHI
Pass@1 | Pass@5 | Pass@10
Llama-3-8B
FFT | 8.03G (100%) | 58.8 | 30.5 | 39.3 | 41.7 | 94.8
LoRA | 168M (2.05%) | 63.9 | 38.6 | 52.9 | 59.2 | 94.1
DoRA | 169M (2.06%) | 63.8 | 39.4 | 53.6 | 59.7 | 93.4
LoRI-D | 88M (1.07%) | 63.8 | 41.9 | 55.4 | 60.3 | 96.6
LoRI-S | 8.8M (0.11%) | 61.8 | 44.1 | 57.4 | 62.4 | 96.3
Mistral-7B
FFT | 7.24G (100%) | 55.5 | 30.5 | 39.3 | 41.7 | 94.1
LoRA | 168M (2.26%) | 56.7 | 33.9 | 43.1 | 46.9 | 95.9
DoRA | 169M (2.28%) | 57.8 | 32.9 | 43.3 | 47.2 | 96.6
LoRI-D | 88M (1.18%) | 58.2 | 33.3 | 43.6 | 47.3 | 90.9
LoRI-S | 8.8M (0.12%) | 58.4 | 32.1 | 42.2 | 46.3 | 93.4
", + "bbox": [ + 233, + 712, + 769, + 909 + ], + "page_idx": 20 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 20 + }, + { + "type": "page_number", + "text": "21", + "bbox": [ + 488, + 946, + 506, + 959 + ], + "page_idx": 20 + }, + { + "type": "table", + "img_path": "images/5b275f33c278c822894b05d2926f30adb0610e3978ae0b990b1e4ca4dbdb6824.jpg", + "table_caption": [ + "Table 15: Comparison of merging methods for combining four adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Mistral-7B, rank $r = 32$ . Bold indicates the best-performing method, and underline indicates the second-best." + ], + "table_footnote": [], + "table_body": "
Merging | Adaptation | NLU | GSM8K | HumanEval | HEx-PHI
Pass@1 | Pass@5 | Pass@10
Single-Task | LoRI-D | 87.1 | 58.0 | 33.8 | 42.0 | 45.1 | 94.7
Concat | LoRA | 82.5 | 52.4 | 32.3 | 40.8 | 44.1 | 75.6
Linear | LoRA | 81.4 | 48.0 | 33.1 | 41.6 | 43.9 | 76.6
Magnitude | LoRA | 77.5 | 42.7 | 32.7 | 41.8 | 45.6 | 80.9
TIES | LoRA | 31.3 | 23.5 | 32.0 | 40.2 | 43.5 | 81.9
DARE | LoRA | 76.1 | 43.0 | 32.0 | 41.0 | 44.6 | 83.4
Concat | LoRI-D | 79.3 | 52.4 | 34.4 | 42.8 | 45.5 | 83.8
Linear | LoRI-D | 78.1 | 50.5 | 35.2 | 42.7 | 45.5 | 79.7
Concat | LoRI-S | 79.2 | 46.1 | 33.3 | 41.6 | 45.9 | 79.4
Linear | LoRI-S | 75.5 | 40.3 | 28.8 | 36.0 | 39.6 | 83.1
", + "bbox": [ + 202, + 181, + 797, + 353 + ], + "page_idx": 21 + }, + { + "type": "table", + "img_path": "images/5a39e390791af67f5fee2c41cc9b9bf7cf985c272f014076e5e6ca15c1a4a159.jpg", + "table_caption": [ + "Table 16: Comparison of merging methods for combining four adapters on eight NLU benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Base model: Llama-3-8B, rank $r = 32$ . **Bold** indicates the best-performing method, and **underline** indicates the second-best." + ], + "table_footnote": [], + "table_body": "
<tr><td>Merging</td><td>Adaptation</td><td>BoolQ</td><td>PIQA</td><td>SIQA</td><td>ARC-c</td><td>ARC-e</td><td>OBQA</td><td>HellaS</td><td>WinoG</td><td>Avg.</td></tr>
<tr><td>Single-Task</td><td>LoRI-D</td><td>76.4</td><td>89.0</td><td>82.7</td><td>84.2</td><td>93.6</td><td>88.5</td><td>95.9</td><td>87.9</td><td>87.3</td></tr>
<tr><td>Concat</td><td>LoRA</td><td>73.9</td><td>89.1</td><td>81.1</td><td>81.4</td><td>92.4</td><td>83.0</td><td>94.4</td><td>84.5</td><td>85.0</td></tr>
<tr><td>Linear</td><td>LoRA</td><td>73.7</td><td>88.8</td><td>81.1</td><td>80.7</td><td>91.6</td><td>84.4</td><td>93.9</td><td>84.1</td><td>84.8</td></tr>
<tr><td>Magnitude</td><td>LoRA</td><td>72.0</td><td>87.1</td><td>76.8</td><td>79.4</td><td>91.7</td><td>81.5</td><td>90.4</td><td>76.4</td><td>81.9</td></tr>
<tr><td>TIES</td><td>LoRA</td><td>68.2</td><td>83.8</td><td>67.3</td><td>69.5</td><td>87.8</td><td>69.2</td><td>73.3</td><td>61.4</td><td>72.6</td></tr>
<tr><td>DARE</td><td>LoRA</td><td>70.7</td><td>85.0</td><td>74.1</td><td>77.5</td><td>90.7</td><td>76.6</td><td>86.8</td><td>71.0</td><td>79.1</td></tr>
<tr><td>Concat</td><td>LoRI-D</td><td>74.0</td><td>87.7</td><td>77.8</td><td>81.0</td><td>92.4</td><td>81.0</td><td>92.7</td><td>78.9</td><td>83.2</td></tr>
<tr><td>Linear</td><td>LoRI-D</td><td>73.7</td><td>87.7</td><td>76.7</td><td>80.3</td><td>92.1</td><td>80.1</td><td>92.0</td><td>77.7</td><td>82.5</td></tr>
<tr><td>Concat</td><td>LoRI-S</td><td>71.8</td><td>86.2</td><td>76.1</td><td>79.2</td><td>91.5</td><td>78.6</td><td>89.8</td><td>76.3</td><td>81.2</td></tr>
<tr><td>Linear</td><td>LoRI-S</td><td>70.7</td><td>85.3</td><td>75.1</td><td>78.0</td><td>90.8</td><td>75.0</td><td>86.5</td><td>71.3</td><td>79.1</td></tr>
", + "bbox": [ + 176, + 433, + 823, + 575 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "E.3 Merging Four Adapters", + "text_level": 1, + "bbox": [ + 171, + 599, + 390, + 616 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "To support multi-task learning within a unified model, we study the merging of four task-specific adapters using various strategies. Table 15 reports results using Mistral-7B across a range of tasks. Additionally, Tables 16 and 17 break down the performance of NLU on individual benchmarks using Llama-3 and Mistral, respectively. We compare merging methods such as concatenated merging, linear merging, magnitude pruning, TIES, and DARE. LoRI-based approaches demonstrate strong performance and stability when merging multiple adapters.", + "bbox": [ + 169, + 625, + 823, + 724 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "E.4 Merging Three Adapters", + "text_level": 1, + "bbox": [ + 171, + 741, + 398, + 758 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "We further evaluate the merging of three adapters to understand performance when adapting to a smaller set of tasks. Tables 18 and 19 summarize the results for Llama-3 across different benchmarks. Similar to the four-task setting, LoRI-D remains a strong performer, often exceeding the performance of LoRA. These results highlight that LoRI-based methods are effective with varying levels of task diversity.", + "bbox": [ + 169, + 767, + 823, + 839 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "E.5 Pruning-Based Merging Methods", + "text_level": 1, + "bbox": [ + 171, + 854, + 465, + 871 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Finally, we explore pruning-based merging methods, which aim to compress and combine multiple adapters by selectively retaining important weights. We focus on three methods: magnitude pruning, TIES, and DARE. 
Results are reported for merging both four-adapter", + "bbox": [ + 169, + 881, + 823, + 926 + ], + "page_idx": 21 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 21 + }, + { + "type": "page_number", + "text": "22", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 21 + }, + { + "type": "table", + "img_path": "images/2782a629cbad07f42a8fe9ab46b398148c7fe5252ab8de9463f9161a6a55fdc6.jpg", + "table_caption": [ + "Table 17: Comparison of merging methods for combining four adapters on eight NLU benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Base model: Mistral-7B, rank $r = 32$ . Bold indicates the best-performing method, and underline indicates the second-best." + ], + "table_footnote": [], + "table_body": "
<tr><td>Merging</td><td>Adaptation</td><td>BoolQ</td><td>PIQA</td><td>SIQA</td><td>ARC-c</td><td>ARC-e</td><td>OBQA</td><td>HellaS</td><td>WinoG</td><td>Avg.</td></tr>
<tr><td>Single-Task</td><td>LoRI-D</td><td>75.9</td><td>90.6</td><td>83.0</td><td>83.6</td><td>91.9</td><td>88.4</td><td>95.9</td><td>87.4</td><td>87.1</td></tr>
<tr><td>Concat</td><td>LoRA</td><td>69.0</td><td>88.0</td><td>78.1</td><td>79.9</td><td>90.9</td><td>84.2</td><td>92.4</td><td>77.8</td><td>82.5</td></tr>
<tr><td>Linear</td><td>LoRA</td><td>69.2</td><td>86.9</td><td>77.9</td><td>78.5</td><td>90.2</td><td>82.1</td><td>91.5</td><td>75.1</td><td>81.4</td></tr>
<tr><td>Magnitude</td><td>LoRA</td><td>68.7</td><td>84.9</td><td>74.4</td><td>75.9</td><td>89.1</td><td>77.5</td><td>85.6</td><td>64.1</td><td>77.5</td></tr>
<tr><td>TIES</td><td>LoRA</td><td>18.4</td><td>69.8</td><td>40.7</td><td>14.0</td><td>21.9</td><td>20.1</td><td>14.6</td><td>50.9</td><td>31.3</td></tr>
<tr><td>DARE</td><td>LoRA</td><td>69.4</td><td>84.3</td><td>73.1</td><td>74.2</td><td>88.9</td><td>74.3</td><td>82.6</td><td>61.8</td><td>76.1</td></tr>
<tr><td>Concat</td><td>LoRI-D</td><td>68.4</td><td>85.9</td><td>75.6</td><td>76.6</td><td>89.4</td><td>81.3</td><td>85.9</td><td>71.1</td><td>79.3</td></tr>
<tr><td>Linear</td><td>LoRI-D</td><td>66.3</td><td>86.0</td><td>74.9</td><td>75.3</td><td>88.9</td><td>80.8</td><td>85.0</td><td>68.0</td><td>78.1</td></tr>
<tr><td>Concat</td><td>LoRI-S</td><td>72.6</td><td>85.4</td><td>74.6</td><td>76.5</td><td>89.7</td><td>80.1</td><td>86.0</td><td>68.9</td><td>79.2</td></tr>
<tr><td>Linear</td><td>LoRI-S</td><td>67.6</td><td>83.8</td><td>72.0</td><td>73.0</td><td>88.3</td><td>74.6</td><td>80.9</td><td>64.3</td><td>75.5</td></tr>
", + "bbox": [ + 176, + 191, + 823, + 333 + ], + "page_idx": 22 + }, + { + "type": "table", + "img_path": "images/df2db27ced015225db70179c581e419f0d47043e07a3ed6e710165c4c3fddaa2.jpg", + "table_caption": [ + "Table 18: Comparison of merging methods for combining three adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Llama-3-8B, rank $r = 32$ . Bold indicates the best-performing method, and underline indicates the second-best." + ], + "table_footnote": [], + "table_body": "
<tr><td>Merging</td><td>Adaptation</td><td>NLU</td><td>GSM8K</td><td colspan="3">HumanEval</td></tr>
<tr><td></td><td></td><td></td><td></td><td>Pass@1</td><td>Pass@5</td><td>Pass@10</td></tr>
<tr><td>Single-Task</td><td>LoRI-D</td><td>87.3</td><td>63.2</td><td>43.2</td><td>57.6</td><td>63.2</td></tr>
<tr><td>Concat</td><td>LoRA</td><td>86.4</td><td>54.5</td><td>13.0</td><td>19.8</td><td>21.8</td></tr>
<tr><td>Linear</td><td>LoRA</td><td>86.1</td><td>51.9</td><td>8.8</td><td>14.5</td><td>16.7</td></tr>
<tr><td>Magnitude</td><td>LoRA</td><td>83.8</td><td>52.0</td><td>23.3</td><td>37.4</td><td>43.0</td></tr>
<tr><td>TIES</td><td>LoRA</td><td>79.4</td><td>26.9</td><td>36.3</td><td>48.7</td><td>53.7</td></tr>
<tr><td>DARE</td><td>LoRA</td><td>81.1</td><td>53.3</td><td>36.0</td><td>49.5</td><td>53.9</td></tr>
<tr><td>Concat</td><td>LoRI-D</td><td>84.8</td><td>59.6</td><td>41.5</td><td>56.4</td><td>61.6</td></tr>
<tr><td>Linear</td><td>LoRI-D</td><td>84.6</td><td>57.6</td><td>38.3</td><td>51.6</td><td>56.8</td></tr>
<tr><td>Concat</td><td>LoRI-S</td><td>83.3</td><td>51.8</td><td>31.2</td><td>44.6</td><td>49.8</td></tr>
<tr><td>Linear</td><td>LoRI-S</td><td>81.0</td><td>41.7</td><td>26.6</td><td>40.0</td><td>44.6</td></tr>
", + "bbox": [ + 241, + 465, + 759, + 638 + ], + "page_idx": 22 + }, + { + "type": "table", + "img_path": "images/5d742e4240f4550b50f8a045f6eade200f56a5c2028c9f9964e737210a4a0f04.jpg", + "table_caption": [ + "Table 19: Comparison of merging methods for combining three adapters on eight NLU benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Base model: Llama-3-8B, rank $r = 32$ . **Bold** indicates the best-performing method, and **underline** indicates the second-best." + ], + "table_footnote": [], + "table_body": "
<tr><td>Merging</td><td>Adaptation</td><td>BoolQ</td><td>PIQA</td><td>SIQA</td><td>ARC-c</td><td>ARC-e</td><td>OBQA</td><td>HellaS</td><td>WinoG</td><td>Avg.</td></tr>
<tr><td>Single-Task</td><td>LoRI-D</td><td>76.4</td><td>89.0</td><td>82.7</td><td>84.2</td><td>93.6</td><td>88.5</td><td>95.9</td><td>87.9</td><td>87.3</td></tr>
<tr><td>Concat</td><td>LoRA</td><td>74.7</td><td>89.6</td><td>81.8</td><td>82.9</td><td>93.7</td><td>86.2</td><td>95.8</td><td>86.8</td><td>86.4</td></tr>
<tr><td>Linear</td><td>LoRA</td><td>73.9</td><td>89.6</td><td>81.4</td><td>81.9</td><td>93.5</td><td>85.5</td><td>95.6</td><td>87.1</td><td>86.1</td></tr>
<tr><td>Magnitude</td><td>LoRA</td><td>72.2</td><td>87.2</td><td>78.9</td><td>81.2</td><td>92.2</td><td>83.2</td><td>93.0</td><td>82.4</td><td>83.8</td></tr>
<tr><td>TIES</td><td>LoRA</td><td>69.5</td><td>84.8</td><td>74.0</td><td>78.4</td><td>91.2</td><td>77.4</td><td>88.8</td><td>71.4</td><td>79.4</td></tr>
<tr><td>DARE</td><td>LoRA</td><td>71.0</td><td>85.6</td><td>75.8</td><td>79.5</td><td>91.0</td><td>78.8</td><td>90.7</td><td>76.2</td><td>81.1</td></tr>
<tr><td>Concat</td><td>LoRI-D</td><td>73.8</td><td>89.0</td><td>79.8</td><td>81.0</td><td>93.0</td><td>83.0</td><td>94.6</td><td>84.0</td><td>84.8</td></tr>
<tr><td>Linear</td><td>LoRI-D</td><td>74.1</td><td>88.4</td><td>80.2</td><td>81.3</td><td>92.9</td><td>82.1</td><td>94.1</td><td>83.6</td><td>84.6</td></tr>
<tr><td>Concat</td><td>LoRI-S</td><td>70.3</td><td>87.2</td><td>79.1</td><td>80.8</td><td>92.4</td><td>82.1</td><td>93.2</td><td>81.3</td><td>83.3</td></tr>
<tr><td>Linear</td><td>LoRI-S</td><td>61.5</td><td>86.4</td><td>78.0</td><td>79.5</td><td>91.7</td><td>80.8</td><td>91.3</td><td>78.5</td><td>81.0</td></tr>
", + "bbox": [ + 176, + 757, + 823, + 898 + ], + "page_idx": 22 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 22 + }, + { + "type": "page_number", + "text": "23", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 22 + }, + { + "type": "table", + "img_path": "images/3e3cf304781f00eeed6139ed70a45546dc84631bdaf0303343be9afe0bde0460.jpg", + "table_caption": [ + "Table 20: Comparison of magnitude pruning, TIES, and DARE for combining four adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Llama-3-8B, rank $r = 32$ . Bold indicates the best-performing method within each group." + ], + "table_footnote": [], + "table_body": "
<tr><td>Merging</td><td>Adaptation</td><td>NLU</td><td>GSM8K</td><td colspan="3">HumanEval</td><td>HEx-PHI</td></tr>
<tr><td></td><td></td><td></td><td></td><td>Pass@1</td><td>Pass@5</td><td>Pass@10</td><td></td></tr>
<tr><td>Single-Task</td><td>LoRI-D</td><td>87.3</td><td>63.2</td><td>43.2</td><td>57.6</td><td>63.2</td><td>92.8</td></tr>
<tr><td>Magnitude</td><td>LoRA</td><td>81.9</td><td>50.3</td><td>24.1</td><td>36.7</td><td>42.4</td><td>74.4</td></tr>
<tr><td>Magnitude</td><td>LoRI-D</td><td>84.3</td><td>50.5</td><td>33.3</td><td>45.2</td><td>51.4</td><td>85.9</td></tr>
<tr><td>Magnitude</td><td>LoRI-S</td><td>76.4</td><td>35.2</td><td>25.2</td><td>36.5</td><td>41.0</td><td>68.4</td></tr>
<tr><td>TIES</td><td>LoRA</td><td>72.6</td><td>24.0</td><td>32.5</td><td>46.3</td><td>51.7</td><td>77.8</td></tr>
<tr><td>TIES</td><td>LoRI-D</td><td>79.1</td><td>38.0</td><td>40.3</td><td>54.6</td><td>59.8</td><td>85.3</td></tr>
<tr><td>TIES</td><td>LoRI-S</td><td>70.4</td><td>25.9</td><td>34.6</td><td>48.4</td><td>53.2</td><td>77.8</td></tr>
<tr><td>DARE</td><td>LoRA</td><td>79.1</td><td>48.9</td><td>34.1</td><td>48.7</td><td>53.5</td><td>74.1</td></tr>
<tr><td>DARE</td><td>LoRI-D</td><td>83.4</td><td>52.0</td><td>35.4</td><td>51.3</td><td>57.8</td><td>81.9</td></tr>
<tr><td>DARE</td><td>LoRI-S</td><td>73.4</td><td>27.2</td><td>34.8</td><td>48.1</td><td>53.5</td><td>75.3</td></tr>
", + "bbox": [ + 202, + 181, + 799, + 364 + ], + "page_idx": 23 + }, + { + "type": "table", + "img_path": "images/d070a3c1b9f7ec4b03797f93dae14ef188a5af61d1ac4e5037057f00332a5fe2.jpg", + "table_caption": [ + "Table 21: Comparison of magnitude pruning, TIES, and DARE for combining four adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Mistral-7B, rank $r = 32$ . Bold indicates the best-performing method within each group." + ], + "table_footnote": [], + "table_body": "
<tr><td>Merging</td><td>Adaptation</td><td>NLU</td><td>GSM8K</td><td colspan="3">HumanEval</td><td>HEx-PHI</td></tr>
<tr><td></td><td></td><td></td><td></td><td>Pass@1</td><td>Pass@5</td><td>Pass@10</td><td></td></tr>
<tr><td>Single-Task</td><td>LoRI-D</td><td>87.1</td><td>58.0</td><td>33.8</td><td>42.0</td><td>45.1</td><td>94.7</td></tr>
<tr><td>Magnitude</td><td>LoRA</td><td>77.5</td><td>42.7</td><td>32.7</td><td>41.8</td><td>45.6</td><td>80.9</td></tr>
<tr><td>Magnitude</td><td>LoRI-D</td><td>76.0</td><td>41.5</td><td>29.0</td><td>36.0</td><td>38.7</td><td>79.4</td></tr>
<tr><td>Magnitude</td><td>LoRI-S</td><td>70.5</td><td>32.4</td><td>28.1</td><td>36.1</td><td>39.3</td><td>77.5</td></tr>
<tr><td>TIES</td><td>LoRA</td><td>31.3</td><td>23.5</td><td>32.0</td><td>40.2</td><td>43.5</td><td>81.9</td></tr>
<tr><td>TIES</td><td>LoRI-D</td><td>65.0</td><td>45.4</td><td>35.3</td><td>44.5</td><td>47.8</td><td>68.4</td></tr>
<tr><td>TIES</td><td>LoRI-S</td><td>67.8</td><td>32.9</td><td>28.6</td><td>37.2</td><td>40.8</td><td>78.4</td></tr>
<tr><td>DARE</td><td>LoRA</td><td>76.1</td><td>43.0</td><td>32.0</td><td>41.0</td><td>44.6</td><td>83.4</td></tr>
<tr><td>DARE</td><td>LoRI-D</td><td>76.2</td><td>42.3</td><td>29.2</td><td>37.1</td><td>40.7</td><td>89.1</td></tr>
<tr><td>DARE</td><td>LoRI-S</td><td>71.9</td><td>34.3</td><td>29.2</td><td>40.5</td><td>44.9</td><td>85.0</td></tr>
", + "bbox": [ + 202, + 459, + 799, + 641 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "(Tables 20 and 21) and three-adapter (Table 22) settings, using Llama-3 and Mistral as base models. LoRI-D consistently achieves strong performance across all pruning-based merging methods. However, the performance of LoRI-S is somewhat lower in these settings. This is because pruning-based methods operate on the dense $A$ matrices but not on the sparse $B$ matrices. This mismatch leads to an inconsistent pruning scheme, which can result in a loss of effectiveness.", + "bbox": [ + 169, + 665, + 826, + 750 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "F Additional Ablation Studies", + "text_level": 1, + "bbox": [ + 171, + 770, + 464, + 787 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Figure 5 presents GSM8K accuracy across a grid of sparsity ratios and learning rates using Mistral-7B with rank $r = 64$ . We observe that sparse adapters require larger learning rates to train effectively. In particular, models with high sparsity (e.g., above $70\\%$ ) perform best with a learning rate of $10^{-4}$ or higher. This suggests that stronger optimization is necessary to compensate for limited capacity in sparse adapters.", + "bbox": [ + 169, + 801, + 826, + 876 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "In Figure 6, we analyze how sparsity is distributed across layers and projections when enforcing $90\\%$ global sparsity on GSM8K. 
We find that feedforward (FFN) projections tend to retain more parameters – i.e., they exhibit lower sparsity – than self-attention projections.", + "bbox": [ + 169, + 881, + 825, + 926 + ], + "page_idx": 23 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 23 + }, + { + "type": "page_number", + "text": "24", + "bbox": [ + 488, + 946, + 509, + 960 + ], + "page_idx": 23 + }, + { + "type": "table", + "img_path": "images/76404041c0f0201eb41da8a2571b926d7fc6c696f21dd849b8ed8f5ef3dab48a.jpg", + "table_caption": [ + "Table 22: Comparison of magnitude pruning, TIES, and DARE for combining three adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Llama-3-8B, rank $r = 32$ . Bold indicates the best-performing method within each group." + ], + "table_footnote": [], + "table_body": "
<tr><td>Merging</td><td>Adaptation</td><td>NLU</td><td>GSM8K</td><td colspan="3">HumanEval</td></tr>
<tr><td></td><td></td><td></td><td></td><td>Pass@1</td><td>Pass@5</td><td>Pass@10</td></tr>
<tr><td>Single-Task</td><td>LoRI-D</td><td>87.3</td><td>63.2</td><td>43.2</td><td>57.6</td><td>63.2</td></tr>
<tr><td>Magnitude</td><td>LoRA</td><td>83.8</td><td>52.0</td><td>23.3</td><td>37.4</td><td>43.0</td></tr>
<tr><td>Magnitude</td><td>LoRI-D</td><td>84.6</td><td>53.7</td><td>34.8</td><td>48.9</td><td>54.7</td></tr>
<tr><td>Magnitude</td><td>LoRI-S</td><td>77.8</td><td>36.6</td><td>25.5</td><td>38.8</td><td>43.8</td></tr>
<tr><td>TIES</td><td>LoRA</td><td>79.4</td><td>26.9</td><td>36.3</td><td>48.7</td><td>53.7</td></tr>
<tr><td>TIES</td><td>LoRI-D</td><td>82.1</td><td>42.2</td><td>39.2</td><td>52.7</td><td>57.7</td></tr>
<tr><td>TIES</td><td>LoRI-S</td><td>73.8</td><td>35.2</td><td>34.8</td><td>47.9</td><td>52.5</td></tr>
<tr><td>DARE</td><td>LoRA</td><td>81.1</td><td>53.3</td><td>36.0</td><td>49.5</td><td>53.9</td></tr>
<tr><td>DARE</td><td>LoRI-D</td><td>84.0</td><td>55.2</td><td>33.8</td><td>45.8</td><td>51.8</td></tr>
<tr><td>DARE</td><td>LoRI-S</td><td>75.3</td><td>36.6</td><td>36.2</td><td>48.9</td><td>53.4</td></tr>
", + "bbox": [ + 241, + 181, + 759, + 364 + ], + "page_idx": 24 + }, + { + "type": "image", + "img_path": "images/decce04358f9b5a391f9c16d358807e18ddb72362e0e9eeae42a1176ee7a28b3.jpg", + "image_caption": [ + "Figure 5: GSM8K accuracy under different sparsity ratios and learning rates. Base model: Mistral-7B, rank $r = 64$ ." + ], + "image_footnote": [], + "bbox": [ + 259, + 381, + 718, + 599 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "This indicates that FFN components are more critical for effective adaptation. Additionally, sparsity decreases toward the top of the network, suggesting that higher layers are more important for task-specific specialization.", + "bbox": [ + 169, + 669, + 823, + 714 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "Lastly, Figure 7 explores the effect of merging weights when combining three LoRI-S adapters using concatenated and linear merging. We find a noticeable trade-off between performance on code tasks and other domains (e.g., NLU and math). Higher merging weights can improve NLU performance but tend to degrade performance on code, highlighting the challenge of balancing generalization and specialization in multi-task settings.", + "bbox": [ + 169, + 718, + 826, + 791 + ], + "page_idx": 24 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 24 + }, + { + "type": "page_number", + "text": "25", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 24 + }, + { + "type": "image", + "img_path": "images/9ff585ba374aad4863f066455f922d0c50d62e831177f2209d9ca1607ac1bf5f.jpg", + "image_caption": [ + "Figure 6: Sparsity ratios across layers and projections under a $90\\%$ sparsity on GSM8K. Base model: Llama-3-8B, rank $r = 32$ ." 
+ ], + "image_footnote": [], + "bbox": [ + 338, + 181, + 661, + 429 + ], + "page_idx": 25 + }, + { + "type": "image", + "img_path": "images/20bdc69ee88bcf55951fd1160fe65f91d273cf3619437a7678ec93a6c498a9d1.jpg", + "image_caption": [ + "(a) Concatnated merging with LoRI-S." + ], + "image_footnote": [], + "bbox": [ + 187, + 638, + 493, + 779 + ], + "page_idx": 25 + }, + { + "type": "image", + "img_path": "images/22dfc59e1466fbdfc57293573fbf453e7df828c4bf384de19cc45ad356595b14.jpg", + "image_caption": [ + "(b) Linear merging with LoRI-S.", + "Figure 7: Ablation study on the effect of merging weights when combining three adapters. Base model: Llama-3-8B, rank $r = 32$ ." + ], + "image_footnote": [], + "bbox": [ + 503, + 638, + 810, + 779 + ], + "page_idx": 25 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 25 + }, + { + "type": "page_number", + "text": "26", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 25 + } +] \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07448/45bd5bd8-55af-45e5-b183-b2d70c8be5c1_model.json b/data/2025/2504_07xxx/2504.07448/45bd5bd8-55af-45e5-b183-b2d70c8be5c1_model.json new file mode 100644 index 0000000000000000000000000000000000000000..dc2f7774cd9ac9ebbe5a39165c40d8852cabe781 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/45bd5bd8-55af-45e5-b183-b2d70c8be5c1_model.json @@ -0,0 +1,3728 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.032, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.099, + 0.825, + 0.141 + ], + "angle": 0, + "content": "LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation" + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.166, + 0.751, + 0.183 + ], + "angle": 0, + "content": "Juzheng Zhang\\(^{1}\\), Jiacheng You\\(^{2}\\), Ashwinee Panda\\(^{1}\\), Tom 
Goldstein\\(^{1}\\)" + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.183, + 0.543, + 0.199 + ], + "angle": 0, + "content": "\\(^{1}\\)University of Maryland \\(^{2}\\)Tsinghua University" + }, + { + "type": "title", + "bbox": [ + 0.459, + 0.234, + 0.54, + 0.251 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.23, + 0.266, + 0.768, + 0.502 + ], + "angle": 0, + "content": "Low-Rank Adaptation (LoRA) has emerged as a popular parameter-efficient fine-tuning (PEFT) method for Large Language Models (LLMs), yet it still incurs notable overhead and suffers from parameter interference in multi-task scenarios. We propose LoRA with Reduced Interference (LoRI), a simple yet effective approach that freezes the projection matrices \\(A\\) as random projections and sparsifies the matrices \\(B\\) using task-specific masks. This design substantially reduces the number of trainable parameters while maintaining strong task performance. Moreover, LoRI minimizes cross-task interference in adapter merging by leveraging the orthogonality between adapter subspaces, and supports continual learning by using sparsity to mitigate catastrophic forgetting. Extensive experiments across natural language understanding, mathematical reasoning, code generation, and safety alignment tasks demonstrate that LoRI outperforms full fine-tuning and existing PEFT methods, while using up to \\(95\\%\\) fewer trainable parameters than LoRA. In multi-task experiments, LoRI enables effective adapter merging and continual learning with reduced cross-task interference. Code is available at: https://github.com/juzhengz/LoRI." 
+ }, + { + "type": "title", + "bbox": [ + 0.172, + 0.529, + 0.32, + 0.545 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.56, + 0.827, + 0.716 + ], + "angle": 0, + "content": "Large language models (LLMs) (Brown et al., 2020; Touvron et al., 2023; Chowdhery et al., 2023) have transformed deep learning, showcasing remarkable capabilities across various domains. However, their deployment remains computationally demanding, particularly when fine-tuning is required to adapt to downstream tasks or align with human preferences. To mitigate the high resource costs, researchers have developed a range of parameter-efficient fine-tuning (PEFT) techniques. Among these techniques, LoRA (Hu et al., 2021) has gained widespread adoption due to its compelling balance of performance and efficiency. Nevertheless, LoRA still introduces notable memory overhead, particularly in large-scale models. Consequently, recent research has focused on further optimizing LoRA by reducing the number of trainable parameters without compromising performance (Kopiczko et al., 2023; Ding et al., 2023; Zhang et al., 2023b)." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.72, + 0.827, + 0.904 + ], + "angle": 0, + "content": "Recent studies (Yu et al., 2024; Panda et al., 2024) have shown that delta parameters – the differences between fine-tuned and pretrained model weights – exhibit significant redundancy. Furthermore, previous works (Zhang et al., 2023b; Zhu et al., 2024) have observed that freezing matrices \\( A \\) in LoRA often achieves comparable performance to training them. Motivated by these findings, we propose LoRA with Reduced Interference (LoRI). LoRI keeps matrices \\( A \\) fixed as random projections, while training matrices \\( B \\) using task-specific sparse masks. 
To retain the most critical elements of \\( B \\), LoRI performs a calibration process to extract sparse masks by selecting the highest-magnitude elements across all layers and projections. As shown in Figure 1(a), LoRI maintains performance even with \\( 90\\% \\) sparsity in \\( B \\) while keeping \\( A \\) frozen. This demonstrates that adaptation does not require updating \\( A \\), and that \\( B \\) has considerable redundancy. By applying more constrained updates than LoRA, LoRI significantly reduces the number of trainable parameters while better preserving the pretrained model's knowledge during adaptation." + }, + { + "type": "aside_text", + "bbox": [ + 0.023, + 0.282, + 0.061, + 0.717 + ], + "angle": 270, + "content": "arXiv:2504.07448v2 [cs.LG] 2 Aug 2025" + }, + { + "type": "page_footnote", + "bbox": [ + 0.198, + 0.911, + 0.448, + 0.925 + ], + "angle": 0, + "content": "Correspondence to: juzheng@umd.edu." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.504, + 0.96 + ], + "angle": 0, + "content": "1" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "image", + "bbox": [ + 0.189, + 0.105, + 0.818, + 0.252 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.261, + 0.825, + 0.351 + ], + "angle": 0, + "content": "Figure 1: (a) Varying sparsity ratios in matrices \\( B \\) while freezing \\( A \\). Performance remains stable even at \\( 90\\% \\) sparsity in matrices \\( B \\). (b) Merging three adapters via weighted averaging. LoRA suffers degradation due to parameter interference, while LoRI preserves task performance. (c) Continual learning from Safety to NLU. LoRA suffers from catastrophic forgetting, while LoRI retains safety alignment. Results for NLU are averaged over eight tasks. 
GSM8K accuracy (Math), HumanEval pass@10 (Code), and HEx-PHI refusal rate (Safety) are reported individually. Base model: Llama-3-8B, rank \\( r = 32 \\)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.368, + 0.827, + 0.579 + ], + "angle": 0, + "content": "Multi-task learning is essential for enabling versatile models with multi-task capabilities, which is traditionally performed via joint training on a combination of task-specific datasets (Caruana, 1997; Sener & Koltun, 2018). However, training large models on this data mixture is prohibitively expensive in terms of time and compute. Model merging is a training-free alternative for building powerful models by combining existing ones (Ilharco et al., 2022; Yadav et al., 2023; Yu et al., 2024). This approach is well-suited for merging LoRA adapters, enabling multi-task capabilities within a single model during inference (Wang et al., 2024a; Prabhakar et al., 2024; Stoica et al., 2024). However, as shown in Figure 1(b), directly merging heterogeneous LoRAs often results in parameter interference, leading to degraded performance compared to single-task LoRAs. Additionally, many existing merging methods require trial-and-error to identify the optimal method for a specific combination of tasks. LoRI addresses these challenges by using fixed, randomly initialized projection \\( A \\), which maps task-specific adapters into approximately orthogonal subspaces. This reduces interference when merging multiple adapters. In addition, LoRI enables adapter merging without manual selection of merging methods." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.584, + 0.825, + 0.751 + ], + "angle": 0, + "content": "Beyond multi-tasking, safety-critical scenarios require that each newly introduced adapter enhances model capabilities while preserving the safety alignment of the pretrained base model (Qi et al., 2023). 
LoRI provides a lightweight continual learning approach for adapting models while preserving safety, where training is performed sequentially across tasks (Lopez-Paz & Ranzato, 2017; Wu et al., 2022; Ouyang et al., 2022). The strategy involves first fine-tuning an adapter on safety data to establish alignment, followed by separate adaptation to each downstream task. However, as illustrated in Figure 1(c), continual learning often leads to catastrophic forgetting (Li & Hoiem, 2017; Dong et al., 2023; Luo et al., 2023), wherein the adaptation to new tasks substantially compromises previously acquired knowledge. LoRI mitigates forgetting by leveraging the sparsity of projection \\( B \\) through task-specific masks. This isolation of parameter updates across tasks facilitates continual learning with minimal interference, preserving both safety and task effectiveness." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.758, + 0.827, + 0.926 + ], + "angle": 0, + "content": "To evaluate the effectiveness of LoRI, we conduct extensive experiments across a diverse suite of benchmarks spanning natural language understanding (NLU), mathematical reasoning, code generation, and safety alignment tasks. Using Llama-3-8B and Mistral-7B as base models, our results show that LoRI achieves performance comparable to - or better than - full fine-tuning (FFT), LoRA, and other PEFT methods, while using up to \\(95\\%\\) fewer trainable parameters than LoRA. Notably, LoRI with \\(90\\%\\) sparsity in \\(B\\) surpasses LoRA by \\(17.3\\%\\) on HumanEval with Llama-3. Beyond single-task adaptation, we evaluate LoRI in multi-task settings, including adapter merging and continual learning scenarios. Concatenated merging of LoRI adapters consistently outperforms LoRA adapters overall, closely matching the performance of single-task LoRA baseline. 
In continual learning, LoRI significantly outperforms LoRA in mitigating catastrophic forgetting of safety alignment, while maintaining strong performance on downstream tasks." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.948, + 0.506, + 0.96 + ], + "angle": 0, + "content": "2" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "image", + "bbox": [ + 0.182, + 0.103, + 0.382, + 0.226 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.225, + 0.232, + 0.338, + 0.246 + ], + "angle": 0, + "content": "(a) LoRI method." + }, + { + "type": "image", + "bbox": [ + 0.396, + 0.102, + 0.6, + 0.226 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.44, + 0.231, + 0.558, + 0.247 + ], + "angle": 0, + "content": "(b) LoRI merging." + }, + { + "type": "image", + "bbox": [ + 0.608, + 0.102, + 0.825, + 0.227 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.626, + 0.231, + 0.808, + 0.247 + ], + "angle": 0, + "content": "(c) LoRI continual learning." + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.256, + 0.825, + 0.31 + ], + "angle": 0, + "content": "Figure 2: Overview of the proposed LoRI method. (a) LoRI freezes the projection matrices \\(A_{t}\\) and sparsely updates \\(B_{t}\\) using task-specific masks \\(M_{t}\\). (b) LoRI enables adapter merging of multiple task-specific adapters with reduced parameter interference. (c) LoRI builds safety adapters by continual learning with reduced catastrophic forgetting." 
+ }, + { + "type": "title", + "bbox": [ + 0.172, + 0.335, + 0.279, + 0.351 + ], + "angle": 0, + "content": "2 Method" + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.369, + 0.611, + 0.385 + ], + "angle": 0, + "content": "2.1 Freezing Low-Rank Projections with Sparse Masking" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.395, + 0.825, + 0.456 + ], + "angle": 0, + "content": "Freezing Projection \\(A\\). LoRA (Hu et al., 2021) fine-tunes a weight update matrix as a product of two low-rank matrices to adapt LLMs to new tasks. Formally, for a specific task \\(t\\), given a pretrained weight matrix \\(W_0 \\in \\mathbb{R}^{d_{\\mathrm{in}} \\times d_{\\mathrm{out}}}\\), the weight update \\(\\Delta_t \\in \\mathbb{R}^{d_{\\mathrm{in}} \\times d_{\\mathrm{out}}}\\) is constrained to a low-rank decomposition:" + }, + { + "type": "equation", + "bbox": [ + 0.381, + 0.463, + 0.825, + 0.48 + ], + "angle": 0, + "content": "\\[\nh = x W _ {0} + x \\Delta_ {t} = x W _ {0} + x A _ {t} B _ {t}. \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.489, + 0.825, + 0.534 + ], + "angle": 0, + "content": "where \\(A_{t} \\in \\mathbb{R}^{d_{\\mathrm{in}} \\times r}\\), \\(B_{t} \\in \\mathbb{R}^{r \\times d_{\\mathrm{out}}}\\), and \\(r \\ll \\min\\{d_{\\mathrm{in}}, d_{\\mathrm{out}}\\}\\). We denote \\(\\Delta_t\\) as the LoRA adapter for task \\(t\\). In practice, LoRA adapters are typically applied to multiple projection matrices (e.g., \\(W_q, W_v\\)) within each transformer layer." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.539, + 0.827, + 0.653 + ], + "angle": 0, + "content": "Typically, the low-rank projection matrices \\( A_{t} \\) and the low-rank expansion matrices \\( B_{t} \\) are updated via gradient descent. Matrices \\( A_{t} \\) are usually initialized with Kaiming Uniform distribution (He et al., 2015), while matrices \\( B_{t} \\) are initialized to zero, ensuring that \\( \\Delta_{t} = 0 \\) at the start of training. 
However, in LoRI, we fix \\( A_{t} \\) as random projections, meaning that the model only learns how to combine the fixed subspace via \\( B_{t} \\). By freezing \\( A_{t} \\), we eliminate the need to store their gradients and optimizer states, thereby reducing memory consumption. During inference, similar to LoRA, LoRI merges the low-rank updates by adding \\( A_{t}B_{t} \\) to \\( W_{0} \\), ensuring no additional inference latency compared to full fine-tuning." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.668, + 0.827, + 0.797 + ], + "angle": 0, + "content": "Sparse Masking for Projection \\( B \\). LoRI freezes matrices \\( A_{t} \\) and selectively updates only the most relevant parameters in \\( B_{t} \\) for each task, as illustrated in Figure 2(a). For task \\( t \\), it first extracts sparse masks \\( M_{t} \\) through a calibration process, then applies the masks to constrain training to a limited subset of parameters in \\( B_{t} \\). During mask calibration, LoRI updates \\( B_{t} \\) without masking using a calibration dataset \\( \\mathcal{D}_t^C \\), sampled from the adaptation dataset \\( \\mathcal{D}_t \\). After this phase, LoRI collects all \\( B_{t} \\) matrices from the model across layers and projections. Then it computes a global threshold \\( \\tau_t \\), defined as the \\( s\\% \\) quantile of the absolute values of all elements from these matrices, where \\( s \\) is the sparsity ratio. For each matrix \\( B_{t} \\), the corresponding sparse mask \\( M_{t} \\) is computed as:" + }, + { + "type": "equation", + "bbox": [ + 0.297, + 0.806, + 0.825, + 0.832 + ], + "angle": 0, + "content": "\\[\nM _ {t} = \\mathbb {I} \\left(\\left| B _ {t} \\right| \\geq \\tau_ {t}\\right), \\quad \\text {w h e r e} \\quad \\tau_ {t} = \\operatorname {Q u a n t i l e} _ {s} \\left(\\bigcup \\left| B _ {t} \\right|\\right). 
\\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.84, + 0.827, + 0.926 + ], + "angle": 0, + "content": "Here, \\(\\mathbb{I}(\\cdot)\\) denotes the indicator function applied element-wise. This ensures that only the top- \\((1 - s)\\%\\) of parameters (by magnitude) across all layers and projections are retained. The masks can also be derived using gradient-based measures such as the Fisher information matrix (Guo et al., 2023; Iurada et al., 2025) or SNIP score (Lee et al., 2018). However, these methods capture local sensitivity at a specific training step, whereas magnitude reflects cumulative importance over the entire fine-tuning process." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.504, + 0.96 + ], + "angle": 0, + "content": "3" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.104, + 0.825, + 0.175 + ], + "angle": 0, + "content": "It is well established that the importance of projection matrices varies significantly across different layers and projections (Zhang et al., 2023a;d; Kopiczko et al., 2023). Our masking strategy enables global comparison of parameters and facilitates effective allocation of the parameter budget determined by the sparsity ratio. Notably, the masks for each task \\( t \\) are calibrated only once and can be reused as needed." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.181, + 0.827, + 0.309 + ], + "angle": 0, + "content": "After mask calibration, LoRI resets \\( B_{t} \\) to zero and trains on the adaptation dataset \\( \\mathcal{D}_t \\), with updates restricted to the masked parameters. The LoRI adapter is expressed as \\( \\Delta_t = A_t(B_t \\odot M_t) \\). The algorithm of LoRI is detailed in Appendix B. 
In practice, the sparsity ratio \\( s \\) can reach up to 90%, meaning that only a small fraction of parameters in matrices \\( B_{t} \\) are updated, while the majority remain unchanged. This selective adaptation enables the model to focus on modifying the most critical parameters needed for specific tasks, while preserving the foundational knowledge encoded in the pretrained base model. In the limiting case of a single task and zero sparsity, our method reduces to LoRA-FA (Zhang et al., 2023b), which has been shown to perform competitively with standard LoRA." + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.324, + 0.669, + 0.34 + ], + "angle": 0, + "content": "2.2 Reducing Interference in Adapter Merging via Orthogonality" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.35, + 0.825, + 0.435 + ], + "angle": 0, + "content": "Orthogonality of LoRI Adapters. A central challenge in adapter merging is parameter interference, where combining multiple adapters leads to degraded performance due to conflicting parameter updates. Given a set of trained LoRI adapters \\(\\{\\Delta_1,\\Delta_2,\\dots ,\\Delta_T\\}\\), the goal is to construct a unified model that combines knowledge from all tasks with minimal interference, as illustrated in Figure 2(b). 
Formally, we define the excess loss due to parameter interference for a specific task \\(t\\) as:" + }, + { + "type": "equation", + "bbox": [ + 0.371, + 0.439, + 0.825, + 0.457 + ], + "angle": 0, + "content": "\\[\n\\mathcal{I}_{t} = \\mathcal{L}_{t}\\left(W_{\\text{merge}}\\right) - \\mathcal{L}_{t}\\left(W_{0} + \\alpha_{t} \\Delta_{t}\\right), \\tag{3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.46, + 0.825, + 0.504 + ], + "angle": 0, + "content": "where \\( W_{\\mathrm{merge}} \\) is the merged model, \\( W_0 \\) is the pretrained weight matrix, \\( \\Delta_t \\) is the LoRI adapter for task \\( t \\), \\( \\alpha_t \\in \\mathbb{R} \\) is a scalar weight, and \\( \\mathcal{L}_t \\) is the loss function for task \\( t \\). A high \\( \\mathcal{I}_t \\) indicates significant interference." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.509, + 0.825, + 0.553 + ], + "angle": 0, + "content": "LoRI mitigates this interference by leveraging approximate orthogonality, achieved by freezing the projection matrices \\( A_{t} \\) as independent random matrices. This design leads to the following property, whose proof is provided in Appendix C:" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.558, + 0.825, + 0.629 + ], + "angle": 0, + "content": "Property 1. Let \\( A_s, A_t \\in \\mathbb{R}^{d_{in} \\times r} \\) be independent random matrices with i.i.d. entries drawn from a Kaiming Uniform distribution for distinct tasks \\( s \\neq t \\). Let their corresponding LoRI adapters be \\( \\Delta_s = A_s(B_s \\odot M_s) \\) and \\( \\Delta_t = A_t(B_t \\odot M_t) \\), where the trained matrices \\( (B_s \\odot M_s) \\) and \\( (B_t \\odot M_t) \\) have finite Frobenius norms. 
Under the condition that \\( r \\ll d_{in} \\), as the input dimension \\( d_{in} \\to \\infty \\), the adapters are approximately orthogonal:" + }, + { + "type": "equation", + "bbox": [ + 0.394, + 0.633, + 0.825, + 0.651 + ], + "angle": 0, + "content": "\\[\n\\left\\langle \\Delta_{s}, \\Delta_{t} \\right\\rangle_{F} \\rightarrow 0 \\quad \\text{in probability}. \\tag{4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.662, + 0.825, + 0.707 + ], + "angle": 0, + "content": "We describe two merging methods: concatenated merging (weighted averaging) and linear merging (Task Arithmetic) (Ilharco et al., 2022), both of which exploit the approximate orthogonality of LoRIs." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.72, + 0.825, + 0.763 + ], + "angle": 0, + "content": "Concatenated Merging (Weighted Averaging). This method constructs the merged model by creating a weighted sum of individual task adapters. This is achieved by concatenating the weighted \\(A\\) and masked \\(B\\) matrices:" + }, + { + "type": "equation", + "bbox": [ + 0.242, + 0.769, + 0.825, + 0.799 + ], + "angle": 0, + "content": "\\[\nA^{\\prime} = \\left[ \\alpha_{1} A_{1}, \\alpha_{2} A_{2}, \\dots, \\alpha_{T} A_{T} \\right], \\quad B^{\\prime} = \\left[ \\left(B_{1} \\odot M_{1}\\right)^{\\top}, \\dots, \\left(B_{T} \\odot M_{T}\\right)^{\\top} \\right]^{\\top}, \\tag{5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.803, + 0.825, + 0.833 + ], + "angle": 0, + "content": "where \\(\\alpha_{t} \\in \\mathbb{R}\\) are scalar weights (e.g., uniform or task-prioritized). 
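Both Property 1 and the concatenation identity of Equation 5 are easy to sanity-check numerically. In the following numpy sketch (dimensions, seed, and the random stand-ins for trained B matrices are illustrative assumptions), adapters built from independent Kaiming-uniform projections have a near-zero normalized Frobenius inner product, and the concatenated product reproduces the weighted sum of adapters exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, T = 4096, 64, 8, 3

# Frozen projections A_t with Kaiming-uniform entries (bound = sqrt(6 / fan_in)).
bound = np.sqrt(6.0 / d_in)
A = [rng.uniform(-bound, bound, size=(d_in, r)) for _ in range(T)]
# Stand-ins for trained B_t with 90% of entries masked to zero.
B = [rng.normal(size=(r, d_out)) * (rng.random((r, d_out)) < 0.1) for _ in range(T)]
deltas = [A[t] @ B[t] for t in range(T)]

# Property 1: adapters built from independent A_s, A_t are nearly orthogonal.
def cos_frob(X, Y):
    return np.sum(X * Y) / (np.linalg.norm(X) * np.linalg.norm(Y))
print(f"normalized inner product: {cos_frob(deltas[0], deltas[1]):.4f}")  # close to 0

# Concatenation: A' stacks weighted A_t side by side, B' stacks masked B_t
# vertically, and A'B' equals the weighted sum of the individual adapters.
alphas = [0.3, 0.3, 0.3]
A_cat = np.hstack([alphas[t] * A[t] for t in range(T)])
B_cat = np.vstack(B)
assert np.allclose(A_cat @ B_cat, sum(a * d for a, d in zip(alphas, deltas)))
```

The final assert holds as a matter of block-matrix algebra, which is why concatenated merging introduces no cross-terms.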
The final merged model is then formed by adding their product to the base model weights:" + }, + { + "type": "equation", + "bbox": [ + 0.264, + 0.838, + 0.825, + 0.877 + ], + "angle": 0, + "content": "\\[\nW_{\\text{merge}} = W_{0} + A^{\\prime} B^{\\prime} = W_{0} + \\sum_{t = 1}^{T} \\alpha_{t} A_{t} \\left(B_{t} \\odot M_{t}\\right) = W_{0} + \\sum_{t = 1}^{T} \\alpha_{t} \\Delta_{t}. \\tag{6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.882, + 0.825, + 0.926 + ], + "angle": 0, + "content": "By summing approximately orthogonal adapters, we ensure that the updates for each task occupy largely disjoint subspaces, thereby reducing interference (Ilharco et al., 2022; Ortiz-Jimenez et al., 2023; Xiong et al., 2024)." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.505, + 0.96 + ], + "angle": 0, + "content": "4" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.104, + 0.827, + 0.204 + ], + "angle": 0, + "content": "The reduction in interference can be explained by a theoretical sketch based on two key assumptions. The first is the local linearity of the loss landscape (Li et al., 2018), which allows for a first-order Taylor approximation. The second is the gradient alignment assumption, formally expressed as \\(\\nabla \\mathcal{L}_t(W_0 + \\alpha_t\\Delta_t)\\propto \\Delta_t\\). This posits that at a task's solution, the direction of steepest descent is primarily aligned with the adapter updates already made for that task. 
Under these assumptions, the excess loss \\(\\mathcal{I}_t\\) is approximately the inner product of the gradient and the updates from the other tasks:" + }, + { + "type": "equation", + "bbox": [ + 0.303, + 0.211, + 0.825, + 0.254 + ], + "angle": 0, + "content": "\\[\n\\mathcal{I}_{t} \\approx \\left\\langle \\nabla \\mathcal{L}_{t}\\left(W_{0} + \\alpha_{t} \\Delta_{t}\\right), \\sum_{s \\neq t} \\alpha_{s} \\Delta_{s} \\right\\rangle_{F} \\propto \\sum_{s \\neq t} \\alpha_{s} \\left\\langle \\Delta_{t}, \\Delta_{s} \\right\\rangle_{F}. \\tag{7}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.26, + 0.825, + 0.306 + ], + "angle": 0, + "content": "Since Property 1 establishes that \\(\\langle \\Delta_t, \\Delta_s \\rangle_F \\to 0\\) for \\(s \\neq t\\), the total interference loss becomes negligible: \\(\\mathcal{I}_t \\approx 0\\). This heuristic argument provides strong intuition for why concatenated merging is effective, which is then validated by our empirical results." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.32, + 0.825, + 0.35 + ], + "angle": 0, + "content": "Linear Merging (Task Arithmetic). Alternatively, the merged model can be formed by summing the \\( A_{t} \\) and masked \\( B_{t} \\) matrices independently before multiplication:" + }, + { + "type": "equation", + "bbox": [ + 0.21, + 0.356, + 0.825, + 0.398 + ], + "angle": 0, + "content": "\\[\nW_{\\text{merge}} = W_{0} + \\left(\\sum_{t = 1}^{T} \\alpha_{t} A_{t}\\right) \\left(\\sum_{t = 1}^{T} \\alpha_{t} \\left(B_{t} \\odot M_{t}\\right)\\right) = W_{0} + \\sum_{s = 1}^{T} \\sum_{t = 1}^{T} \\alpha_{s} \\alpha_{t} A_{s} \\left(B_{t} \\odot M_{t}\\right). 
\\tag {8}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.405, + 0.825, + 0.48 + ], + "angle": 0, + "content": "While concatenated merging directly sums approximately orthogonal adapters, this linear merging approach introduces problematic cross-terms \\(\\alpha_{s}\\alpha_{t}A_{s}(B_{t}\\odot M_{t})\\) for \\(s\\neq t\\). These terms cause interference because components like \\(\\{A_s(B_t\\odot M_t)\\}_{t = 1}^T\\) for a fixed \\(s\\) are generally not mutually orthogonal. As a result, concatenated merging offers a cleaner and empirically more effective strategy for combining LoRI adapters." + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.495, + 0.643, + 0.511 + ], + "angle": 0, + "content": "2.3 Reducing Interference in Continual Learning via Sparsity" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.521, + 0.827, + 0.634 + ], + "angle": 0, + "content": "Safety-Preserving Adapters. For safety-critical applications, ensuring that new task adaptations do not compromise established safety behaviors is crucial. Therefore, each newly introduced adapter must preserve the base model's safety alignment. A straightforward approach to achieve this is to merge a safety LoRI adapter into the deployed model during every inference. However, as we will show in Section 3.4, this method may be insufficient for scenarios that demand strong safety guarantees. In such cases, as illustrated in Figure 2(c), a more reliable solution is to adopt a two-phase continual learning process for each LoRI adapter to reinforce safety:" + }, + { + "type": "text", + "bbox": [ + 0.192, + 0.644, + 0.825, + 0.679 + ], + "angle": 0, + "content": "1. Safety Alignment Phase: Train a LoRI adapter on a curated safety dataset \\( \\mathcal{D}_{\\text{safety}} \\), yielding \\( \\Delta_{\\text{safety}} = A(B_{\\text{safety}} \\odot M_{\\text{safety}}) \\)." + }, + { + "type": "text", + "bbox": [ + 0.191, + 0.682, + 0.825, + 0.729 + ], + "angle": 0, + "content": "2. 
Task Adaptation Phase: Fine-tune \\(\\Delta_{\\mathrm{safety}}\\) on each task adaptation dataset \\(\\mathcal{D}_t, t = 1, 2, \\ldots, T\\), reusing the calibrated task-specific masks \\(M_t\\), resulting in safety-preserving adapters \\(\\Delta_t = A(B_t \\odot M_t)\\)." + }, + { + "type": "list", + "bbox": [ + 0.191, + 0.644, + 0.825, + 0.729 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.738, + 0.827, + 0.868 + ], + "angle": 0, + "content": "This method does not require recalibrating masks for each task or performing multiple rounds of continual learning. Notably, we do not enforce non-overlapping masks \\( M_t \\cap M_{\\text{safety}} = \\emptyset \\). Enforcing such a constraint would require recalibrating masks after the safety alignment phase due to the reduced parameter space, and could potentially degrade performance on downstream tasks. The expected overlap between sparse masks with \\( 90\\% \\) sparsity is theoretically \\( 1\\% \\). Empirically, we find that this expectation holds: the average overlap between task-specific masks is indeed \\( \\sim 1\\% \\), without explicitly enforcing non-overlap. This slight overlap allows important parameters to be shared across tasks, potentially enabling positive knowledge transfer." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.882, + 0.825, + 0.927 + ], + "angle": 0, + "content": "Catastrophic Forgetting. Continual learning models are vulnerable to catastrophic forgetting (Li & Hoiem, 2017; Dong et al., 2023; Luo et al., 2023), where updates for new tasks can overwrite and degrade previously learned knowledge. 
Despite the slight overlap between" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.948, + 0.505, + 0.96 + ], + "angle": 0, + "content": "5" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.104, + 0.825, + 0.149 + ], + "angle": 0, + "content": "task-specific masks, the sparsity in \\( B_{t} \\) induced by \\( M_{t} \\) enables LoRI to facilitate isolated parameter updates for safety alignment and task adaptation. As a result, LoRI minimizes cross-task interference and mitigates catastrophic forgetting in safety alignment." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.172, + 0.32, + 0.189 + ], + "angle": 0, + "content": "3 Experiments" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.207, + 0.363, + 0.223 + ], + "angle": 0, + "content": "3.1 Experimental Setup" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.235, + 0.827, + 0.43 + ], + "angle": 0, + "content": "Datasets. We conduct a series of experiments to evaluate LoRI's effectiveness on single-task and multi-task settings, including adapter merging and continual learning. We focus on four capabilities: (i) Natural Language Understanding (NLU): LoRI is trained on the aggregation of eight NLU datasets (Hu et al., 2023), including BoolQ (Clark et al., 2019), PIQA (Bisk et al., 2020), SocialIQA (Sap et al., 2019), ARC-Challenge (Clark et al., 2018), ARC-Easy (Clark et al., 2018), OpenBookQA (Mihaylov et al., 2018), HellaSwag (Zellers et al., 2019), and Winogrande (Sakaguchi et al., 2021). We evaluate accuracy on the individual test split for each dataset. (ii) Mathematical Reasoning (Math): LoRI is trained on the GSM8K (Cobbe et al., 2021) training split and evaluated on the GSM8K test split. 
(iii) Code Generation (Code): LoRI is trained on CodeAlpaca (Chaudhary, 2023) and evaluated using pass@1, pass@5, and pass@10 on HumanEval (Chen et al., 2021). (iv) Safety Alignment (Safety): LoRI is trained on Saferpaca (Bianchi et al., 2023), which extends Alpaca-Cleaned (Taori et al., 2023) with 2,000 safety instructions. Safety performance is assessed by measuring the refusal rate on harmful queries from HEx-PHI (Qi et al., 2023)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.45, + 0.828, + 0.633 + ], + "angle": 0, + "content": "Baselines. In single-task experiments, we compare LoRI with full fine-tuning (FFT), LoRA (Hu et al., 2021), and DoRA (Liu et al., 2024). Results for additional PEFT baselines, including VeRA (Kopiczko et al., 2023), IA3 (Liu et al., 2022), LoRA-FA (Zhang et al., 2023b), AdaLoRA (Zhang et al., 2023d), rsLoRA (Kalajdzievski, 2023), PiSSA (Meng et al., 2024), and LoRA+ (Hayou et al., 2024), are available in Appendix E.1. In merging experiments, we compare LoRI merging with several LoRA merging methods, including concatenated merging, linear merging (Ilharco et al., 2022), magnitude pruning, TIES-Merging (Yadav et al., 2023), and DARE (Yu et al., 2024). Magnitude pruning, TIES, and DARE are pruning-based approaches that apply sparsification to the \\( A \\) and \\( B \\) matrices before merging, based on a specified density. Magnitude pruning removes low-magnitude parameters; TIES-Merging further merges weights with consistent signs; and DARE performs random pruning followed by rescaling. For fair comparison, all baseline results are reproduced using a consistent experimental setup." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.651, + 0.827, + 0.806 + ], + "angle": 0, + "content": "Implementation Details. We use Llama-3-8B (Grattafiori et al., 2024) and Mistral-7B (Jiang et al., 2023) as base models. We conduct all experiments on 8 NVIDIA A5000 GPUs. 
To explore the impact of sparsity, we provide two variants of LoRI: LoRI-D, which uses dense \\( B \\) matrices, and LoRI-S, which applies \\( 90\\% \\) sparsity to \\( B \\). Sparsity is implemented by masking the gradients of \\( B \\) during backpropagation. For optimal performance, we use the entire adaptation dataset as the calibration dataset for each task. Ablation results for calibration are presented in Section 3.5. For consistency, we use the same hyperparameters for PEFT baselines as for LoRI-D. For all adapter merging experiments, uniform weights \\( \\alpha_{t} \\) are employed across all adapters. The weights \\( \\alpha_{t} \\) are treated as hyperparameters, and their ablation study is detailed in Section 3.5. Detailed hyperparameter settings are provided in Appendix D." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.827, + 0.403, + 0.843 + ], + "angle": 0, + "content": "3.2 Single-Task Performance" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.854, + 0.825, + 0.927 + ], + "angle": 0, + "content": "Table 1 presents single-task performance on eight NLU benchmarks, while Table 2 reports single-task performance on the math, code, and safety benchmarks. Results for additional PEFT baselines are available in Appendix E.1. The rank for our experiments is set to \\( r = 32 \\). We observed stable performance across different ranks, with additional results for \\( r = 64 \\) provided in Appendix E.2." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.506, + 0.96 + ], + "angle": 0, + "content": "6" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.101, + 0.825, + 0.144 + ], + "angle": 0, + "content": "Table 1: Performance comparison of different adaptation methods on eight NLU benchmarks using Llama-3 and Mistral with \\( r = 32 \\). 
**Bold** indicates the best-performing method, and **underline** indicates the second-best." + }, + { + "type": "table", + "bbox": [ + 0.175, + 0.155, + 0.825, + 0.321 + ], + "angle": 0, + "content": "
<table>
<tr><td>Method</td><td># Params (%)</td><td>BoolQ</td><td>PIQA</td><td>SIQA</td><td>ARC-c</td><td>ARC-e</td><td>OBQA</td><td>HellaS</td><td>WinoG</td><td>Avg.</td></tr>
<tr><td>Llama-3-8B</td></tr>
<tr><td>FFT</td><td>8.03G (100%)</td><td>73.8</td><td>86.8</td><td>77.6</td><td>76.7</td><td>87.6</td><td>84.1</td><td>93.2</td><td>85.1</td><td>83.1</td></tr>
<tr><td>LoRA</td><td>84M (1.03%)</td><td>76.3</td><td>89.8</td><td>82.7</td><td>83.4</td><td>91.7</td><td>88.4</td><td>95.8</td><td>88.7</td><td>87.1</td></tr>
<tr><td>DoRA</td><td>85M (1.05%)</td><td>75.9</td><td>89.8</td><td>82.7</td><td>83.5</td><td>93.2</td><td>87.9</td><td>95.3</td><td>88.2</td><td>87.1</td></tr>
<tr><td>LoRI-D</td><td>44M (0.54%)</td><td>76.4</td><td>89.0</td><td>82.7</td><td>84.2</td><td>93.6</td><td>88.5</td><td>95.9</td><td>87.9</td><td>87.3</td></tr>
<tr><td>LoRI-S</td><td>4.4M (0.05%)</td><td>75.2</td><td>89.2</td><td>82.8</td><td>83.8</td><td>92.6</td><td>88.4</td><td>95.2</td><td>87.5</td><td>86.8</td></tr>
<tr><td>Mistral-7B</td></tr>
<tr><td>FFT</td><td>7.24G (100%)</td><td>74.1</td><td>84.6</td><td>78.0</td><td>79.3</td><td>90.5</td><td>88.4</td><td>94.4</td><td>83.5</td><td>84.1</td></tr>
<tr><td>LoRA</td><td>84M (1.15%)</td><td>75.2</td><td>90.1</td><td>82.9</td><td>82.9</td><td>92.0</td><td>88.7</td><td>95.1</td><td>88.1</td><td>86.9</td></tr>
<tr><td>DoRA</td><td>85M (1.16%)</td><td>75.8</td><td>90.4</td><td>82.9</td><td>83.3</td><td>92.6</td><td>90.6</td><td>96.3</td><td>87.9</td><td>87.5</td></tr>
<tr><td>LoRI-D</td><td>44M (0.60%)</td><td>75.9</td><td>90.6</td><td>83.0</td><td>83.6</td><td>91.9</td><td>88.4</td><td>95.9</td><td>87.4</td><td>87.1</td></tr>
<tr><td>LoRI-S</td><td>4.4M (0.06%)</td><td>74.0</td><td>90.1</td><td>82.6</td><td>82.6</td><td>91.5</td><td>90.8</td><td>95.5</td><td>87.5</td><td>86.8</td></tr>
</table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.355, + 0.825, + 0.4 + ], + "angle": 0, + "content": "Table 2: Performance comparison of different adaptation methods on GSM8K (math), HumanEval (code), and HEx-PHI (safety) benchmarks using Llama-3 and Mistral with \\( r = 32 \\). Bold indicates the best-performing method, and underline indicates the second-best." + }, + { + "type": "table", + "bbox": [ + 0.234, + 0.41, + 0.769, + 0.605 + ], + "angle": 0, + "content": "
<table>
<tr><td>Method</td><td># Params (%)</td><td>GSM8K</td><td>HumanEval Pass@1</td><td>Pass@5</td><td>Pass@10</td><td>HEx-PHI</td></tr>
<tr><td>Llama-3-8B</td></tr>
<tr><td>FFT</td><td>8.03G (100%)</td><td>58.8</td><td>30.5</td><td>39.3</td><td>41.7</td><td>94.8</td></tr>
<tr><td>LoRA</td><td>84M (1.03%)</td><td>64.4</td><td>34.7</td><td>46.4</td><td>50.8</td><td>91.6</td></tr>
<tr><td>DoRA</td><td>85M (1.05%)</td><td>65.4</td><td>33.1</td><td>44.0</td><td>48.6</td><td>93.6</td></tr>
<tr><td>LoRI-D</td><td>44M (0.54%)</td><td>63.2</td><td>43.2</td><td>57.6</td><td>63.2</td><td>92.8</td></tr>
<tr><td>LoRI-S</td><td>4.4M (0.05%)</td><td>62.7</td><td>41.3</td><td>54.4</td><td>59.6</td><td>93.8</td></tr>
<tr><td>Mistral-7B</td></tr>
<tr><td>FFT</td><td>7.24G (100%)</td><td>55.5</td><td>29.1</td><td>38.5</td><td>40.4</td><td>94.1</td></tr>
<tr><td>LoRA</td><td>84M (1.15%)</td><td>57.8</td><td>33.8</td><td>42.4</td><td>45.3</td><td>91.9</td></tr>
<tr><td>DoRA</td><td>85M (1.16%)</td><td>57.5</td><td>33.7</td><td>42.6</td><td>46.8</td><td>95.3</td></tr>
<tr><td>LoRI-D</td><td>44M (0.60%)</td><td>58.0</td><td>33.8</td><td>42.0</td><td>45.1</td><td>94.7</td></tr>
<tr><td>LoRI-S</td><td>4.4M (0.06%)</td><td>57.1</td><td>33.7</td><td>43.6</td><td>48.1</td><td>95.9</td></tr>
</table>
" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.652, + 0.827, + 0.806 + ], + "angle": 0, + "content": "While full fine-tuning (FFT) updates all model parameters, LoRA and DoRA reduce the number of trainable parameters to approximately \\(1\\%\\). LoRI-D further reduces this to about \\(0.5\\%\\) by freezing matrices \\(A\\), and LoRI-S pushes this reduction to \\(0.05\\%\\) by applying \\(90\\%\\) sparsity to matrices \\(B\\), achieving a \\(95\\%\\) reduction in trainable parameters compared to LoRA. Despite tuning fewer parameters, LoRI-D and LoRI-S achieve performance comparable to - and even better than - LoRA and DoRA on NLU, math, code, and safety tasks. LoRI-D generally outperforms LoRI-S slightly, due to the extremely limited parameter budget in LoRI-S. Remarkably, LoRI-D and LoRI-S consistently outperform FFT, LoRA, and DoRA on code generation tasks. On HumanEval with Llama-3, LoRI-D achieves a pass@10 score of \\(63.2\\%\\), outperforming LoRA by \\(24.4\\%\\). LoRI-S achieves \\(59.6\\%\\) pass@10, exceeding LoRA by \\(17.3\\%\\)." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.812, + 0.827, + 0.926 + ], + "angle": 0, + "content": "The strong performance of LoRI-D suggests that effective adaptation can be achieved without updating \\( A \\), while the strong performance of LoRI-S indicates that \\( B \\) contains substantial parameter redundancy. LoRI's performance gains are attributed to the principled use of sparsity, which serves as a strong regularizer during adaptation. Additionally, LoRI preserves latent task-specific knowledge embedded in the pretrained model. This supports the view that supervised fine-tuning (SFT) primarily unlocks capabilities already present in pretrained models, rather than introducing new ones, which is consistent with findings from Liu et al. (2024); Yu et al. (2024)." 
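The parameter-count ratios quoted above follow from simple shape arithmetic. A quick check for a single square projection (the dimension and rank here are illustrative; real totals sum over all adapted projections):

```python
# Trainable-parameter arithmetic for one d x d projection at rank r.
d, r, sparsity = 4096, 32, 0.9

lora = r * (d + d)                # LoRA trains A (d x r) and B (r x d)
lori_d = r * d                    # LoRI-D: A is frozen, only B is trained
lori_s = lori_d * (1 - sparsity)  # LoRI-S: 90% of B's entries are masked

print(f"LoRI-D / LoRA: {lori_d / lora:.2f}")  # 0.50
print(f"LoRI-S / LoRA: {lori_s / lora:.2f}")  # 0.05 -> the 95% reduction
```

For square projections this halving is exact, consistent with LoRA's ~1% of model parameters dropping to ~0.5% for LoRI-D and ~0.05% for LoRI-S.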
+ }, + { + "type": "page_number", + "bbox": [ + 0.493, + 0.948, + 0.506, + 0.96 + ], + "angle": 0, + "content": "7" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "table_caption", + "bbox": [ + 0.17, + 0.101, + 0.827, + 0.171 + ], + "angle": 0, + "content": "Table 3: Comparison of merging methods for combining four adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Llama-3-8B, rank \\( r = 32 \\). **Bold** indicates the best-performing method, and **underline** indicates the second-best." + }, + { + "type": "table", + "bbox": [ + 0.204, + 0.184, + 0.799, + 0.354 + ], + "angle": 0, + "content": "
<table>
<tr><td>Merging</td><td>Adaptation</td><td>NLU</td><td>GSM8K</td><td>HumanEval Pass@1</td><td>Pass@5</td><td>Pass@10</td><td>HEx-PHI</td></tr>
<tr><td>Single-Task</td><td>LoRI-D</td><td>87.3</td><td>63.2</td><td>43.2</td><td>57.6</td><td>63.2</td><td>92.8</td></tr>
<tr><td>Concat</td><td>LoRA</td><td>85.0</td><td>57.8</td><td>13.0</td><td>20.0</td><td>22.3</td><td>84.4</td></tr>
<tr><td>Linear</td><td>LoRA</td><td>84.8</td><td>54.1</td><td>14.2</td><td>20.8</td><td>23.3</td><td>79.4</td></tr>
<tr><td>Magnitude</td><td>LoRA</td><td>81.9</td><td>50.3</td><td>24.1</td><td>36.7</td><td>42.4</td><td>74.4</td></tr>
<tr><td>TIES</td><td>LoRA</td><td>72.6</td><td>24.0</td><td>32.5</td><td>46.3</td><td>51.7</td><td>77.8</td></tr>
<tr><td>DARE</td><td>LoRA</td><td>79.1</td><td>48.9</td><td>34.1</td><td>48.7</td><td>53.5</td><td>74.1</td></tr>
<tr><td>Concat</td><td>LoRI-D</td><td>83.2</td><td>55.8</td><td>40.5</td><td>56.9</td><td>62.2</td><td>86.6</td></tr>
<tr><td>Linear</td><td>LoRI-D</td><td>82.5</td><td>53.8</td><td>40.9</td><td>54.9</td><td>60.3</td><td>85.9</td></tr>
<tr><td>Concat</td><td>LoRI-S</td><td>81.2</td><td>45.2</td><td>34.3</td><td>48.7</td><td>54.0</td><td>84.7</td></tr>
<tr><td>Linear</td><td>LoRI-S</td><td>79.1</td><td>41.3</td><td>23.2</td><td>36.6</td><td>42.3</td><td>78.8</td></tr>
</table>
" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.381, + 0.345, + 0.399 + ], + "angle": 0, + "content": "3.3 Adapter Merging" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.409, + 0.827, + 0.537 + ], + "angle": 0, + "content": "We consider four heterogeneous tasks for LoRA and LoRI merging: NLU, math, code, and safety. This setting is generally more challenging than merging homogeneous adapters, such as merging multiple NLU adapters. Table 3 presents results for merging LoRAs and LoRIs on these four tasks. For LoRI, we apply concatenated and linear merging to the LoRI-D and LoRI-S variants. Pruning-based methods such as magnitude pruning, TIES, and DARE are not applied to LoRI, since these methods would also prune the \\(A\\) matrices while LoRI already sparsifies \\(B\\), resulting in an inconsistent pruning scheme across \\(A\\) and \\(B\\). Additional results, including experiments on merging three adapters and evaluations of pruning-based methods on LoRI, are provided in Appendix E.4 and E.5." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.541, + 0.825, + 0.599 + ], + "angle": 0, + "content": "As shown in Table 3, directly merging LoRAs results in substantial performance degradation, particularly for code generation and safety alignment. Although pruning-based methods (e.g., DARE, TIES) improve code performance, they often compromise accuracy on other tasks. In contrast, LoRI achieves consistently strong performance across all tasks." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.604, + 0.825, + 0.689 + ], + "angle": 0, + "content": "Concatenated merging with LoRI-D achieves the best overall performance, closely matching the single-task baseline, which indicates minimal interference between LoRI adapters. For instance, it achieves \\(62.2\\%\\) pass@10 on HumanEval and an \\(86.6\\%\\) refusal rate on HEx-PHI. Despite using only \\(5\\%\\) of the parameters of LoRA, LoRI-S retains competitive performance. 
Notably, on code and safety tasks, concatenated merging with LoRI-S outperforms all LoRA merging methods." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.694, + 0.827, + 0.794 + ], + "angle": 0, + "content": "Linear merging with LoRI also performs competitively, though it lags slightly behind concatenated merging due to cross-term interactions that introduce some interference. LoRI eliminates the need for manual selection of merging methods: simple concatenated merging yields strong results. The choice between LoRI-D and LoRI-S can then be guided by the desired trade-off between performance and parameter efficiency. We also note an important trade-off between code generation performance and other domains during adapter merging, a phenomenon further explored in Section 3.5." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.813, + 0.363, + 0.83 + ], + "angle": 0, + "content": "3.4 Continual Learning" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.84, + 0.827, + 0.926 + ], + "angle": 0, + "content": "While merging adapters enables multi-task capabilities, it falls short of providing robust safety alignment in scenarios that demand strong safety guarantees. As shown in Table 3, the highest refusal rate on HEx-PHI achieved through LoRA or LoRI merging is \\(86.6\\%\\). To address this limitation, we adopt a two-phase training process: first, a safety adapter is trained on the safety alignment dataset Saferpaca; then, it is individually adapted to each downstream task, including NLU, math, and code." 
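The isolation behind this two-phase scheme can be illustrated with mask bookkeeping alone. In the following numpy sketch, the random masks and random "updates" are stand-ins for actual calibration and training; the point is that entries selected only by the safety mask are never touched during the task phase:

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (64, 256)
M_safety = rng.random(shape) < 0.1   # mask used during safety alignment
M_task = rng.random(shape) < 0.1     # calibrated task mask reused in phase 2

# Phase 1: safety training writes only to positions selected by M_safety.
B = rng.normal(size=shape) * M_safety
B_safety = B.copy()

# Phase 2: task adaptation updates only positions selected by M_task.
B = B + rng.normal(size=shape) * M_task

# Entries selected only by the safety mask are untouched by task adaptation,
# which is how sparse, mostly disjoint masks limit forgetting of alignment.
safety_only = M_safety & ~M_task
print(np.array_equal(B[safety_only], B_safety[safety_only]))  # True
print(f"mask overlap: {np.mean(M_safety & M_task):.3f}")      # about 0.01
```

With 90% sparsity, only about 1% of positions are shared between the two masks, so the overwhelming majority of safety-phase updates survive task adaptation unchanged.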
+ }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.506, + 0.96 + ], + "angle": 0, + "content": "8" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "image", + "bbox": [ + 0.183, + 0.105, + 0.825, + 0.253 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.263, + 0.825, + 0.303 + ], + "angle": 0, + "content": "Figure 3: Continual learning results from safety to NLU, math, and code domains. Results for NLU are averaged over eight tasks. GSM8K accuracy, HumanEval pass@10, and HEx-PHI refusal rate are reported individually. Base model: Llama-3-8B, rank \\( r = 32 \\)." + }, + { + "type": "image", + "bbox": [ + 0.201, + 0.322, + 0.486, + 0.457 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.25, + 0.465, + 0.439, + 0.48 + ], + "angle": 0, + "content": "(a) Effect of calibration steps." + }, + { + "type": "image", + "bbox": [ + 0.505, + 0.319, + 0.797, + 0.457 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.494, + 0.465, + 0.8, + 0.48 + ], + "angle": 0, + "content": "(b) Sparsity ratios across layers and projections." + }, + { + "type": "image", + "bbox": [ + 0.191, + 0.481, + 0.476, + 0.614 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.231, + 0.621, + 0.434, + 0.636 + ], + "angle": 0, + "content": "(c) Effect of mask granularities." + }, + { + "type": "image", + "bbox": [ + 0.521, + 0.481, + 0.81, + 0.614 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.567, + 0.621, + 0.762, + 0.637 + ], + "angle": 0, + "content": "(d) Effect of merging weights." + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.647, + 0.825, + 0.674 + ], + "angle": 0, + "content": "Figure 4: Ablation studies across different settings. 
Base model: Llama-3-8B, rank \\( r = 32 \\). Additional ablation studies are provided in Appendix F." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.699, + 0.827, + 0.852 + ], + "angle": 0, + "content": "Figure 3 presents results from these continual learning experiments. LoRA exhibits severe catastrophic forgetting on safety alignment – particularly in the safety \\(\\rightarrow\\) NLU experiment – likely due to the large size of the NLU training split (\\(\\sim 170\\mathrm{k}\\) examples). Among all methods, LoRI-S achieves the best preservation of safety alignment, even outperforming single-task LoRI-D. This is due to its \\(90\\%\\) sparsity in the \\(B\\) matrices, which enables isolated parameter updates between the initial safety alignment and subsequent task adaptations. LoRI-D also shows some resistance to forgetting, benefiting from frozen \\(A\\) matrices. For task adaptation, LoRI-D generally outperforms LoRI-S, as the latter's aggressive sparsity limits its adaptation capacity. Overall, LoRI offers a lightweight and effective approach to building safety adapters that preserve alignment while supporting adaptation to downstream tasks." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.87, + 0.341, + 0.884 + ], + "angle": 0, + "content": "3.5 Ablation Studies" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.896, + 0.825, + 0.926 + ], + "angle": 0, + "content": "Calibration Steps. Calibration steps refer to the number of update steps used to generate sparse masks for each task. 
Figure 4(a) shows how the performance of LoRI-S changes with" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.504, + 0.959 + ], + "angle": 0, + "content": "9" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.104, + 0.825, + 0.162 + ], + "angle": 0, + "content": "different numbers of calibration steps on math and code tasks. We observe that performance generally improves as the number of calibration steps increases. Since the masks only need to be calibrated once per task and can be reused, we use the entire adaptation dataset as the calibration dataset to achieve the best performance." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.176, + 0.827, + 0.277 + ], + "angle": 0, + "content": "Sparsity Ratio. In our experiments, we use model-wise masks that retain the highest-magnitude parameters across all layers and projections. Figure 4(b) presents the sparsity ratios of different projection types (e.g., up, down, key, value) across layers under \\(90\\%\\) sparsity on GSM8K. We observe that feedforward (FFN) projections tend to retain more parameters (i.e., lower sparsity) than self-attention projections, indicating they are more critical for adaptation. Additionally, the top layers are less sparse than lower layers, suggesting that the top layers play a more important role in adaptation." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.289, + 0.828, + 0.389 + ], + "angle": 0, + "content": "Mask Granularity. We compare five levels of mask granularity under \\(90\\%\\) sparsity on GSM8K, as shown in Figure 4(c): module-wise, projection-wise, layer-wise, and matrix-wise masking against our model-wise masking, where parameters are selected within progressively smaller scopes. 
We find that coarse-grained masking (e.g., model-wise) yields the best performance, while fine-grained masking (e.g., matrix-wise) results in degradation. This suggests that global magnitude-based selection enables better parameter allocation, as the importance of projection matrices varies across the model." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.402, + 0.827, + 0.489 + ], + "angle": 0, + "content": "Merging Weights. We adopt uniform weights across all adapters for adapter merging, rather than task-specific weights, as we do not wish to prioritize any individual task. Figure 4(d) shows the effect of different merging weights (0.2, 0.3, 0.4) for concatenated merging with LoRI-S. We observe that LoRI is moderately sensitive to merging weights, with a noticeable trade-off between performance on code tasks and other domains. We adopt 0.3 for all adapters in LoRI-S merging, as it offers a balanced performance across domains." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.507, + 0.31, + 0.524 + ], + "angle": 0, + "content": "4 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.538, + 0.827, + 0.68 + ], + "angle": 0, + "content": "In this work, we introduced LoRI, a simple yet effective approach to parameter-efficient fine-tuning (PEFT) that substantially reduces trainable parameters while minimizing cross-task interference. By freezing the projection matrices \\(A\\) as random projections and sparsifying \\(B\\) using task-specific masks, LoRI achieves strong single-task performance across diverse domains – including natural language understanding, mathematical reasoning, code generation, and safety alignment – while reducing trainable parameters by up to \\(95\\%\\) compared to LoRA. Furthermore, LoRI enables training-free adapter merging with minimal performance degradation, and supports continual learning with significantly reduced catastrophic forgetting. 
It also provides a lightweight approach to building safety adapters that preserve the safety alignment of the base model." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.693, + 0.827, + 0.78 + ], + "angle": 0, + "content": "Future Work. We identify several promising avenues for extending this work. While LoRI currently leverages unstructured magnitude-based sparsity, future research can explore structured sparsity patterns – such as block sparsity, head pruning, or group-wise masking – which may offer better hardware compatibility. Additionally, although this study focuses on LLMs, the core design of LoRI is modality-agnostic. Extending LoRI to diffusion and vision-language models for multi-modal generation is a promising direction." + }, + { + "type": "title", + "bbox": [ + 0.173, + 0.798, + 0.358, + 0.817 + ], + "angle": 0, + "content": "Acknowledgements" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.83, + 0.826, + 0.888 + ], + "angle": 0, + "content": "This material is based upon work partially supported by the NSF Grant No. 2229885 (NSF Institute for Trustworthy AI in Law and Society, TRAILS). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.511, + 0.961 + ], + "angle": 0, + "content": "10" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "title", + "bbox": [ + 0.174, + 0.103, + 0.275, + 0.118 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.126, + 0.826, + 0.17 + ], + "angle": 0, + "content": "Federico Bianchi, Mirac Suzgun, Giuseppe Attanasio, Paul Röttger, Dan Jurafsky, Tatsunori Hashimoto, and James Zou. 
Safety-tuned llamas: Lessons from improving the safety of large language models that follow instructions. arXiv preprint arXiv:2309.07875, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.176, + 0.826, + 0.22 + ], + "angle": 0, + "content": "Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. Piqa: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 7432-7439, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.226, + 0.826, + 0.283 + ], + "angle": 0, + "content": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33: 1877-1901, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.29, + 0.661, + 0.308 + ], + "angle": 0, + "content": "Rich Caruana. Multitask learning. Machine learning, 28:41-75, 1997." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.313, + 0.825, + 0.342 + ], + "angle": 0, + "content": "Sahil Chaudhary. Code alpaca: An instruction-following llama model for code generation, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.35, + 0.825, + 0.394 + ], + "angle": 0, + "content": "Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.401, + 0.825, + 0.457 + ], + "angle": 0, + "content": "Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1-113, 2023." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.465, + 0.825, + 0.508 + ], + "angle": 0, + "content": "Alexandra Chronopoulou, Matthew E Peters, Alexander Fraser, and Jesse Dodge. Adaptersoup: Weight averaging to improve generalization of pretrained language models. arXiv preprint arXiv:2302.07027, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.515, + 0.825, + 0.558 + ], + "angle": 0, + "content": "Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. Boolq: Exploring the surprising difficulty of natural yes/no questions. arXiv preprint arXiv:1905.10044, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.566, + 0.825, + 0.609 + ], + "angle": 0, + "content": "Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.616, + 0.825, + 0.66 + ], + "angle": 0, + "content": "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.666, + 0.825, + 0.71 + ], + "angle": 0, + "content": "Ning Ding, Xingtai Lv, Qiaosen Wang, Yulin Chen, Bowen Zhou, Zhiyuan Liu, and Maosong Sun. Sparse low-rank adaptation of pre-trained language models. arXiv preprint arXiv:2311.11696, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.717, + 0.825, + 0.773 + ], + "angle": 0, + "content": "Guanting Dong, Hongyi Yuan, Keming Lu, Chengpeng Li, Mingfeng Xue, Dayiheng Liu, Wei Wang, Zheng Yuan, Chang Zhou, and Jingren Zhou. How abilities in large language models are affected by supervised fine-tuning data composition. 
arXiv preprint arXiv:2310.05492, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.781, + 0.825, + 0.824 + ], + "angle": 0, + "content": "Ziqi Gao, Qichao Wang, Aochuan Chen, Zijing Liu, Bingzhe Wu, Liang Chen, and Jia Li. Parameter-efficient fine-tuning with discrete fourier transform. arXiv preprint arXiv:2405.03003, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.832, + 0.825, + 0.875 + ], + "angle": 0, + "content": "Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.882, + 0.825, + 0.925 + ], + "angle": 0, + "content": "Han Guo, Philip Greengard, Eric P Xing, and Yoon Kim. Lq-lora: Low-rank plus quantized matrix decomposition for efficient language model finetuning. arXiv preprint arXiv:2311.12023, 2023." + }, + { + "type": "list", + "bbox": [ + 0.173, + 0.126, + 0.826, + 0.925 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.508, + 0.96 + ], + "angle": 0, + "content": "11" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.103, + 0.826, + 0.133 + ], + "angle": 0, + "content": "Soufiane Hayou, Nikhil Ghosh, and Bin Yu. Lora+: Efficient low rank adaptation of large models. arXiv preprint arXiv:2402.12354, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.141, + 0.826, + 0.186 + ], + "angle": 0, + "content": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026-1034, 2015." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.192, + 0.826, + 0.248 + ], + "angle": 0, + "content": "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In International conference on machine learning, pp. 2790-2799. PMLR, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.258, + 0.826, + 0.302 + ], + "angle": 0, + "content": "Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.309, + 0.826, + 0.354 + ], + "angle": 0, + "content": "Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, and Roy Ka-Wei Lee. Llm-adapters: An adapter family for parameter-efficient fine-tuning of large language models. arXiv preprint arXiv:2304.01933, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.361, + 0.826, + 0.404 + ], + "angle": 0, + "content": "Chengsong Huang, Qian Liu, Bill Yuchen Lin, Tianyu Pang, Chao Du, and Min Lin. Lorahub: Efficient cross-task generalization via dynamic lora composition. arXiv preprint arXiv:2307.13269, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.412, + 0.826, + 0.455 + ], + "angle": 0, + "content": "Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. Editing models with task arithmetic. arXiv preprint arXiv:2212.04089, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.464, + 0.826, + 0.494 + ], + "angle": 0, + "content": "Leonardo Iurada, Marco Ciccone, and Tatiana Tommasi. Efficient model editing with task-localized sparse fine-tuning. arXiv preprint arXiv:2504.02620, 2025." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.501, + 0.826, + 0.545 + ], + "angle": 0, + "content": "Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.552, + 0.826, + 0.583 + ], + "angle": 0, + "content": "Damjan Kalajdzievski. A rank stabilization scaling factor for fine-tuning with lora. arXiv preprint arXiv:2312.03732, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.59, + 0.826, + 0.634 + ], + "angle": 0, + "content": "Tatsuya Konishi, Mori Kurokawa, Chihiro Ono, Zixuan Ke, Gyuhak Kim, and Bing Liu. Parameter-level soft-masking for continual learning. In International Conference on Machine Learning, pp. 17492-17505. PMLR, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.641, + 0.826, + 0.672 + ], + "angle": 0, + "content": "Dawid J Kopiczko, Tijmen Blankevoort, and Yuki M Asano. Vera: Vector-based random matrix adaptation. arXiv preprint arXiv:2310.11454, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.679, + 0.826, + 0.711 + ], + "angle": 0, + "content": "Namhoon Lee, Thalaiyasingam Ajanthan, and Philip HS Torr. Snip: Single-shot network pruning based on connection sensitivity. arXiv preprint arXiv:1810.02340, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.717, + 0.826, + 0.748 + ], + "angle": 0, + "content": "Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.755, + 0.826, + 0.785 + ], + "angle": 0, + "content": "Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. Advances in neural information processing systems, 31, 2018." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.793, + 0.826, + 0.822 + ], + "angle": 0, + "content": "Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.83, + 0.826, + 0.86 + ], + "angle": 0, + "content": "Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12):2935-2947, 2017." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.867, + 0.826, + 0.926 + ], + "angle": 0, + "content": "Zujie Liang, Feng Wei, Yin Jie, Yuxi Qian, Zhenghong Hao, and Bing Han. Prompts can play lottery tickets well: Achieving lifelong information extraction via lottery prompt tuning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 277-292, 2023." + }, + { + "type": "list", + "bbox": [ + 0.173, + 0.103, + 0.826, + 0.926 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.51, + 0.96 + ], + "angle": 0, + "content": "12" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.103, + 0.826, + 0.148 + ], + "angle": 0, + "content": "Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin A Raffel. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. Advances in Neural Information Processing Systems, 35:1950-1965, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.155, + 0.825, + 0.201 + ], + "angle": 0, + "content": "Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, and Min-Hung Chen. Dora: Weight-decomposed low-rank adaptation. 
In Forty-first International Conference on Machine Learning, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.209, + 0.825, + 0.254 + ], + "angle": 0, + "content": "Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. arXiv preprint arXiv:2110.07602, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.261, + 0.825, + 0.294 + ], + "angle": 0, + "content": "David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. Advances in neural information processing systems, 30, 2017." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.3, + 0.825, + 0.344 + ], + "angle": 0, + "content": "Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, and Yue Zhang. An empirical study of catastrophic forgetting in large language models during continual fine-tuning. arXiv preprint arXiv:2308.08747, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.353, + 0.825, + 0.398 + ], + "angle": 0, + "content": "Arun Mallya and Svetlana Lazebnik. Packnet: Adding multiple tasks to a single network by iterative pruning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 7765-7773, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.406, + 0.824, + 0.438 + ], + "angle": 0, + "content": "Michael S Matena and Colin A Raffel. Merging models with fisher-weighted averaging. Advances in Neural Information Processing Systems, 35:17703-17716, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.445, + 0.825, + 0.489 + ], + "angle": 0, + "content": "Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of learning and motivation, volume 24, pp. 109-165. Elsevier, 1989." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.498, + 0.825, + 0.542 + ], + "angle": 0, + "content": "Fanxu Meng, Zhaohui Wang, and Muhan Zhang. Pissa: Principal singular values and singular vectors adaptation of large language models. Advances in Neural Information Processing Systems, 37:121038-121072, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.55, + 0.825, + 0.594 + ], + "angle": 0, + "content": "Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. arXiv preprint arXiv:1809.02789, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.603, + 0.825, + 0.647 + ], + "angle": 0, + "content": "Guillermo Ortiz-Jimenez, Alessandro Favero, and Pascal Frossard. Task arithmetic in the tangent space: Improved editing of pre-trained models. Advances in Neural Information Processing Systems, 36:66727-66754, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.656, + 0.825, + 0.714 + ], + "angle": 0, + "content": "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.723, + 0.825, + 0.767 + ], + "angle": 0, + "content": "Ashwinee Panda, Berivan Isik, Xiangyu Qi, Sanmi Koyejo, Tsachy Weissman, and Prateek Mittal. Lottery ticket adaptation: Mitigating destructive interference in llms. arXiv preprint arXiv:2406.16797, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.775, + 0.825, + 0.819 + ], + "angle": 0, + "content": "Jonas Pfeiffer, Aishwarya Kamath, Andreas Rückle, Kyunghyun Cho, and Iryna Gurevych. Adapterfusion: Non-destructive task composition for transfer learning. arXiv preprint arXiv:2005.00247, 2020." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.828, + 0.825, + 0.872 + ], + "angle": 0, + "content": "Akshara Prabhakar, Yuanzhi Li, Karthik Narasimhan, Sham Kakade, Eran Malach, and Samy Jelassi. Lora soups: Merging loras for practical skill composition tasks. arXiv preprint arXiv:2410.13025, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.881, + 0.825, + 0.925 + ], + "angle": 0, + "content": "Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson. Fine-tuning aligned language models compromises safety, even when users do not intend to! arXiv preprint arXiv:2310.03693, 2023." + }, + { + "type": "list", + "bbox": [ + 0.173, + 0.103, + 0.826, + 0.925 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "13" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.103, + 0.826, + 0.147 + ], + "angle": 0, + "content": "Vinay Venkatesh Ramasesh, Aitor Lewkowycz, and Ethan Dyer. Effect of scale on catastrophic forgetting in neural networks. In International Conference on Learning Representations, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.155, + 0.826, + 0.199 + ], + "angle": 0, + "content": "David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy Lillicrap, and Gregory Wayne. Experience replay for continual learning. Advances in neural information processing systems, 32, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.207, + 0.826, + 0.251 + ], + "angle": 0, + "content": "Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.259, + 0.826, + 0.303 + ], + "angle": 0, + "content": "Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. Communications of the ACM, 64(9): 99-106, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.311, + 0.826, + 0.342 + ], + "angle": 0, + "content": "Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. Socialiqa: Commonsense reasoning about social interactions. arXiv preprint arXiv:1904.09728, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.35, + 0.826, + 0.381 + ], + "angle": 0, + "content": "Ozan Sener and Vladlen Koltun. Multi-task learning as multi-objective optimization. Advances in neural information processing systems, 31, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.388, + 0.826, + 0.419 + ], + "angle": 0, + "content": "Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. Advances in neural information processing systems, 30, 2017." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.426, + 0.826, + 0.457 + ], + "angle": 0, + "content": "George Stoica, Pratik Ramesh, Boglarka Ecsedi, Leshem Choshen, and Judy Hoffman. Model merging with svd to tie the knots. arXiv preprint arXiv:2410.19735, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.465, + 0.826, + 0.508 + ], + "angle": 0, + "content": "Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Stanford alpaca: An instruction-following llama model, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.516, + 0.826, + 0.56 + ], + "angle": 0, + "content": "Chunlin Tian, Zhan Shi, Zhijiang Guo, Li Li, and Cheng-Zhong Xu. Hydralora: An asymmetric lora architecture for efficient fine-tuning. Advances in Neural Information Processing Systems, 37:9565-9584, 2024." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.569, + 0.826, + 0.624 + ], + "angle": 0, + "content": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.634, + 0.826, + 0.679 + ], + "angle": 0, + "content": "Hanqing Wang, Bowen Ping, Shuo Wang, Xu Han, Yun Chen, Zhiyuan Liu, and Maosong Sun. Lora-flow: Dynamic lora fusion for large language models in generative tasks. arXiv preprint arXiv:2402.11455, 2024a." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.687, + 0.826, + 0.731 + ], + "angle": 0, + "content": "Liyuan Wang, Xingxing Zhang, Hang Su, and Jun Zhu. A comprehensive survey of continual learning: theory, method and application. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024b." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.739, + 0.826, + 0.783 + ], + "angle": 0, + "content": "Xiao Wang, Tianze Chen, Qiming Ge, Han Xia, Rong Bao, Rui Zheng, Qi Zhang, Tao Gui, and Xuanjing Huang. Orthogonal subspace learning for language model continual learning. arXiv preprint arXiv:2310.14152, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.791, + 0.826, + 0.835 + ], + "angle": 0, + "content": "Tongtong Wu, Massimo Caccia, Zhuang Li, Yuan Fang Li, Guilin Qi, and Gholamreza Haffari. Pretrained language model in continual learning: A comparative study. In International Conference on Learning Representations 2022. OpenReview, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.843, + 0.826, + 0.873 + ], + "angle": 0, + "content": "Xun Wu, Shaohan Huang, and Furu Wei. Mixture of lora experts. arXiv preprint arXiv:2404.13628, 2024." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.881, + 0.826, + 0.925 + ], + "angle": 0, + "content": "Feng Xiong, Runxi Cheng, Wang Chen, Zhanqiu Zhang, Yiwen Guo, Chun Yuan, and Ruifeng Xu. Multi-task model merging via adaptive weight disentanglement. arXiv preprint arXiv:2411.18729, 2024." + }, + { + "type": "list", + "bbox": [ + 0.173, + 0.103, + 0.826, + 0.925 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.51, + 0.96 + ], + "angle": 0, + "content": "14" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.103, + 0.826, + 0.147 + ], + "angle": 0, + "content": "Prateek Yadav, Derek Tam, Leshem Choshen, Colin A Raffel, and Mohit Bansal. Ties-merging: Resolving interference when merging models. Advances in Neural Information Processing Systems, 36:7093-7115, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.155, + 0.825, + 0.199 + ], + "angle": 0, + "content": "Le Yu, Bowen Yu, Haiyang Yu, Fei Huang, and Yongbin Li. Language models are super mario: Absorbing abilities from homologous models as a free lunch. In Forty-first International Conference on Machine Learning, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.207, + 0.825, + 0.237 + ], + "angle": 0, + "content": "Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.244, + 0.824, + 0.288 + ], + "angle": 0, + "content": "Feiyu Zhang, Liangzhi Li, Junhao Chen, Zhouqiang Jiang, Bowen Wang, and Yiming Qian. Increlora: Incremental parameter allocation method for parameter-efficient fine-tuning. arXiv preprint arXiv:2308.12043, 2023a." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.296, + 0.825, + 0.338 + ], + "angle": 0, + "content": "Longteng Zhang, Lin Zhang, Shaohuai Shi, Xiaowen Chu, and Bo Li. Lora-fa: Memory-efficient low-rank adaptation for large language models fine-tuning. arXiv preprint arXiv:2308.03303, 2023b." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.348, + 0.825, + 0.391 + ], + "angle": 0, + "content": "Mingyang Zhang, Hao Chen, Chunhua Shen, Zhen Yang, Linlin Ou, Xinyi Yu, and Bohan Zhuang. Loraprune: Pruning meets low-rank parameter-efficient fine-tuning. arXiv preprint arXiv:2305.18403, 2023c." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.399, + 0.825, + 0.443 + ], + "angle": 0, + "content": "Qingru Zhang, Minshuo Chen, Alexander Bukharin, Nikos Karampatziakis, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao. Adalora: Adaptive budget allocation for parameter-efficient fine-tuning. arXiv preprint arXiv:2303.10512, 2023d." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.451, + 0.824, + 0.493 + ], + "angle": 0, + "content": "Hongyun Zhou, Xiangyu Lu, Wang Xu, Conghui Zhu, Tiejun Zhao, and Muyun Yang. Lora-drop: Efficient lora parameter pruning based on output evaluation. arXiv preprint arXiv:2402.07721, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.502, + 0.825, + 0.56 + ], + "angle": 0, + "content": "Jiacheng Zhu, Kristjan Greenewald, Kimia Nadjahi, Haitz Sáez De Ocariz Borde, Rickard Brüel Gabrielsson, Leshem Choshen, Marzyeh Ghassemi, Mikhail Yurochkin, and Justin Solomon. Asymmetry in low-rank adapters of foundation models. arXiv preprint arXiv:2402.16842, 2024." 
+ }, + { + "type": "list", + "bbox": [ + 0.173, + 0.103, + 0.826, + 0.56 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "15" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.102, + 0.343, + 0.118 + ], + "angle": 0, + "content": "A Related Works" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.135, + 0.827, + 0.373 + ], + "angle": 0, + "content": "Parameter-Efficient Fine-Tuning. Parameter-efficient fine-tuning (PEFT) methods for LLMs (Houlsby et al., 2019; Pfeiffer et al., 2020; Li & Liang, 2021; Lester et al., 2021; Liu et al., 2021; Hu et al., 2021) have received increasing attention in recent years. Among them, LoRA (Hu et al., 2021), which introduces trainable low-rank matrices, has become one of the most widely adopted PEFT methods due to its strong performance and efficiency. LoRI is motivated by reducing parameter redundancy in LoRA through an asymmetric design: we freeze the projection matrices \\( A \\) and enforce sparsity on the matrices \\( B \\). Our work is closely related to several lines of research. In terms of parameter efficiency, our goal is shared by methods such as IA3 (Liu et al., 2022), VeRA (Kopiczko et al., 2023), and FourierFT (Gao et al., 2024). More specifically, our approach builds on the concept of asymmetric LoRA variants, which has been explored in works like LoRA-FA (Zhang et al., 2023b), AsymmetryLoRA (Zhu et al., 2024), and HydraLoRA (Tian et al., 2024). However, LoRI is distinct from these works by uniquely combining frozen \\( A \\) with sparsely updated \\( B \\). 
This targeted, asymmetric pruning of only the \\( B \\) matrices also differentiates our method from general LoRA pruning techniques like Loraprune (Zhang et al., 2023c), LoRADrop (Zhou et al., 2024), and SoRA (Ding et al., 2023), as well as SVD-based approaches such as AdaLoRA (Zhang et al., 2023d) and PiSSA (Meng et al., 2024)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.39, + 0.828, + 0.628 + ], + "angle": 0, + "content": "Model Merging. Achieving multi-task capabilities typically involves training on a mixture of diverse task datasets (Caruana, 1997; Sener & Koltun, 2018), which is often prohibitively expensive in time and compute. As an alternative, model merging has gained attention for combining multiple task-specific models into a single model (Matena & Raffel, 2022; Ilharco et al., 2022; Yadav et al., 2023; Yu et al., 2024). Fisher Merging (Matena & Raffel, 2022) uses weights from the Fisher information matrix to combine parameters, while Task Arithmetic (Ilharco et al., 2022) employs predefined scaling factors. TIES-Merging (Yadav et al., 2023) prunes low-magnitude parameters and merges those with consistent signs, and DARE (Yu et al., 2024) applies random pruning with rescaling. However, identifying the optimal merging method often requires trial and error. More recently, there has been growing interest in merging task-specific LoRA adapters (Chronopoulou et al., 2023; Huang et al., 2023; Wu et al., 2024; Wang et al., 2024a; Prabhakar et al., 2024; Stoica et al., 2024), often utilizing Mixture-of-Experts (MoE) architectures. Nonetheless, these methods typically require additional training to coordinate the adapters effectively. In contrast, LoRI eliminates the need for manual selection of merging methods or additional training. By ensuring approximate orthogonality between adapters, LoRI minimizes interference and preserves task-specific performance." 
+ }, + { + "type": "text", + "bbox": [ + 0.171, + 0.644, + 0.827, + 0.854 + ], + "angle": 0, + "content": "Catastrophic Forgetting. Catastrophic forgetting is a fundamental challenge in continual learning (McCloskey & Cohen, 1989; Ramasesh et al., 2021; Liang et al., 2023; Wang et al., 2024b), where neural networks struggle to retain previously learned knowledge when adapting to new tasks. Wu et al. (2022) analyzed this phenomenon using layer-wise and task-wise probing to assess knowledge retention across tasks. Several studies (Dong et al., 2023; Luo et al., 2023) have empirically examined catastrophic forgetting in the continual fine-tuning of LLMs. To mitigate catastrophic forgetting, various approaches have been proposed. Rehearsal-based methods (Rolnick et al., 2019; Shin et al., 2017) store or generate past data to reinforce prior knowledge during training. Parameter isolation methods (Rusu et al., 2016; Mallya & Lazebnik, 2018; Konishi et al., 2023; Panda et al., 2024) allocate separate subnetworks or sparsely mask parameters for different tasks to prevent interference. Additionally, O-LoRA (Wang et al., 2023) learns tasks in distinct low-rank subspaces while ensuring orthogonality between them. LoRI falls under the category of parameter isolation methods, leveraging sparse task-specific masks to mitigate catastrophic forgetting during continual learning." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.876, + 0.377, + 0.894 + ], + "angle": 0, + "content": "B Algorithm of LoRI" + }, + { + "type": "text", + "bbox": [ + 0.172, + 0.909, + 0.593, + 0.926 + ], + "angle": 0, + "content": "The full procedure of LoRI is summarized in Algorithm 1." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.51, + 0.96 + ], + "angle": 0, + "content": "16" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "code_caption", + "bbox": [ + 0.174, + 0.119, + 0.564, + 0.136 + ], + "angle": 0, + "content": "Algorithm 1: LoRA with Reduced Interference (LoRI)" + }, + { + "type": "algorithm", + "bbox": [ + 0.174, + 0.141, + 0.826, + 0.511 + ], + "angle": 0, + "content": "Require: Task \\(t\\) , mask calibration dataset \\(\\mathcal{D}_t^C\\) , adaptation dataset \\(\\mathcal{D}_t\\) , sparsity ratio \\(s\\) , model \\(f\\) loss function \\(\\mathcal{L}_t\\) , learning rate \\(\\eta_t\\) \n1: for each layer \\(l = 1,\\ldots ,L\\) do \n2: for each projection \\(m = 1,\\dots ,M\\) do \n3: Initialize: \\(A_{t}^{(l,m)}\\in \\mathbb{R}^{d_{\\mathrm{in}}\\times r}\\leftarrow \\mathcal{U}(-\\sqrt{\\frac{3}{d_{\\mathrm{in}}}},\\sqrt{\\frac{3}{d_{\\mathrm{in}}}}),B_{t}^{(l,m)}\\in \\mathbb{R}^{r\\times d_{\\mathrm{out}}}\\leftarrow 0\\) \n4: end for \n5: end for \n6: for each batch \\((x,y)\\) sampled from \\(\\mathcal{D}_t^C\\) do ▷ Calibration steps \n7: for each \\((l,m)\\) do \n8: \\(B_{t}^{(l,m)}\\gets B_{t}^{(l,m)} - \\eta_{t}\\cdot \\nabla_{B_{t}^{(l,m)}}\\mathcal{L}_{t}(f(x,y;B_{t}^{(l,m)}))\\) \n9: end for \n10: end for \n11: \\(\\tau_t\\gets \\mathrm{Quantile}_s\\left(\\bigcup_{l,m}|B_t^{(l,m)}|\\right)\\) ▷ Compute global threshold \\(\\tau_t\\) \n12: for each \\((l,m)\\) do \n13: \\(M_t^{(l,m)}\\gets \\mathbb{I}\\left(|B_t^{(l,m)}|\\geq \\tau_t\\right)\\) ▷ Generate mask for top- \\((1 - s)\\%\\) entries \n14: \\(B_{t}^{(l,m)}\\gets 0\\) ▷ Reset to zero before adaptation \n15: end for \n16: for each batch \\((x,y)\\) sampled from \\(\\mathcal{D}_t\\) do ▷ Adaptation steps \n17: for each \\((l,m)\\) do \n18: \\(B_{t}^{(l,m)}\\gets B_{t}^{(l,m)} - \\eta_{t}\\cdot 
\\left(\\nabla_{B_{t}^{(l,m)}}\\mathcal{L}_{t}(f(x,y;B_{t}^{(l,m)}))\\odot M_{t}^{(l,m)}\\right)\\) \n19: end for \n20: end for" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.537, + 0.379, + 0.556 + ], + "angle": 0, + "content": "C Proof of Property 1" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.57, + 0.825, + 0.603 + ], + "angle": 0, + "content": "Proof. Our goal is to show that the Frobenius inner product \\(\\langle \\Delta_s, \\Delta_t \\rangle_F\\) converges to zero in probability. Let \\(\\tilde{B}_s = B_s \\odot M_s\\) and \\(\\tilde{B}_t = B_t \\odot M_t\\). The inner product is given by:" + }, + { + "type": "equation", + "bbox": [ + 0.351, + 0.606, + 0.825, + 0.627 + ], + "angle": 0, + "content": "\\[\n\\left\\langle \\Delta_ {s}, \\Delta_ {t} \\right\\rangle_ {F} = \\operatorname {T r} \\left(\\Delta_ {s} ^ {\\top} \\Delta_ {t}\\right) = \\operatorname {T r} \\left(\\tilde {B} _ {s} ^ {\\top} A _ {s} ^ {\\top} A _ {t} \\tilde {B} _ {t}\\right). \\tag {9}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.632, + 0.825, + 0.663 + ], + "angle": 0, + "content": "We will prove this by showing that the random matrix \\( X = A_{s}^{\\top}A_{t} \\) converges to the zero matrix in probability as \\( d_{\\mathrm{in}} \\to \\infty \\)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.669, + 0.826, + 0.738 + ], + "angle": 0, + "content": "Let \\( a_{s}^{k}, a_{t}^{l} \\in \\mathbb{R}^{d_{\\mathrm{in}}} \\) be the \\( k \\)-th and \\( l \\)-th columns of \\( A_{s} \\) and \\( A_{t} \\), respectively. The entries of these vectors are i.i.d. from a Kaiming Uniform distribution \\( U[-a, a] \\) where \\( a = \\sqrt{3 / d_{\\mathrm{in}}} \\). This implies a mean of 0 and variance of \\( \\sigma^2 = a^2 / 3 = 1 / d_{\\mathrm{in}} \\). An entry of \\( X \\) is the inner product \\( X_{kl} = (a_{s}^{k})^{\\top} a_{t}^{l} = \\sum_{i=1}^{d_{\\mathrm{in}}} (A_{s})_{ik} (A_{t})_{il} \\)." 
+ }, + { + "type": "text", + "bbox": [ + 0.171, + 0.743, + 0.826, + 0.793 + ], + "angle": 0, + "content": "Let \\( Z_{i} = (A_{s})_{ik}(A_{t})_{il} \\). The terms \\( Z_{i} \\) are i.i.d. with \\( \\mathbb{E}[Z_i] = \\mathbb{E}[(A_s)_{ik}]\\mathbb{E}[(A_t)_{il}] = 0 \\). Each term is bounded: \\( |Z_{i}| \\leq a^{2} = 3 / d_{\\mathrm{in}} \\). We apply Hoeffding's inequality to the sum \\( \\sum_{i=1}^{d_{\\mathrm{in}}} Z_{i} \\), where each term lies in \\( [-3 / d_{\\mathrm{in}}, 3 / d_{\\mathrm{in}}] \\):" + }, + { + "type": "equation", + "bbox": [ + 0.2, + 0.798, + 0.826, + 0.842 + ], + "angle": 0, + "content": "\\[\n\\mathbb {P} \\left(\\left| X _ {k l} \\right| \\geq t\\right) = \\mathbb {P} \\left(\\left| \\sum_ {i = 1} ^ {d _ {\\mathrm {i n}}} Z _ {i} \\right| \\geq t\\right) \\leq 2 \\exp \\left(\\frac {- 2 t ^ {2}}{\\sum_ {i = 1} ^ {d _ {\\mathrm {i n}}} (6 / d _ {\\mathrm {i n}}) ^ {2}}\\right) = 2 \\exp \\left(\\frac {- t ^ {2} d _ {\\mathrm {i n}}}{1 8}\\right). \\tag {10}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.853, + 0.825, + 0.883 + ], + "angle": 0, + "content": "We now bound the probability that any of the \\( r^2 \\) entries of \\( X \\) exceeds a threshold \\( t \\) using the union bound:" + }, + { + "type": "equation", + "bbox": [ + 0.183, + 0.886, + 0.826, + 0.929 + ], + "angle": 0, + "content": "\\[\n\\mathbb {P} \\left(\\max _ {k, l} | X _ {k l} | \\geq t\\right) = \\mathbb {P} \\left(\\bigcup_ {k, l = 1} ^ {r} \\{| X _ {k l} | \\geq t \\}\\right) \\leq \\sum_ {k, l = 1} ^ {r} \\mathbb {P} \\left(| X _ {k l} | \\geq t\\right) \\leq 2 r ^ {2} \\exp \\left(\\frac {- t ^ {2} d _ {\\mathrm {i n}}}{1 8}\\right). 
\\tag {11}\n\\]" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.51, + 0.96 + ], + "angle": 0, + "content": "17" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "table_caption", + "bbox": [ + 0.279, + 0.101, + 0.719, + 0.117 + ], + "angle": 0, + "content": "Table 4: Hyperparameter settings for LoRI on NLU datasets." + }, + { + "type": "table", + "bbox": [ + 0.177, + 0.127, + 0.825, + 0.291 + ], + "angle": 0, + "content": "
MethodLoRI-DLoRI-SLoRI-DLoRI-SLoRI-DLoRI-SLoRI-DLoRI-S
Base ModelLlama-3Llama-3Llama-3Llama-3MistralMistralMistralMistral
Rank r3232646432326464
α64641281286464128128
Sparsity Ratio00.900.900.900.9
Learning Rate5e-55e-45e-51e-41e-51e-41e-51e-4
Dropout0.05
OptimizerAdamW
Batch size32
Warmup Steps0
Epochs1
Whereq, k, v, o, gate, up, down
" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.319, + 0.825, + 0.351 + ], + "angle": 0, + "content": "We can now show that \\( \\| X \\|_F \\) is small with high probability. Let the failure probability be \\( \\delta \\). By setting the bound from the previous step to \\( \\delta \\), we can solve for \\( t \\):" + }, + { + "type": "equation", + "bbox": [ + 0.317, + 0.363, + 0.826, + 0.403 + ], + "angle": 0, + "content": "\\[\n\\delta = 2 r ^ {2} \\exp \\left(\\frac {- t ^ {2} d _ {\\mathrm {i n}}}{1 8}\\right) \\Longrightarrow t = \\sqrt {\\frac {1 8 \\log \\left(2 r ^ {2} / \\delta\\right)}{d _ {\\mathrm {i n}}}}. \\tag {12}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.415, + 0.827, + 0.445 + ], + "angle": 0, + "content": "With probability at least \\(1 - \\delta\\), we have \\(\\max_{k,l} |X_{kl}| \\leq t\\). This allows us to bound the Frobenius norm of \\(X\\):" + }, + { + "type": "equation", + "bbox": [ + 0.34, + 0.456, + 0.826, + 0.492 + ], + "angle": 0, + "content": "\\[\n\\left\\| X \\right\\| _ {F} ^ {2} = \\sum_ {k, l = 1} ^ {r} \\left| X _ {k l} \\right| ^ {2} \\leq r ^ {2} \\left(\\max _ {k, l} \\left| X _ {k l} \\right|\\right) ^ {2} \\leq r ^ {2} t ^ {2}. \\tag {13}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.172, + 0.505, + 0.438, + 0.521 + ], + "angle": 0, + "content": "Thus, with probability at least \\(1 - \\delta\\):" + }, + { + "type": "equation", + "bbox": [ + 0.315, + 0.534, + 0.826, + 0.577 + ], + "angle": 0, + "content": "\\[\n\\| X \\| _ {F} \\leq r \\cdot t = r \\sqrt {\\frac {1 8 \\log (2 r ^ {2} / \\delta)}{d _ {\\mathrm {i n}}}} = O \\left(r \\sqrt {\\frac {\\log r}{d _ {\\mathrm {i n}}}}\\right). \\tag {14}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.589, + 0.825, + 0.619 + ], + "angle": 0, + "content": "Since \\( r \\ll d_{\\mathrm{in}} \\), the term \\( \\| X \\|_F \\to 0 \\) as \\( d_{\\mathrm{in}} \\to \\infty \\). This shows that \\( X \\) converges to the zero matrix in probability." 
+ }, + { + "type": "text", + "bbox": [ + 0.171, + 0.624, + 0.826, + 0.666 + ], + "angle": 0, + "content": "Finally, we bound the magnitude of the original inner product using the Cauchy-Schwarz inequality for the Frobenius inner product and the sub-multiplicative property of the Frobenius norm:" + }, + { + "type": "equation", + "bbox": [ + 0.348, + 0.671, + 0.825, + 0.73 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\left| \\left\\langle \\Delta_ {s}, \\Delta_ {t} \\right\\rangle_ {F} \\right| = \\left| \\operatorname {T r} \\left(\\tilde {B} _ {s} ^ {\\top} X \\tilde {B} _ {t}\\right) \\right| = \\left| \\left\\langle \\tilde {B} _ {s}, X \\tilde {B} _ {t} \\right\\rangle_ {F} \\right| \\\\ \\leq \\left\\| \\tilde {B} _ {s} \\right\\| _ {F} \\| X \\tilde {B} _ {t} \\| _ {F} \\tag {15} \\\\ \\leq \\| \\tilde {B} _ {s} \\| _ {F} \\| X \\| _ {F} \\| \\tilde {B} _ {t} \\| _ {F}. \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.742, + 0.827, + 0.786 + ], + "angle": 0, + "content": "The norms \\(\\| \\tilde{B}_s\\| _F\\) and \\(\\| \\tilde{B}_t\\| _F\\) are finite, as determined by the trained adapters. Since we have shown that \\(\\| X\\| _F\\to 0\\) in probability, the entire expression must also converge to 0 in probability." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.811, + 0.438, + 0.83 + ], + "angle": 0, + "content": "D Hyperparameter Settings" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.846, + 0.825, + 0.89 + ], + "angle": 0, + "content": "We summarize the hyperparameter settings used for LoRI in Tables 4, 5, 6, and 7. These include settings for different tasks (NLU, math, code, safety), adapter variants (LoRI-D, LoRI-S), base models (Llama-3-8B and Mistral-7B), and ranks (32 and 64)." 
+ }, + { + "type": "text", + "bbox": [ + 0.171, + 0.896, + 0.826, + 0.926 + ], + "angle": 0, + "content": "For the merging experiments, the hyperparameter settings for merging four adapters are provided in Tables 8 and 9, while those for merging three adapters are provided in Table 10." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "18" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "table_caption", + "bbox": [ + 0.239, + 0.104, + 0.759, + 0.121 + ], + "angle": 0, + "content": "Table 5: Hyperparameter settings for LoRI on the math dataset GSM8K." + }, + { + "type": "table", + "bbox": [ + 0.178, + 0.13, + 0.825, + 0.294 + ], + "angle": 0, + "content": "
MethodLoRI-DLoRI-SLoRI-DLoRI-SLoRI-DLoRI-SLoRI-DLoRI-S
Base ModelLlama-3Llama-3Llama-3Llama-3MistralMistralMistralMistral
Rank r3232646432326464
α646412812864643264
Sparsity Ratio00.900.900.900.9
Learning Rate5e-55e-45e-51e-35e-55e-41e-45e-4
Dropout0.05
OptimizerAdamW
Batch size32
Warmup Steps0
Epochs3
Whereq, k, v, o, gate, up, down
" + }, + { + "type": "table_caption", + "bbox": [ + 0.223, + 0.307, + 0.773, + 0.324 + ], + "angle": 0, + "content": "Table 6: Hyperparameter settings for LoRI on the code dataset CodeAlpaca." + }, + { + "type": "table", + "bbox": [ + 0.178, + 0.333, + 0.825, + 0.497 + ], + "angle": 0, + "content": "
MethodLoRI-DLoRI-SLoRI-DLoRI-SLoRI-DLoRI-SLoRI-DLoRI-S
Base ModelLlama-3Llama-3Llama-3Llama-3MistralMistralMistralMistral
Rank r3232646432326464
α64641281286464128128
Sparsity Ratio00.900.900.900.9
Learning Rate5e-55e-41e-51e-45e-55e-41e-51e-4
Dropout0.05
OptimizerAdamW
Batch size32
Warmup Steps0
Epochs2
Whereq, k, v, o, gate, up, down
" + }, + { + "type": "table_caption", + "bbox": [ + 0.229, + 0.51, + 0.767, + 0.527 + ], + "angle": 0, + "content": "Table 7: Hyperparameter settings for LoRI on the safety dataset Saferpaca." + }, + { + "type": "table", + "bbox": [ + 0.178, + 0.536, + 0.825, + 0.699 + ], + "angle": 0, + "content": "
MethodLoRI-DLoRI-SLoRI-DLoRI-SLoRI-DLoRI-SLoRI-DLoRI-S
Base ModelLlama-3Llama-3Llama-3Llama-3MistralMistralMistralMistral
Rank r3232646432326464
α64641281286464128128
Sparsity Ratio00.900.900.900.9
Learning Rate5e-55e-41e-51e-45e-55e-41e-51e-4
Dropout0.05
OptimizerAdamW
Batch size32
Warmup Steps0
Epochs1
Whereq, k, v, o, gate, up, down
" + }, + { + "type": "table_caption", + "bbox": [ + 0.212, + 0.713, + 0.783, + 0.73 + ], + "angle": 0, + "content": "Table 8: Hyperparameter settings for merging four adapters using Llama-3-8B." + }, + { + "type": "table", + "bbox": [ + 0.178, + 0.738, + 0.825, + 0.807 + ], + "angle": 0, + "content": "
Adaptation MergingLoRA ConcatLoRA LinearLoRA MagnitudeLoRA TIESLoRA DARELoRI-D ConcatLoRI-D LinearLoRI-S ConcatLoRI-S Linear
Base ModelLlama-3Llama-3Llama-3Llama-3Llama-3Llama-3Llama-3Llama-3Llama-3
Weights0.40.40.40.40.40.40.40.30.3
Density--0.30.70.7----
" + }, + { + "type": "table_caption", + "bbox": [ + 0.217, + 0.82, + 0.78, + 0.837 + ], + "angle": 0, + "content": "Table 9: Hyperparameter settings for merging four adapters using Mistral-7B." + }, + { + "type": "table", + "bbox": [ + 0.178, + 0.846, + 0.825, + 0.919 + ], + "angle": 0, + "content": "
Adaptation MergingLoRA ConcatLoRA LinearLoRA MagnitudeLoRA TIESLoRA DARELoRI-D ConcatLoRI-D LinearLoRI-S ConcatLoRI-S Linear
Base ModelMistralMistralMistralMistralMistralMistralMistralMistralMistral
Weights0.40.40.40.40.40.40.40.30.3
Density--0.30.70.7----
" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.51, + 0.96 + ], + "angle": 0, + "content": "19" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "table_caption", + "bbox": [ + 0.206, + 0.101, + 0.791, + 0.117 + ], + "angle": 0, + "content": "Table 10: Hyperparameter settings for merging three adapters using Llama-3-8B." + }, + { + "type": "table", + "bbox": [ + 0.177, + 0.127, + 0.825, + 0.196 + ], + "angle": 0, + "content": "
Adaptation MergingLoRA ConcatLoRA LinearLoRA MagnitudeLoRA TIESLoRA DARELoRI-D ConcatLoRI-D LinearLoRI-S ConcatLoRI-S Linear
Base ModelLlama-3Llama-3Llama-3Llama-3Llama-3Llama-3Llama-3Llama-3Llama-3
Weights0.50.50.50.50.50.50.50.40.4
Density--0.30.70.7----
" + }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.218, + 0.825, + 0.261 + ], + "angle": 0, + "content": "Table 11: Performance comparison of different adaptation methods on eight NLU benchmarks using Llama-3 with \\( r = 32 \\). **Bold** indicates the best-performing method, and **underline** indicates the second-best." + }, + { + "type": "table", + "bbox": [ + 0.177, + 0.273, + 0.825, + 0.429 + ], + "angle": 0, + "content": "
Method# Params (%)BoolQPIQASIQAARC-cARC-eOBQAHellaSWinoGAvg.
FFT8.03G (100%)73.886.877.676.787.684.193.285.183.1
LoRA84M (1.03%)76.389.882.783.491.788.495.888.787.1
VeRA1.38M (0.02%)64.481.862.667.385.760.978.556.969.8
IA31.70M (0.02%)68.684.874.577.689.475.790.675.079.5
LoRA-FA44M (0.54%)74.089.683.383.893.488.696.187.487.0
AdaLoRA84M (1.03%)75.689.282.483.191.087.894.487.686.4
rsLoRA84M (1.03%)72.884.878.876.087.085.091.082.882.3
PiSSA84M (1.03%)68.184.478.275.185.182.889.382.880.7
LoRA+84M (1.03%)67.080.378.570.182.381.588.979.778.5
DoRA85M (1.05%)75.989.882.783.593.287.995.388.287.1
LoRI-D44M (0.54%)76.489.082.784.293.688.595.987.987.3
LoRI-S4.4M (0.05%)75.289.282.883.892.688.495.287.586.8
" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.464, + 0.502, + 0.481 + ], + "angle": 0, + "content": "E Additional Experimental Results" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.503, + 0.544, + 0.518 + ], + "angle": 0, + "content": "E.1 Comparison with Additional PEFT Methods" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.535, + 0.825, + 0.621 + ], + "angle": 0, + "content": "To provide a comprehensive benchmark, we evaluate LoRI against several widely adopted parameter-efficient fine-tuning (PEFT) methods, including VeRA (Kopiczko et al., 2023), IA3 (Liu et al., 2022), LoRA-FA (Zhang et al., 2023b), AdaLoRA (Zhang et al., 2023d), rsLoRA (Kalajdzievski, 2023), PiSSA (Meng et al., 2024), LoRA+ (Hayou et al., 2024), and DoRA (Liu et al., 2024). The results, presented in Tables 11 and 12, demonstrate that our proposed methods are highly effective." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.625, + 0.827, + 0.766 + ], + "angle": 0, + "content": "LoRI-D, which uses 44M trainable parameters (0.54% of the full model and half of LoRA's), consistently achieves state-of-the-art performance, particularly on NLU and code generation benchmarks. LoRI-S, despite its aggressive sparsity (0.05% of the full model and 5% of LoRA's), remains highly competitive and often surpasses other PEFT methods. While VeRA and IA3 are more parameter-efficient, their performance is substantially lower than that of LoRI-S. Despite their efficiency, LoRI-D and LoRI-S deliver comparable – and often superior – performance across NLU, math, code, and safety domains. These results underscore two key insights: (1) effective adaptation does not require updating the projection matrices \( A \), as demonstrated by LoRI-D; and (2) the matrix \( B \) contains significant redundancy that can be effectively pruned, as shown by LoRI-S."
+ }, + { + "type": "title", + "bbox": [ + 0.172, + 0.795, + 0.405, + 0.809 + ], + "angle": 0, + "content": "E.2 Results with Rank \( r = 64 \)" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.827, + 0.825, + 0.926 + ], + "angle": 0, + "content": "We evaluate several adaptation methods using a higher adapter rank of \( r = 64 \) across a diverse set of tasks. This allows for more expressive adapter representations while still maintaining efficiency compared to full fine-tuning. Table 13 presents performance on eight natural language understanding (NLU) benchmarks, while Table 14 includes results on GSM8K (math), HumanEval (code), and HEx-PHI (safety). Across Llama-3 and Mistral models, LoRI-D and LoRI-S consistently perform competitively, often outperforming larger adapter methods like LoRA and DoRA, while using fewer parameters." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "20" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.114, + 0.825, + 0.158 + ], + "angle": 0, + "content": "Table 12: Performance comparison of different adaptation methods on GSM8K (math), HumanEval (code), and HEx-PHI (safety) benchmarks using Llama-3 with \( r = 32 \). **Bold** indicates the best-performing method, and **underline** indicates the second-best." + }, + { + "type": "table", + "bbox": [ + 0.225, + 0.169, + 0.778, + 0.359 + ], + "angle": 0, + "content": "
Method# Params (%)GSM8KHumanEvalHEx-PHI
Pass@1Pass@5Pass@10
FFT8.03G (100%)58.830.539.341.794.8
LoRA84M (1.03%)64.434.746.450.891.6
VeRA1.38M (0.02%)30.632.445.150.974.7
IA31.70M (0.02%)48.032.745.651.585.4
LoRA-FA44M (0.54%)64.842.957.564.294.1
AdaLoRA84M (1.03%)63.333.545.049.491.9
rsLoRA84M (1.03%)61.328.435.538.398.1
PiSSA84M (1.03%)61.332.040.343.397.8
LoRA+84M (1.03%)61.733.042.746.098.8
DoRA85M (1.05%)65.433.144.048.693.6
LoRI-D44M (0.54%)63.243.257.663.292.8
LoRI-S4.4M (0.05%)62.741.354.459.693.8
" + }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.392, + 0.825, + 0.436 + ], + "angle": 0, + "content": "Table 13: Performance comparison of different adaptation methods on eight natural language understanding (NLU) benchmarks using Llama-3 and Mistral with \( r = 64 \). **Bold** indicates the best-performing method, and **underline** indicates the second-best." + }, + { + "type": "table", + "bbox": [ + 0.174, + 0.446, + 0.825, + 0.613 + ], + "angle": 0, + "content": "
Method# Params (%)BoolQPIQASIQAARC-cARC-eOBQAHellaSWinoGAvg.
Llama-3-8B
FFT8.03G (100%)73.886.877.676.787.684.193.285.183.1
LoRA168M (2.05%)75.289.081.282.392.489.195.388.286.6
DoRA169M (2.06%)76.489.082.082.692.387.595.187.386.5
LoRI-D88M (1.07%)75.890.482.783.392.688.695.987.487.1
LoRI-S8.8M (0.11%)76.590.281.983.593.887.596.287.287.1
Mistral-7B
FFT7.24G (100%)74.184.678.079.390.588.494.483.584.1
LoRA168M (2.26%)77.490.283.584.093.089.395.689.487.8
DoRA169M (2.28%)76.090.683.583.392.889.695.787.687.4
LoRI-D88M (1.18%)75.990.783.782.092.190.096.487.887.3
LoRI-S8.8M (0.12%)74.290.783.583.092.689.595.889.587.3
" + }, + { + "type": "table_caption", + "bbox": [ + 0.17, + 0.645, + 0.825, + 0.701 + ], + "angle": 0, + "content": "Table 14: Performance comparison of different adaptation methods on GSM8K (math), HumanEval (code), and HEx-PHI (safety) benchmarks using Llama-3 and Mistral with \( r = 64 \). **Bold** indicates the best-performing method, and **underline** indicates the second-best." + }, + { + "type": "table", + "bbox": [ + 0.234, + 0.713, + 0.77, + 0.91 + ], + "angle": 0, + "content": "
Method# Params (%)GSM8KHumanEvalHEx-PHI
Pass@1Pass@5Pass@10
Llama-3-8B
FFT8.03G (100%)58.830.539.341.794.8
LoRA168M (2.05%)63.938.652.959.294.1
DoRA169M (2.06%)63.839.453.659.793.4
LoRI-D88M (1.07%)63.841.955.460.396.6
LoRI-S8.8M (0.11%)61.844.157.462.496.3
Mistral-7B
FFT7.24G (100%)55.530.539.341.794.1
LoRA168M (2.26%)56.733.943.146.995.9
DoRA169M (2.28%)57.832.943.347.296.6
LoRI-D88M (1.18%)58.233.343.647.390.9
LoRI-S8.8M (0.12%)58.432.142.246.393.4
" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.508, + 0.96 + ], + "angle": 0, + "content": "21" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "table_caption", + "bbox": [ + 0.17, + 0.101, + 0.827, + 0.171 + ], + "angle": 0, + "content": "Table 15: Comparison of merging methods for combining four adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Mistral-7B, rank \\( r = 32 \\). Bold indicates the best-performing method, and underline indicates the second-best." + }, + { + "type": "table", + "bbox": [ + 0.204, + 0.183, + 0.799, + 0.354 + ], + "angle": 0, + "content": "
MergingAdaptationNLUGSM8KHumanEvalHEx-PHI
Pass@1Pass@5Pass@10
Single-TaskLoRI-D87.158.033.842.045.194.7
ConcatLoRA82.552.432.340.844.175.6
LinearLoRA81.448.033.141.643.976.6
MagnitudeLoRA77.542.732.741.845.680.9
TIESLoRA31.323.532.040.243.581.9
DARELoRA76.143.032.041.044.683.4
ConcatLoRI-D79.352.434.442.845.583.8
LinearLoRI-D78.150.535.242.745.579.7
ConcatLoRI-S79.246.133.341.645.979.4
LinearLoRI-S75.540.328.836.039.683.1
" + }, + { + "type": "table_caption", + "bbox": [ + 0.17, + 0.366, + 0.825, + 0.424 + ], + "angle": 0, + "content": "Table 16: Comparison of merging methods for combining four adapters on eight NLU benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Base model: Llama-3-8B, rank \\( r = 32 \\). **Bold** indicates the best-performing method, and **underline** indicates the second-best." + }, + { + "type": "table", + "bbox": [ + 0.177, + 0.434, + 0.825, + 0.576 + ], + "angle": 0, + "content": "
MergingAdaptationBoolQPIQASIQAARC-cARC-eOBQAHellaSWinoGAvg.
Single-TaskLoRI-D76.489.082.784.293.688.595.987.987.3
ConcatLoRA73.989.181.181.492.483.094.484.585.0
LinearLoRA73.788.881.180.791.684.493.984.184.8
MagnitudeLoRA72.087.176.879.491.781.590.476.481.9
TIESLoRA68.283.867.369.587.869.273.361.472.6
DARELoRA70.785.074.177.590.776.686.871.079.1
ConcatLoRI-D74.087.777.881.092.481.092.778.983.2
LinearLoRI-D73.787.776.780.392.180.192.077.782.5
ConcatLoRI-S71.886.276.179.291.578.689.876.381.2
LinearLoRI-S70.785.375.178.090.875.086.571.379.1
" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.6, + 0.391, + 0.617 + ], + "angle": 0, + "content": "E.3 Merging Four Adapters" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.626, + 0.825, + 0.726 + ], + "angle": 0, + "content": "To support multi-task learning within a unified model, we study the merging of four task-specific adapters using various strategies. Table 15 reports results using Mistral-7B across a range of tasks. Additionally, Tables 16 and 17 break down the performance of NLU on individual benchmarks using Llama-3 and Mistral, respectively. We compare merging methods such as concatenated merging, linear merging, magnitude pruning, TIES, and DARE. LoRI-based approaches demonstrate strong performance and stability when merging multiple adapters." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.742, + 0.4, + 0.759 + ], + "angle": 0, + "content": "E.4 Merging Three Adapters" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.768, + 0.825, + 0.84 + ], + "angle": 0, + "content": "We further evaluate the merging of three adapters to understand performance when adapting to a smaller set of tasks. Tables 18 and 19 summarize the results for Llama-3 across different benchmarks. Similar to the four-task setting, LoRI-D remains a strong performer, often exceeding the performance of LoRA. These results highlight that LoRI-based methods are effective with varying levels of task diversity." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.856, + 0.466, + 0.872 + ], + "angle": 0, + "content": "E.5 Pruning-Based Merging Methods" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.882, + 0.825, + 0.927 + ], + "angle": 0, + "content": "Finally, we explore pruning-based merging methods, which aim to compress and combine multiple adapters by selectively retaining important weights. We focus on three methods: magnitude pruning, TIES, and DARE. 
Results are reported for merging both four-adapter" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "22" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.123, + 0.825, + 0.18 + ], + "angle": 0, + "content": "Table 17: Comparison of merging methods for combining four adapters on eight NLU benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Base model: Mistral-7B, rank \\( r = 32 \\). Bold indicates the best-performing method, and underline indicates the second-best." + }, + { + "type": "table", + "bbox": [ + 0.177, + 0.192, + 0.825, + 0.334 + ], + "angle": 0, + "content": "
MergingAdaptationBoolQPIQASIQAARC-cARC-eOBQAHellaSWinoGAvg.
Single-TaskLoRI-D75.990.683.083.691.988.495.987.487.1
ConcatLoRA69.088.078.179.990.984.292.477.882.5
LinearLoRA69.286.977.978.590.282.191.575.181.4
MagnitudeLoRA68.784.974.475.989.177.585.664.177.5
TIESLoRA18.469.840.714.021.920.114.650.931.3
DARELoRA69.484.373.174.288.974.382.661.876.1
ConcatLoRI-D68.485.975.676.689.481.385.971.179.3
LinearLoRI-D66.386.074.975.388.980.885.068.078.1
ConcatLoRI-S72.685.474.676.589.780.186.068.979.2
LinearLoRI-S67.683.872.073.088.374.680.964.375.5
" + }, + { + "type": "table_caption", + "bbox": [ + 0.17, + 0.385, + 0.825, + 0.456 + ], + "angle": 0, + "content": "Table 18: Comparison of merging methods for combining three adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Llama-3-8B, rank \\( r = 32 \\). Bold indicates the best-performing method, and underline indicates the second-best." + }, + { + "type": "table", + "bbox": [ + 0.243, + 0.467, + 0.761, + 0.639 + ], + "angle": 0, + "content": "
MergingAdaptationNLUGSM8KHumanEval
Pass@1Pass@5Pass@10
Single-TaskLoRI-D87.363.243.257.663.2
ConcatLoRA86.454.513.019.821.8
LinearLoRA86.151.98.814.516.7
MagnitudeLoRA83.852.023.337.443.0
TIESLoRA79.426.936.348.753.7
DARELoRA81.153.336.049.553.9
ConcatLoRI-D84.859.641.556.461.6
LinearLoRI-D84.657.638.351.656.8
ConcatLoRI-S83.351.831.244.649.8
LinearLoRI-S81.041.726.640.044.6
" + }, + { + "type": "table_caption", + "bbox": [ + 0.17, + 0.69, + 0.825, + 0.748 + ], + "angle": 0, + "content": "Table 19: Comparison of merging methods for combining three adapters on eight NLU benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Base model: Llama-3-8B, rank \\( r = 32 \\). **Bold** indicates the best-performing method, and **underline** indicates the second-best." + }, + { + "type": "table", + "bbox": [ + 0.177, + 0.758, + 0.825, + 0.9 + ], + "angle": 0, + "content": "
MergingAdaptationBoolQPIQASIQAARC-cARC-eOBQAHellaSWinoGAvg.
Single-TaskLoRI-D76.489.082.784.293.688.595.987.987.3
ConcatLoRA74.789.681.882.993.786.295.886.886.4
LinearLoRA73.989.681.481.993.585.595.687.186.1
MagnitudeLoRA72.287.278.981.292.283.293.082.483.8
TIESLoRA69.584.874.078.491.277.488.871.479.4
DARELoRA71.085.675.879.591.078.890.776.281.1
ConcatLoRI-D73.889.079.881.093.083.094.684.084.8
LinearLoRI-D74.188.480.281.392.982.194.183.684.6
ConcatLoRI-S70.387.279.180.892.482.193.281.383.3
LinearLoRI-S61.586.478.079.591.780.891.378.581.0
" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "23" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "table_caption", + "bbox": [ + 0.17, + 0.101, + 0.828, + 0.173 + ], + "angle": 0, + "content": "Table 20: Comparison of magnitude pruning, TIES, and DARE for combining four adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Llama-3-8B, rank \\( r = 32 \\). Bold indicates the best-performing method within each group." + }, + { + "type": "table", + "bbox": [ + 0.203, + 0.183, + 0.8, + 0.365 + ], + "angle": 0, + "content": "
MergingAdaptationNLUGSM8KHumanEvalHEx-PHI
Pass@1Pass@5Pass@10
Single-TaskLoRI-D87.363.243.257.663.292.8
MagnitudeLoRA81.950.324.136.742.474.4
MagnitudeLoRI-D84.350.533.345.251.485.9
MagnitudeLoRI-S76.435.225.236.541.068.4
TIESLoRA72.624.032.546.351.777.8
TIESLoRI-D79.138.040.354.659.885.3
TIESLoRI-S70.425.934.648.453.277.8
DARELoRA79.148.934.148.753.574.1
DARELoRI-D83.452.035.451.357.881.9
DARELoRI-S73.427.234.848.153.575.3
" + }, + { + "type": "table_caption", + "bbox": [ + 0.17, + 0.377, + 0.828, + 0.449 + ], + "angle": 0, + "content": "Table 21: Comparison of magnitude pruning, TIES, and DARE for combining four adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Mistral-7B, rank \\( r = 32 \\). Bold indicates the best-performing method within each group." + }, + { + "type": "table", + "bbox": [ + 0.204, + 0.46, + 0.8, + 0.642 + ], + "angle": 0, + "content": "
MergingAdaptationNLUGSM8KHumanEvalHEx-PHI
Pass@1Pass@5Pass@10
Single-TaskLoRI-D87.158.033.842.045.194.7
MagnitudeLoRA77.542.732.741.845.680.9
MagnitudeLoRI-D76.041.529.036.038.779.4
MagnitudeLoRI-S70.532.428.136.139.377.5
TIESLoRA31.323.532.040.243.581.9
TIESLoRI-D65.045.435.344.547.868.4
TIESLoRI-S67.832.928.637.240.878.4
DARELoRA76.143.032.041.044.683.4
DARELoRI-D76.242.329.237.140.789.1
DARELoRI-S71.934.329.240.544.985.0
" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.666, + 0.828, + 0.751 + ], + "angle": 0, + "content": "(Tables 20 and 21) and three-adapter (Table 22) settings, using Llama-3 and Mistral as base models. LoRI-D consistently achieves strong performance across all pruning-based merging methods. However, the performance of LoRI-S is somewhat lower in these settings. This is because pruning-based methods operate on the dense \\( A \\) matrices but not on the sparse \\( B \\) matrices. This mismatch leads to an inconsistent pruning scheme, which can result in a loss of effectiveness." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.771, + 0.465, + 0.788 + ], + "angle": 0, + "content": "F Additional Ablation Studies" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.803, + 0.827, + 0.877 + ], + "angle": 0, + "content": "Figure 5 presents GSM8K accuracy across a grid of sparsity ratios and learning rates using Mistral-7B with rank \\( r = 64 \\). We observe that sparse adapters require larger learning rates to train effectively. In particular, models with high sparsity (e.g., above \\( 70\\% \\)) perform best with a learning rate of \\( 10^{-4} \\) or higher. This suggests that stronger optimization is necessary to compensate for limited capacity in sparse adapters." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.882, + 0.826, + 0.927 + ], + "angle": 0, + "content": "In Figure 6, we analyze how sparsity is distributed across layers and projections when enforcing \\(90\\%\\) global sparsity on GSM8K. We find that feedforward (FFN) projections tend to retain more parameters – i.e., they exhibit lower sparsity – than self-attention projections." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.51, + 0.961 + ], + "angle": 0, + "content": "24" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "table_caption", + "bbox": [ + 0.17, + 0.101, + 0.827, + 0.173 + ], + "angle": 0, + "content": "Table 22: Comparison of magnitude pruning, TIES, and DARE for combining three adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Llama-3-8B, rank \\( r = 32 \\). Bold indicates the best-performing method within each group." + }, + { + "type": "table", + "bbox": [ + 0.242, + 0.183, + 0.761, + 0.365 + ], + "angle": 0, + "content": "
MergingAdaptationNLUGSM8KHumanEval
Pass@1Pass@5Pass@10
Single-TaskLoRI-D87.363.243.257.663.2
MagnitudeLoRA83.852.023.337.443.0
MagnitudeLoRI-D84.653.734.848.954.7
MagnitudeLoRI-S77.836.625.538.843.8
TIESLoRA79.426.936.348.753.7
TIESLoRI-D82.142.239.252.757.7
TIESLoRI-S73.835.234.847.952.5
DARELoRA81.153.336.049.553.9
DARELoRI-D84.055.233.845.851.8
DARELoRI-S75.336.636.248.953.4
" + }, + { + "type": "image", + "bbox": [ + 0.26, + 0.382, + 0.72, + 0.6 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.614, + 0.825, + 0.645 + ], + "angle": 0, + "content": "Figure 5: GSM8K accuracy under different sparsity ratios and learning rates. Base model: Mistral-7B, rank \\( r = 64 \\)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.67, + 0.825, + 0.715 + ], + "angle": 0, + "content": "This indicates that FFN components are more critical for effective adaptation. Additionally, sparsity decreases toward the top of the network, suggesting that higher layers are more important for task-specific specialization." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.719, + 0.827, + 0.792 + ], + "angle": 0, + "content": "Lastly, Figure 7 explores the effect of merging weights when combining three LoRI-S adapters using concatenated and linear merging. We find a noticeable trade-off between performance on code tasks and other domains (e.g., NLU and math). Higher merging weights can improve NLU performance but tend to degrade performance on code, highlighting the challenge of balancing generalization and specialization in multi-task settings." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "25" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "image", + "bbox": [ + 0.339, + 0.182, + 0.662, + 0.43 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.442, + 0.825, + 0.471 + ], + "angle": 0, + "content": "Figure 6: Sparsity ratios across layers and projections under a \\(90\\%\\) sparsity on GSM8K. Base model: Llama-3-8B, rank \\(r = 32\\)." 
+ }, + { + "type": "image", + "bbox": [ + 0.188, + 0.639, + 0.495, + 0.78 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.218, + 0.788, + 0.466, + 0.803 + ], + "angle": 0, + "content": "(a) Concatnated merging with LoRI-S." + }, + { + "type": "image", + "bbox": [ + 0.504, + 0.639, + 0.811, + 0.78 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.551, + 0.788, + 0.761, + 0.803 + ], + "angle": 0, + "content": "(b) Linear merging with LoRI-S." + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.813, + 0.825, + 0.843 + ], + "angle": 0, + "content": "Figure 7: Ablation study on the effect of merging weights when combining three adapters. Base model: Llama-3-8B, rank \\( r = 32 \\)." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "26" + } + ] +] \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07448/45bd5bd8-55af-45e5-b183-b2d70c8be5c1_origin.pdf b/data/2025/2504_07xxx/2504.07448/45bd5bd8-55af-45e5-b183-b2d70c8be5c1_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..23538b2e56f6b27f26169be4b2c422d3b5ca7ea2 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/45bd5bd8-55af-45e5-b183-b2d70c8be5c1_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d62473a8911fc169347961e2c04286cb4bb0938b430ba17879742feb251d7644 +size 644194 diff --git a/data/2025/2504_07xxx/2504.07448/full.md b/data/2025/2504_07xxx/2504.07448/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b16c1a5b7291017ea354a730e91e3d60008ec38f --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/full.md @@ -0,0 +1,508 @@ +# LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation + +Juzheng Zhang $^{1}$ , Jiacheng You $^{2}$ , Ashwinee Panda $^{1}$ , Tom Goldstein $^{1}$ + +$^{1}$ University of Maryland $^{2}$ Tsinghua University + +# Abstract + 
+Low-Rank Adaptation (LoRA) has emerged as a popular parameter-efficient fine-tuning (PEFT) method for Large Language Models (LLMs), yet it still incurs notable overhead and suffers from parameter interference in multi-task scenarios. We propose LoRA with Reduced Interference (LoRI), a simple yet effective approach that freezes the projection matrices $A$ as random projections and sparsifies the matrices $B$ using task-specific masks. This design substantially reduces the number of trainable parameters while maintaining strong task performance. Moreover, LoRI minimizes cross-task interference in adapter merging by leveraging the orthogonality between adapter subspaces, and supports continual learning by using sparsity to mitigate catastrophic forgetting. Extensive experiments across natural language understanding, mathematical reasoning, code generation, and safety alignment tasks demonstrate that LoRI outperforms full fine-tuning and existing PEFT methods, while using up to $95\%$ fewer trainable parameters than LoRA. In multi-task experiments, LoRI enables effective adapter merging and continual learning with reduced cross-task interference. Code is available at: https://github.com/juzhengz/LoRI. + +# 1 Introduction + +Large language models (LLMs) (Brown et al., 2020; Touvron et al., 2023; Chowdhery et al., 2023) have transformed deep learning, showcasing remarkable capabilities across various domains. However, their deployment remains computationally demanding, particularly when fine-tuning is required to adapt to downstream tasks or align with human preferences. To mitigate the high resource costs, researchers have developed a range of parameter-efficient fine-tuning (PEFT) techniques. Among these techniques, LoRA (Hu et al., 2021) has gained widespread adoption due to its compelling balance of performance and efficiency. Nevertheless, LoRA still introduces notable memory overhead, particularly in large-scale models. 
Consequently, recent research has focused on further optimizing LoRA by reducing the number of trainable parameters without compromising performance (Kopiczko et al., 2023; Ding et al., 2023; Zhang et al., 2023b). + +Recent studies (Yu et al., 2024; Panda et al., 2024) have shown that delta parameters – the differences between fine-tuned and pretrained model weights – exhibit significant redundancy. Furthermore, previous works (Zhang et al., 2023b; Zhu et al., 2024) have observed that freezing matrices $A$ in LoRA often achieves comparable performance to training them. Motivated by these findings, we propose LoRA with Reduced Interference (LoRI). LoRI keeps matrices $A$ fixed as random projections, while training matrices $B$ using task-specific sparse masks. To retain the most critical elements of $B$ , LoRI performs a calibration process to extract sparse masks by selecting the highest-magnitude elements across all layers and projections. As shown in Figure 1(a), LoRI maintains performance even with $90\%$ sparsity in $B$ while keeping $A$ frozen. This demonstrates that adaptation does not require updating $A$ , and that $B$ has considerable redundancy. By applying more constrained updates than LoRA, LoRI significantly reduces the number of trainable parameters while better preserving the pretrained model's knowledge during adaptation. + +![](images/007808f5857139c08bd5f92f5d6236e77444fe95cba69227193b1d3c7308caee.jpg) +Figure 1: (a) Varying sparsity ratios in matrices $B$ while freezing $A$ . Performance remains stable even at $90\%$ sparsity in matrices $B$ . (b) Merging three adapters via weighted averaging. LoRA suffers degradation due to parameter interference, while LoRI preserves task performance. (c) Continual learning from Safety to NLU. LoRA suffers from catastrophic forgetting, while LoRI retains safety alignment. Results for NLU are averaged over eight tasks. 
GSM8K accuracy (Math), HumanEval pass@10 (Code), and HEx-PHI refusal rate (Safety) are reported individually. Base model: Llama-3-8B, rank $r = 32$ . + +Multi-task learning is essential for enabling versatile models with multi-task capabilities, which is traditionally performed via joint training on a combination of task-specific datasets (Caruana, 1997; Sener & Koltun, 2018). However, training large models on this data mixture is prohibitively expensive in terms of time and compute. Model merging is a training-free alternative for building powerful models by combining existing ones (Ilharco et al., 2022; Yadav et al., 2023; Yu et al., 2024). This approach is well-suited for merging LoRA adapters, enabling multi-task capabilities within a single model during inference (Wang et al., 2024a; Prabhakar et al., 2024; Stoica et al., 2024). However, as shown in Figure 1(b), directly merging heterogeneous LoRAs often results in parameter interference, leading to degraded performance compared to single-task LoRAs. Additionally, many existing merging methods require trial-and-error to identify the optimal method for a specific combination of tasks. LoRI addresses these challenges by using fixed, randomly initialized projection $A$ , which maps task-specific adapters into approximately orthogonal subspaces. This reduces interference when merging multiple adapters. In addition, LoRI enables adapter merging without manual selection of merging methods. + +Beyond multi-tasking, safety-critical scenarios require that each newly introduced adapter enhances model capabilities while preserving the safety alignment of the pretrained base model (Qi et al., 2023). LoRI provides a lightweight continual learning approach for adapting models while preserving safety, where training is performed sequentially across tasks (Lopez-Paz & Ranzato, 2017; Wu et al., 2022; Ouyang et al., 2022). 
The strategy involves first fine-tuning an adapter on safety data to establish alignment, followed by separate adaptation to each downstream task. However, as illustrated in Figure 1(c), continual learning often leads to catastrophic forgetting (Li & Hoiem, 2017; Dong et al., 2023; Luo et al., 2023), wherein the adaptation to new tasks substantially compromises previously acquired knowledge. LoRI mitigates forgetting by leveraging the sparsity of projection $B$ through task-specific masks. This isolation of parameter updates across tasks facilitates continual learning with minimal interference, preserving both safety and task effectiveness. + +To evaluate the effectiveness of LoRI, we conduct extensive experiments across a diverse suite of benchmarks spanning natural language understanding (NLU), mathematical reasoning, code generation, and safety alignment tasks. Using Llama-3-8B and Mistral-7B as base models, our results show that LoRI achieves performance comparable to - or better than - full fine-tuning (FFT), LoRA, and other PEFT methods, while using up to $95\%$ fewer trainable parameters than LoRA. Notably, LoRI with $90\%$ sparsity in $B$ surpasses LoRA by $17.3\%$ on HumanEval with Llama-3. Beyond single-task adaptation, we evaluate LoRI in multi-task settings, including adapter merging and continual learning scenarios. Concatenated merging of LoRI adapters consistently outperforms LoRA adapters overall, closely matching the performance of single-task LoRA baseline. In continual learning, LoRI significantly outperforms LoRA in mitigating catastrophic forgetting of safety alignment, while maintaining strong performance on downstream tasks. + +![](images/99c21a09e320e0a352dbdfe22541f16a85c0b86983910e3f93e2beb03b3a36e4.jpg) +(a) LoRI method. + +![](images/2c220aa1e804e1e987d6e39cec73c1e11728da9de51a27e30e19b0b8fd4b34a9.jpg) +(b) LoRI merging. + +![](images/621a497f2b234a3394733086b28a12dee2b8e030d8bf96f6caeb066368484c15.jpg) +(c) LoRI continual learning. 
+Figure 2: Overview of the proposed LoRI method. (a) LoRI freezes the projection matrices $A_{t}$ and sparsely updates $B_{t}$ using task-specific masks $M_{t}$ . (b) LoRI enables adapter merging of multiple task-specific adapters with reduced parameter interference. (c) LoRI builds safety adapters by continual learning with reduced catastrophic forgetting. + +# 2 Method + +# 2.1 Freezing Low-Rank Projections with Sparse Masking + +Freezing Projection $A$ . LoRA (Hu et al., 2021) fine-tunes a weight update matrix as a product of two low-rank matrices to adapt LLMs to new tasks. Formally, for a specific task $t$ , given a pretrained weight matrix $W_0 \in \mathbb{R}^{d_{\mathrm{in}} \times d_{\mathrm{out}}}$ , the weight update $\Delta_t \in \mathbb{R}^{d_{\mathrm{in}} \times d_{\mathrm{out}}}$ is constrained to a low-rank decomposition: + +$$ +h = x W _ {0} + x \Delta_ {t} = x W _ {0} + x A _ {t} B _ {t}. \tag {1} +$$ + +where $A_{t} \in \mathbb{R}^{d_{\mathrm{in}} \times r}$ , $B_{t} \in \mathbb{R}^{r \times d_{\mathrm{out}}}$ , and $r \ll \min\{d_{\mathrm{in}}, d_{\mathrm{out}}\}$ . We denote $\Delta_t$ as the LoRA adapter for task $t$ . In practice, LoRA adapters are typically applied to multiple projection matrices (e.g., $W_q, W_v$ ) within each transformer layer. + +Typically, the low-rank projection matrices $A_{t}$ and the low-rank expansion matrices $B_{t}$ are updated via gradient descent. Matrices $A_{t}$ are usually initialized with Kaiming Uniform distribution (He et al., 2015), while matrices $B_{t}$ are initialized to zero, ensuring that $\Delta_{t} = 0$ at the start of training. However, in LoRI, we fix $A_{t}$ as random projections, meaning that the model only learns how to combine the fixed subspace via $B_{t}$ . By freezing $A_{t}$ , we eliminate the need to store their gradients and optimizer states, thereby reducing memory consumption. 
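The frozen-$A$ update rule of Eq. (1) can be sketched in a few lines. The NumPy mock-up below is illustrative only (toy dimensions, hypothetical names, not the released implementation): $A_t$ is drawn once from a Kaiming-Uniform-style range and never updated, and with $B_t = 0$ the adapter is a no-op at initialization, exactly as in LoRA.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 8  # toy sizes; real models use e.g. d = 4096, r = 32

W0 = rng.standard_normal((d_in, d_out))        # frozen pretrained weight
bound = np.sqrt(6.0 / d_in)                    # Kaiming-Uniform-style bound
A = rng.uniform(-bound, bound, size=(d_in, r)) # frozen random projection A_t
B = np.zeros((r, d_out))                       # trainable B_t, zero at init

x = rng.standard_normal((4, d_in))             # a batch of inputs

# Eq. (1): h = x W_0 + x A_t B_t
h = x @ W0 + x @ (A @ B)
assert np.allclose(h, x @ W0)  # B_t = 0, so the adapter starts as a no-op
```

Only `B` would receive gradients during training, which is why freezing `A` removes its gradient and optimizer-state memory.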
During inference, similar to LoRA, LoRI merges the low-rank updates by adding $A_{t}B_{t}$ to $W_{0}$ , ensuring no additional inference latency compared to full fine-tuning. + +Sparse Masking for Projection $B$ . LoRI freezes matrices $A_{t}$ and selectively updates only the most relevant parameters in $B_{t}$ for each task, as illustrated in Figure 2(a). For task $t$ , it first extracts sparse masks $M_{t}$ through a calibration process, then applies the masks to constrain training to a limited subset of parameters in $B_{t}$ . During mask calibration, LoRI updates $B_{t}$ without masking using a calibration dataset $\mathcal{D}_t^C$ , sampled from the adaptation dataset $\mathcal{D}_t$ . After this phase, LoRI collects all $B_{t}$ matrices from the model across layers and projections. Then it computes a global threshold $\tau_t$ , defined as the $s\%$ quantile of the absolute values of all elements from these matrices, where $s$ is the sparsity ratio. For each matrix $B_{t}$ , the corresponding sparse mask $M_{t}$ is computed as: + +$$ +M_{t} = \mathbb{I}\left(\left|B_{t}\right| \geq \tau_{t}\right), \quad \text{where} \quad \tau_{t} = \operatorname{Quantile}_{s}\left(\bigcup \left|B_{t}\right|\right). \tag{2} +$$ + +Here, $\mathbb{I}(\cdot)$ denotes the indicator function applied element-wise. This ensures that only the top- $(1 - s)\%$ of parameters (by magnitude) across all layers and projections are retained. The masks can also be derived using gradient-based measures such as the Fisher information matrix (Guo et al., 2023; Iurada et al., 2025) or SNIP score (Lee et al., 2018). However, these methods capture local sensitivity at a specific training step, whereas magnitude reflects cumulative importance over the entire fine-tuning process. + +It is well established that the importance of projection matrices varies significantly across different layers and projections (Zhang et al., 2023a;d; Kopiczko et al., 2023).
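The calibration step of Eq. (2) amounts to one global magnitude threshold shared by every layer and projection. A minimal NumPy sketch (function and variable names are hypothetical, not the authors' API):

```python
import numpy as np

def calibrate_masks(B_matrices, sparsity=0.9):
    """Sketch of Eq. (2): pool |B_t| over all layers/projections, take the
    s-quantile as a single global threshold tau, keep entries above it."""
    all_mags = np.concatenate([np.abs(B).ravel() for B in B_matrices])
    tau = np.quantile(all_mags, sparsity)               # global threshold
    return [(np.abs(B) >= tau).astype(B.dtype) for B in B_matrices]

rng = np.random.default_rng(0)
# Stand-ins for calibrated B_t matrices collected from four layers
Bs = [rng.standard_normal((8, 64)) for _ in range(4)]
masks = calibrate_masks(Bs, sparsity=0.9)

# Globally ~10% of entries survive; per-matrix keep ratios may differ,
# which is how the global threshold reallocates the parameter budget.
kept = sum(m.sum() for m in masks) / sum(m.size for m in masks)
```

Because the threshold is global rather than per-matrix, layers whose $B_t$ entries are uniformly small end up sparser than layers with large-magnitude updates.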
Our masking strategy enables global comparison of parameters and facilitates effective allocation of the parameter budget determined by the sparsity ratio. Notably, the masks for each task $t$ are calibrated only once and can be reused as needed. + +After mask calibration, LoRI resets $B_{t}$ to zero and trains on the adaptation dataset $\mathcal{D}_t$ , with updates restricted to the masked parameters. The LoRI adapter is expressed as $\Delta_t = A_t(B_t \odot M_t)$ . The algorithm of LoRI is detailed in Appendix B. In practice, the sparsity ratio $s$ can reach up to 90%, meaning that only a small fraction of parameters in matrices $B_{t}$ are updated, while the majority remain unchanged. This selective adaptation enables the model to focus on modifying the most critical parameters needed for specific tasks, while preserving the foundational knowledge encoded in the pretrained base model. In the limiting case of a single task and zero sparsity, our method reduces to LoRA-FA (Zhang et al., 2023b), which has been shown to perform competitively with standard LoRA. + +# 2.2 Reducing Interference in Adapter Merging via Orthogonality + +Orthogonality of LoRI Adapters. A central challenge in adapter merging is parameter interference, where combining multiple adapters leads to degraded performance due to conflicting parameter updates. Given a set of trained LoRI adapters $\{\Delta_1,\Delta_2,\dots ,\Delta_T\}$ , the goal is to construct a unified model that combines knowledge from all tasks with minimal interference, as illustrated in Figure 2(b). 
Formally, we define the excess loss due to parameter interference for a specific task $t$ as: + +$$ +\mathcal{I}_{t} = \mathcal{L}_{t}\left(W_{\text{merge}}\right) - \mathcal{L}_{t}\left(W_{0} + \alpha_{t} \Delta_{t}\right), \tag{3} +$$ + +where $W_{\mathrm{merge}}$ is the merged model, $W_0$ is the pretrained weight matrix, $\Delta_t$ is the LoRI adapter for task $t$ , $\alpha_t \in \mathbb{R}$ is a scalar weight, and $\mathcal{L}_t$ is the loss function for task $t$ . A high $\mathcal{I}_t$ indicates significant interference. + +LoRI mitigates this interference by leveraging approximate orthogonality, achieved by freezing the projection matrices $A_{t}$ as independent random matrices. This design leads to the following property, whose proof is provided in Appendix C: + +Property 1. Let $A_s, A_t \in \mathbb{R}^{d_{in} \times r}$ be independent random matrices with i.i.d. entries drawn from a Kaiming Uniform distribution for distinct tasks $s \neq t$ . Let their corresponding LoRI adapters be $\Delta_s = A_s(B_s \odot M_s)$ and $\Delta_t = A_t(B_t \odot M_t)$ , where the trained matrices $(B_s \odot M_s)$ and $(B_t \odot M_t)$ have finite Frobenius norms. Under the condition that $r \ll d_{in}$ , as the input dimension $d_{in} \to \infty$ , the adapters are approximately orthogonal: + +$$ +\left\langle \Delta_{s}, \Delta_{t} \right\rangle_{F} \rightarrow 0 \quad \text{in probability}. \tag{4} +$$ + +We describe two merging methods: concatenated merging (weighted averaging) and linear merging (Task Arithmetic) (Ilharco et al., 2022), both of which exploit the approximate orthogonality of LoRIs. + +Concatenated Merging (Weighted Averaging). This method constructs the merged model by creating a weighted sum of individual task adapters.
This is achieved by concatenating the weighted $A$ and masked $B$ matrices: + +$$ +A^{\prime} = \left[\alpha_{1} A_{1}, \alpha_{2} A_{2}, \dots, \alpha_{T} A_{T}\right], \quad B^{\prime} = \left[\left(B_{1} \odot M_{1}\right)^{\top}, \dots, \left(B_{T} \odot M_{T}\right)^{\top}\right]^{\top}, \tag{5} +$$ + +where $\alpha_{t} \in \mathbb{R}$ are scalar weights (e.g., uniform or task-prioritized). The final merged model is then formed by adding their product to the base model weights: + +$$ +W_{\text{merge}} = W_{0} + A^{\prime} B^{\prime} = W_{0} + \sum_{t=1}^{T} \alpha_{t} A_{t}\left(B_{t} \odot M_{t}\right) = W_{0} + \sum_{t=1}^{T} \alpha_{t} \Delta_{t}. \tag{6} +$$ + +By summing approximately orthogonal adapters, we ensure that the updates for each task occupy largely disjoint subspaces, thereby reducing interference (Ilharco et al., 2022; Ortiz-Jimenez et al., 2023; Xiong et al., 2024). + +The reduction in interference can be explained by a theoretical sketch based on two key assumptions. The first is the local linearity of the loss landscape (Li et al., 2018), which allows for a first-order Taylor approximation. The second is the gradient alignment assumption, formally expressed as $\nabla \mathcal{L}_t(W_0 + \alpha_t\Delta_t)\propto \Delta_t$ . This posits that at a task's solution, the direction of steepest descent is primarily aligned with the adapter updates already made for that task. Under these assumptions, the excess loss $\mathcal{I}_t$ is approximately the inner product of the gradient and the updates from the other tasks: + +$$ +\mathcal{I}_{t} \approx \left\langle \nabla \mathcal{L}_{t}\left(W_{0} + \alpha_{t} \Delta_{t}\right), \sum_{s \neq t} \alpha_{s} \Delta_{s} \right\rangle_{F} \propto \sum_{s \neq t} \alpha_{s} \left\langle \Delta_{t}, \Delta_{s} \right\rangle_{F}.
\tag{7} +$$ + +Since Property 1 establishes that $\langle \Delta_t, \Delta_s \rangle_F \to 0$ for $s \neq t$ , the total interference loss becomes negligible: $\mathcal{I}_t \approx 0$ . This heuristic argument provides strong intuition for why concatenated merging is effective, which is then validated by our empirical results. + +Linear Merging (Task Arithmetic). Alternatively, the merged model can be formed by summing the $A_{t}$ and masked $B_{t}$ matrices independently before multiplication: + +$$ +W_{\text{merge}} = W_{0} + \left(\sum_{t=1}^{T} \alpha_{t} A_{t}\right)\left(\sum_{t=1}^{T} \alpha_{t}\left(B_{t} \odot M_{t}\right)\right) = W_{0} + \sum_{s=1}^{T} \sum_{t=1}^{T} \alpha_{s} \alpha_{t} A_{s}\left(B_{t} \odot M_{t}\right). \tag{8} +$$ + +While concatenated merging directly sums approximately orthogonal adapters, this linear merging approach introduces problematic cross-terms $\alpha_{s}\alpha_{t}A_{s}(B_{t}\odot M_{t})$ for $s\neq t$ . These terms cause interference because components like $\{A_s(B_t\odot M_t)\}_{t = 1}^T$ for a fixed $s$ are generally not mutually orthogonal. As a result, concatenated merging offers a cleaner and empirically more effective strategy for combining LoRI adapters. + +# 2.3 Reducing Interference in Continual Learning via Sparsity + +Safety-Preserving Adapters. For safety-critical applications, ensuring that new task adaptations do not compromise established safety behaviors is crucial. Therefore, each newly introduced adapter must preserve the base model's safety alignment. A straightforward approach to achieve this is to merge a safety LoRI adapter into the deployed model during every inference. However, as we will show in Section 3.4, this method may be insufficient for scenarios that demand strong safety guarantees.
In such cases, as illustrated in Figure 2(c), a more reliable solution is to adopt a two-phase continual learning process for each LoRI adapter to reinforce safety: + +1. Safety Alignment Phase: Train a LoRI adapter on a curated safety dataset $\mathcal{D}_{\text{safety}}$ , yielding $\Delta_{\text{safety}} = A(B_{\text{safety}} \odot M_{\text{safety}})$ . +2. Task Adaptation Phase: Fine-tune $\Delta_{\mathrm{safety}}$ on each task adaptation dataset $D_t, t = 1, 2, \ldots, T$ , reusing the calibrated task-specific masks $M_t$ , resulting in safety-preserving adapters $\Delta_t = A(B_t \odot M_t)$ . + +This method does not require recalibrating masks for each task or performing multiple rounds of continual learning. Notably, we do not enforce non-overlapping masks $M_t \cap M_{\text{safety}} = \emptyset$ . Enforcing such a constraint would require recalibrating masks after the safety alignment phase due to the reduced parameter space, and could potentially degrade performance on downstream tasks. The expected overlap between sparse masks with $90\%$ sparsity is theoretically $1\%$ . Empirically, we find that this expectation holds: the average overlap between task-specific masks is indeed $\sim 1\%$ , without explicitly enforcing non-overlap. This slight overlap allows important parameters to be shared across tasks, potentially enabling positive knowledge transfer. + +Catastrophic Forgetting. Continual learning models are vulnerable to catastrophic forgetting (Li & Hoiem, 2017; Dong et al., 2023; Luo et al., 2023), where updates for new tasks can overwrite and degrade previously learned knowledge. Despite the slight overlap between + +task-specific masks, the sparsity in $B_{t}$ induced by $M_{t}$ enables LoRI to facilitate isolated parameter updates for safety alignment and task adaptation. As a result, LoRI minimizes cross-task interference and mitigates catastrophic forgetting in safety alignment. + +# 3 Experiments + +# 3.1 Experimental Setup + +Datasets. 
We conduct a series of experiments to evaluate LoRI's effectiveness in single-task and multi-task settings, including adapter merging and continual learning. We focus on four capabilities: (i) Natural Language Understanding (NLU): LoRI is trained on the aggregation of eight NLU datasets (Hu et al., 2023), including BoolQ (Clark et al., 2019), PIQA (Bisk et al., 2020), SocialIQA (Sap et al., 2019), ARC-Challenge (Clark et al., 2018), ARC-Easy (Clark et al., 2018), OpenBookQA (Mihaylov et al., 2018), HellaSwag (Zellers et al., 2019), and Winogrande (Sakaguchi et al., 2021). We evaluate accuracy on the individual test split for each dataset. (ii) Mathematical Reasoning (Math): LoRI is trained on the GSM8K (Cobbe et al., 2021) training split and evaluated on the GSM8K test split. (iii) Code Generation (Code): LoRI is trained on CodeAlpaca (Chaudhary, 2023) and evaluated using pass@1, pass@5, and pass@10 on HumanEval (Chen et al., 2021). (iv) Safety Alignment (Safety): LoRI is trained on Saferpaca (Bianchi et al., 2023), which extends Alpaca-Cleaned (Taori et al., 2023) with 2,000 safety instructions. Safety performance is assessed by measuring the refusal rate on harmful queries from HEx-PHI (Qi et al., 2023). + +Baselines. In single-task experiments, we compare LoRI with full fine-tuning (FFT), LoRA (Hu et al., 2021), and DoRA (Liu et al., 2024). Results for additional PEFT baselines, including VeRA (Kopiczko et al., 2023), IA3 (Liu et al., 2022), LoRA-FA (Zhang et al., 2023b), AdaLoRA (Zhang et al., 2023d), rsLoRA (Kalajdzievski, 2023), PiSSA (Meng et al., 2024), and LoRA+ (Hayou et al., 2024), are available in Appendix E.1. In merging experiments, we compare LoRI merging with several LoRA merging methods, including concatenated merging, linear merging (Ilharco et al., 2022), magnitude pruning, TIES-Merging (Yadav et al., 2023), and DARE (Yu et al., 2024).
Magnitude pruning, TIES, and DARE are pruning-based approaches that apply sparsification to the $A$ and $B$ matrices before merging, based on a specified density. Magnitude pruning removes low-magnitude parameters; TIES-Merging further merges weights with consistent signs; and DARE performs random pruning followed by rescaling. For fair comparison, all baseline results are reproduced using a consistent experimental setup. + +Implementation Details. We use Llama-3-8B (Grattafiori et al., 2024) and Mistral-7B (Jiang et al., 2023) as base models. We conduct all experiments on 8 NVIDIA A5000 GPUs. To explore the impact of sparsity, we provide two variants of LoRI: LoRI-D, which uses dense $B$ matrices, and LoRI-S, which applies $90\%$ sparsity to $B$ . Sparsity is implemented by masking the gradients of $B$ during backpropagation. For optimal performance, we use the entire adaptation dataset as the calibration dataset for each task. Ablation results for calibration are presented in Section 3.5. For consistency, we use the same hyperparameters for PEFT baselines as for LoRI-D. For all adapter merging experiments, uniform weights $\alpha_{t}$ are employed across all adapters. The weights $\alpha_{t}$ are treated as hyperparameters, and their ablation study is detailed in Section 3.5. Detailed hyperparameter settings are provided in Appendix D. + +# 3.2 Single-Task Performance + +Table 1 presents single-task performance on eight NLU benchmarks, while Table 2 reports single-task performance on the math, code, and safety benchmarks. Results for additional PEFT baselines are available in Appendix E.1. The rank for our experiments is set to $r = 32$ . We observed stable performance across different ranks, with additional results for $r = 64$ provided in Appendix E.2. + +Table 1: Performance comparison of different adaptation methods on eight NLU benchmarks using Llama-3 and Mistral with $r = 32$ .
**Bold** indicates the best-performing method, and **underline** indicates the second-best. + +
Method# Params (%)BoolQPIQASIQAARC-cARC-eOBQAHellaSWinoGAvg.
Llama-3-8B
FFT8.03G (100%)73.886.877.676.787.684.193.285.183.1
LoRA84M (1.03%)76.389.882.783.491.788.495.888.787.1
DoRA85M (1.05%)75.989.882.783.593.287.995.388.287.1
LoRI-D44M (0.54%)76.489.082.784.293.688.595.987.987.3
LoRI-S4.4M (0.05%)75.289.282.883.892.688.495.287.586.8
Mistral-7B
FFT7.24G (100%)74.184.678.079.390.588.494.483.584.1
LoRA84M (1.15%)75.290.182.982.992.088.795.188.186.9
DoRA85M (1.16%)75.890.482.983.392.690.696.387.987.5
LoRI-D44M (0.60%)75.990.683.083.691.988.495.987.487.1
LoRI-S4.4M (0.06%)74.090.182.682.691.590.895.587.586.8
+ +Table 2: Performance comparison of different adaptation methods on GSM8K (math), HumanEval (code), and HEx-PHI (safety) benchmarks using Llama-3 and Mistral with $r = 32$ . Bold indicates the best-performing method, and underline indicates the second-best. + +
Method# Params (%)GSM8KHumanEvalHEx-PHI
Pass@1Pass@5Pass@10
Llama-3-8B
FFT8.03G (100%)58.830.539.341.794.8
LoRA84M (1.03%)64.434.746.450.891.6
DoRA85M (1.05%)65.433.144.048.693.6
LoRI-D44M (0.54%)63.243.257.663.292.8
LoRI-S4.4M (0.05%)62.741.354.459.693.8
Mistral-7B
FFT7.24G (100%)55.529.138.540.494.1
LoRA84M (1.15%)57.833.842.445.391.9
DoRA85M (1.16%)57.533.742.646.895.3
LoRI-D44M (0.60%)58.033.842.045.194.7
LoRI-S4.4M (0.06%)57.133.743.648.195.9
+ +While full fine-tuning (FFT) updates all model parameters, LoRA and DoRA reduce the number of trainable parameters to approximately $1\%$ . LoRI-D further reduces this to about $0.5\%$ by freezing matrices $A$ , and LoRI-S pushes this reduction to $0.05\%$ by applying $90\%$ sparsity to matrices $B$ , achieving a $95\%$ reduction in trainable parameters compared to LoRA. Despite tuning fewer parameters, LoRI-D and LoRI-S achieve performance comparable to - and even better than - LoRA and DoRA on NLU, math, code, and safety tasks. LoRI-D generally outperforms LoRI-S slightly, due to the extremely limited parameter budget in LoRI-S. Remarkably, LoRI-D and LoRI-S consistently outperform FFT, LoRA, and DoRA on code generation tasks. On HumanEval with Llama-3, LoRI-D achieves a pass@10 score of $63.2\%$ , outperforming LoRA by $24.4\%$ . LoRI-S achieves $59.6\%$ pass@10, exceeding LoRA by $17.3\%$ . + +The strong performance of LoRI-D suggests that effective adaptation can be achieved without updating $A$ , while the strong performance of LoRI-S indicates that $B$ contains substantial parameter redundancy. LoRI's performance gains are attributed to the principled use of sparsity, which serves as a strong regularizer during adaptation. Additionally, LoRI preserves latent task-specific knowledge embedded in the pretrained model. This supports the view that supervised fine-tuning (SFT) primarily unlocks capabilities already present in pretrained models, rather than introducing new ones, which is consistent with findings from Liu et al. (2024); Yu et al. (2024). + +Table 3: Comparison of merging methods for combining four adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Llama-3-8B, rank $r = 32$ . **Bold** indicates the best-performing method, and **underline** indicates the second-best. + +
| Merging | Adaptation | NLU | GSM8K | HumanEval Pass@1 | HumanEval Pass@5 | HumanEval Pass@10 | HEx-PHI |
|---|---|---|---|---|---|---|---|
| Single-Task | LoRI-D | 87.3 | 63.2 | 43.2 | 57.6 | 63.2 | 92.8 |
| Concat | LoRA | **85.0** | **57.8** | 13.0 | 20.0 | 22.3 | 84.4 |
| Linear | LoRA | <u>84.8</u> | 54.1 | 14.2 | 20.8 | 23.3 | 79.4 |
| Magnitude | LoRA | 81.9 | 50.3 | 24.1 | 36.7 | 42.4 | 74.4 |
| TIES | LoRA | 72.6 | 24.0 | 32.5 | 46.3 | 51.7 | 77.8 |
| DARE | LoRA | 79.1 | 48.9 | 34.1 | 48.7 | 53.5 | 74.1 |
| Concat | LoRI-D | 83.2 | <u>55.8</u> | <u>40.5</u> | **56.9** | **62.2** | **86.6** |
| Linear | LoRI-D | 82.5 | 53.8 | **40.9** | <u>54.9</u> | <u>60.3</u> | <u>85.9</u> |
| Concat | LoRI-S | 81.2 | 45.2 | 34.3 | 48.7 | 54.0 | 84.7 |
| Linear | LoRI-S | 79.1 | 41.3 | 23.2 | 36.6 | 42.3 | 78.8 |
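To make the difference between the merging schemes concrete, the following toy NumPy sketch contrasts concatenated and linear merging of two low-rank updates. It is illustrative only: the shapes, the uniform weight `w`, and the convention of folding the weight into the $A$ factor are arbitrary choices, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, w = 64, 64, 8, 0.5  # toy sizes; w = uniform merging weight

# Two task adapters: A is a frozen random projection, B carries the task update.
A1, B1 = rng.normal(size=(d_in, r)), rng.normal(size=(r, d_out))
A2, B2 = rng.normal(size=(d_in, r)), rng.normal(size=(r, d_out))

# Concatenated merging: stack along the rank axis (rank becomes 2r). The merged
# update is exactly the weighted sum of the individual updates -- no cross terms.
A_cat = np.concatenate([w * A1, w * A2], axis=1)
B_cat = np.concatenate([B1, B2], axis=0)
delta_cat = A_cat @ B_cat
assert np.allclose(delta_cat, w * (A1 @ B1) + w * (A2 @ B2))

# Linear merging: average the A's and B's separately, then multiply. Expanding
# the product leaves cross terms A1@B2 and A2@B1 -- the interference noted above.
delta_lin = (w * A1 + w * A2) @ (w * B1 + w * B2)
cross = w * w * (A1 @ B2 + A2 @ B1)
assert np.allclose(delta_lin, w * w * (A1 @ B1 + A2 @ B2) + cross)
```

The asserts confirm the algebra: concatenation reproduces the sum of per-task updates, while linear merging injects cross-term interactions between the two adapters.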
# 3.3 Adapter Merging

We consider four heterogeneous tasks for LoRA and LoRI merging: NLU, math, code, and safety. This setting is generally more challenging than merging homogeneous adapters, such as multiple NLU adapters. Table 3 presents results for merging LoRAs and LoRIs on these four tasks. For LoRI, we apply concatenated and linear merging to the LoRI-D and LoRI-S variants. Pruning-based methods such as magnitude pruning, TIES, and DARE are not applied to LoRI: they would prune the $A$ matrices while LoRI already sparsifies $B$, yielding an inconsistent pruning scheme across $A$ and $B$. Additional results, including experiments on merging three adapters and evaluations of pruning-based methods on LoRI, are provided in Appendix E.4 and E.5.

As shown in Table 3, directly merging LoRAs results in substantial performance degradation, particularly for code generation and safety alignment. Although pruning-based methods (e.g., DARE, TIES) improve code performance, they often compromise accuracy on other tasks. In contrast, LoRI achieves consistently strong performance across all tasks.

Concatenated merging with LoRI-D achieves the best overall performance, closely matching the single-task baseline, which indicates minimal interference between LoRI adapters. For instance, it achieves $62.2\%$ pass@10 on HumanEval and an $86.6\%$ refusal rate on HEx-PHI. Despite using only $5\%$ of the parameters of LoRA, LoRI-S retains competitive performance. Notably, on code and safety tasks, concatenated merging with LoRI-S outperforms all LoRA merging methods.

Linear merging with LoRI also performs competitively, though it lags slightly behind concatenated merging due to cross-term interactions that introduce some interference. LoRI eliminates the need for manual selection of merging methods: simple concatenated merging yields strong results.
The choice between LoRI-D and LoRI-S can then be guided by the desired trade-off between performance and parameter efficiency. We also note an important trade-off between code generation performance and other domains during adapter merging, a phenomenon explored further in Section 3.5.

# 3.4 Continual Learning

While merging adapters enables multi-task capabilities, it falls short of providing robust safety alignment in scenarios that demand strong safety guarantees. As shown in Table 3, the highest refusal rate on HEx-PHI achieved through LoRA or LoRI merging is $86.6\%$. To address this limitation, we adopt a two-phase training process: first, a safety adapter is trained on the safety alignment dataset Saferpaca; then, it is individually adapted to each downstream task, including NLU, math, and code.

![](images/8c1d20c92d0e7590d20654db0d23eee565a021dbcb006488d103caa7576dd0a8.jpg)
Figure 3: Continual learning results from safety to NLU, math, and code domains. Results for NLU are averaged over eight tasks. GSM8K accuracy, HumanEval pass@10, and HEx-PHI refusal rate are reported individually. Base model: Llama-3-8B, rank $r = 32$.

![](images/162d9ff1efefe62f414fe64facb19cba51d7cd7f30e0907041057071f5acf292.jpg)

![](images/a9587bb9a047f741a1aad793265a30edeb10f5c174f974a01bc4155d2c385d2f.jpg)

![](images/4736b8e087c9df69fffd2d504fa1bf7f7e710aab4210389b186572a533c25260.jpg)

![](images/31335e88f00e33e29f1b10efff9ce994dbee1c672b25d3686fd244d2b5189c0e.jpg)
Figure 4: Ablation studies across different settings: (a) effect of calibration steps; (b) sparsity ratios across layers and projections; (c) effect of mask granularities; (d) effect of merging weights. Base model: Llama-3-8B, rank $r = 32$. Additional ablation studies are provided in Appendix F.

Figure 3 presents results from these continual learning experiments.
LoRA exhibits severe catastrophic forgetting of safety alignment – particularly in the safety $\rightarrow$ NLU experiment – likely due to the large size of the NLU training split ( $\sim 170\mathrm{k}$ examples). Among all methods, LoRI-S best preserves safety alignment, even outperforming single-task LoRI-D. This is due to its $90\%$ sparsity in the $B$ matrices, which enables isolated parameter updates between the initial safety alignment and subsequent task adaptations. LoRI-D also shows some resistance to forgetting, benefiting from frozen $A$ matrices. For task adaptation, LoRI-D generally outperforms LoRI-S, as the latter's aggressive sparsity limits its adaptation capacity. Overall, LoRI offers a lightweight and effective approach to building safety adapters that preserve alignment while supporting adaptation to downstream tasks.

# 3.5 Ablation Studies

Calibration Steps. Calibration steps refer to the number of update steps used to generate sparse masks for each task. Figure 4(a) shows how the performance of LoRI-S changes with different numbers of calibration steps on math and code tasks. We observe that performance generally improves as the number of calibration steps increases. Since the masks need to be calibrated only once per task and can be reused, we use the entire adaptation dataset as the calibration dataset to achieve the best performance.

Sparsity Ratio. We use model-wise masks in our experiments, which retain the highest-magnitude parameters across all layers and projections. Figure 4(b) presents the sparsity ratios of different projection types (e.g., up, down, key, value) across layers under $90\%$ sparsity on GSM8K. We observe that feedforward (FFN) projections tend to retain more parameters (i.e., lower sparsity) than self-attention projections, indicating that they are more critical for adaptation.
Additionally, the top layers are less sparse than the lower layers, suggesting that they play a more important role in adaptation.

Mask Granularity. We compare five levels of mask granularity under $90\%$ sparsity on GSM8K, as shown in Figure 4(c): module-wise, projection-wise, layer-wise, and matrix-wise masking against our model-wise masking, where parameters are selected within progressively smaller scopes. We find that coarse-grained masking (e.g., model-wise) yields the best performance, while fine-grained masking (e.g., matrix-wise) results in degradation. This suggests that global magnitude-based selection enables better parameter allocation, as the importance of projection matrices varies across the model.

Merging Weights. We adopt uniform weights across all adapters for adapter merging, rather than task-specific weights, as we do not wish to prioritize any individual task. Figure 4(d) shows the effect of different merging weights (0.2, 0.3, 0.4) for concatenated merging with LoRI-S. We observe that LoRI is moderately sensitive to merging weights, with a noticeable trade-off between performance on code tasks and other domains. We adopt 0.3 for all adapters in LoRI-S merging, as it offers balanced performance across domains.

# 4 Conclusion

In this work, we introduced LoRI, a simple yet effective approach to parameter-efficient fine-tuning (PEFT) that substantially reduces trainable parameters while minimizing cross-task interference. By freezing the projection matrices $A$ as random projections and sparsifying the matrices $B$ with task-specific masks, LoRI achieves strong single-task performance across diverse domains – including natural language understanding, mathematical reasoning, code generation, and safety alignment – while reducing trainable parameters by up to $95\%$ compared to LoRA.
Furthermore, LoRI enables training-free adapter merging with minimal performance degradation, and supports continual learning with significantly reduced catastrophic forgetting. It also provides a lightweight approach to building safety adapters that preserve the safety alignment of the base model.

Future Work. We identify several promising avenues for extending this work. While LoRI currently leverages unstructured magnitude-based sparsity, future research can explore structured sparsity patterns – such as block sparsity, head pruning, or group-wise masking – which may offer better hardware compatibility. Additionally, although this study focuses on LLMs, the core design of LoRI is modality-agnostic. Extending LoRI to diffusion and vision-language models for multi-modal generation is a promising direction.

# Acknowledgements

This material is based upon work partially supported by NSF Grant No. 2229885 (NSF Institute for Trustworthy AI in Law and Society, TRAILS). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

# References

Federico Bianchi, Mirac Suzgun, Giuseppe Attanasio, Paul Röttger, Dan Jurafsky, Tatsunori Hashimoto, and James Zou. Safety-tuned llamas: Lessons from improving the safety of large language models that follow instructions. arXiv preprint arXiv:2309.07875, 2023.
Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. Piqa: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 7432-7439, 2020.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901, 2020.
Rich Caruana. Multitask learning.
Machine learning, 28:41-75, 1997.
Sahil Chaudhary. Code alpaca: An instruction-following llama model for code generation, 2023.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1-113, 2023.
Alexandra Chronopoulou, Matthew E Peters, Alexander Fraser, and Jesse Dodge. Adaptersoup: Weight averaging to improve generalization of pretrained language models. arXiv preprint arXiv:2302.07027, 2023.
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. Boolq: Exploring the surprising difficulty of natural yes/no questions. arXiv preprint arXiv:1905.10044, 2019.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
Ning Ding, Xingtai Lv, Qiaosen Wang, Yulin Chen, Bowen Zhou, Zhiyuan Liu, and Maosong Sun. Sparse low-rank adaptation of pre-trained language models. arXiv preprint arXiv:2311.11696, 2023.
Guanting Dong, Hongyi Yuan, Keming Lu, Chengpeng Li, Mingfeng Xue, Dayiheng Liu, Wei Wang, Zheng Yuan, Chang Zhou, and Jingren Zhou. How abilities in large language models are affected by supervised fine-tuning data composition.
arXiv preprint arXiv:2310.05492, 2023.
Ziqi Gao, Qichao Wang, Aochuan Chen, Zijing Liu, Bingzhe Wu, Liang Chen, and Jia Li. Parameter-efficient fine-tuning with discrete fourier transform. arXiv preprint arXiv:2405.03003, 2024.
Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
Han Guo, Philip Greengard, Eric P Xing, and Yoon Kim. Lq-lora: Low-rank plus quantized matrix decomposition for efficient language model finetuning. arXiv preprint arXiv:2311.12023, 2023.
Soufiane Hayou, Nikhil Ghosh, and Bin Yu. Lora+: Efficient low rank adaptation of large models. arXiv preprint arXiv:2402.12354, 2024.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026-1034, 2015.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In International conference on machine learning, pp. 2790-2799. PMLR, 2019.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, and Roy Ka-Wei Lee. Llm-adapters: An adapter family for parameter-efficient fine-tuning of large language models. arXiv preprint arXiv:2304.01933, 2023.
Chengsong Huang, Qian Liu, Bill Yuchen Lin, Tianyu Pang, Chao Du, and Min Lin. Lorahub: Efficient cross-task generalization via dynamic lora composition. arXiv preprint arXiv:2307.13269, 2023.
Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. Editing models with task arithmetic. arXiv preprint arXiv:2212.04089, 2022.
Leonardo Iurada, Marco Ciccone, and Tatiana Tommasi. Efficient model editing with task-localized sparse fine-tuning. arXiv preprint arXiv:2504.02620, 2025.
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023.
Damjan Kalajdzievski. A rank stabilization scaling factor for fine-tuning with lora. arXiv preprint arXiv:2312.03732, 2023.
Tatsuya Konishi, Mori Kurokawa, Chihiro Ono, Zixuan Ke, Gyuhak Kim, and Bing Liu. Parameter-level soft-masking for continual learning. In International Conference on Machine Learning, pp. 17492-17505. PMLR, 2023.
Dawid J Kopiczko, Tijmen Blankevoort, and Yuki M Asano. Vera: Vector-based random matrix adaptation. arXiv preprint arXiv:2310.11454, 2023.
Namhoon Lee, Thalaiyasingam Ajanthan, and Philip HS Torr. Snip: Single-shot network pruning based on connection sensitivity. arXiv preprint arXiv:1810.02340, 2018.
Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691, 2021.
Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. Advances in neural information processing systems, 31, 2018.
Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190, 2021.
Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12):2935-2947, 2017.
Zujie Liang, Feng Wei, Yin Jie, Yuxi Qian, Zhenghong Hao, and Bing Han.
Prompts can play lottery tickets well: Achieving lifelong information extraction via lottery prompt tuning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 277-292, 2023.
Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin A Raffel. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. Advances in Neural Information Processing Systems, 35:1950-1965, 2022.
Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, and Min-Hung Chen. Dora: Weight-decomposed low-rank adaptation. In Forty-first International Conference on Machine Learning, 2024.
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. arXiv preprint arXiv:2110.07602, 2021.
David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. Advances in neural information processing systems, 30, 2017.
Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, and Yue Zhang. An empirical study of catastrophic forgetting in large language models during continual fine-tuning. arXiv preprint arXiv:2308.08747, 2023.
Arun Mallya and Svetlana Lazebnik. Packnet: Adding multiple tasks to a single network by iterative pruning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 7765-7773, 2018.
Michael S Matena and Colin A Raffel. Merging models with fisher-weighted averaging. Advances in Neural Information Processing Systems, 35:17703-17716, 2022.
Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of learning and motivation, volume 24, pp. 109-165. Elsevier, 1989.
Fanxu Meng, Zhaohui Wang, and Muhan Zhang.
Pissa: Principal singular values and singular vectors adaptation of large language models. Advances in Neural Information Processing Systems, 37:121038-121072, 2024.
Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. arXiv preprint arXiv:1809.02789, 2018.
Guillermo Ortiz-Jimenez, Alessandro Favero, and Pascal Frossard. Task arithmetic in the tangent space: Improved editing of pre-trained models. Advances in Neural Information Processing Systems, 36:66727-66754, 2023.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744, 2022.
Ashwinee Panda, Berivan Isik, Xiangyu Qi, Sanmi Koyejo, Tsachy Weissman, and Prateek Mittal. Lottery ticket adaptation: Mitigating destructive interference in llms. arXiv preprint arXiv:2406.16797, 2024.
Jonas Pfeiffer, Aishwarya Kamath, Andreas Rückle, Kyunghyun Cho, and Iryna Gurevych. Adapterfusion: Non-destructive task composition for transfer learning. arXiv preprint arXiv:2005.00247, 2020.
Akshara Prabhakar, Yuanzhi Li, Karthik Narasimhan, Sham Kakade, Eran Malach, and Samy Jelassi. Lora soups: Merging loras for practical skill composition tasks. arXiv preprint arXiv:2410.13025, 2024.
Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson. Fine-tuning aligned language models compromises safety, even when users do not intend to! arXiv preprint arXiv:2310.03693, 2023.
Vinay Venkatesh Ramasesh, Aitor Lewkowycz, and Ethan Dyer. Effect of scale on catastrophic forgetting in neural networks. In International Conference on Learning Representations, 2021.
David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy Lillicrap, and Gregory Wayne.
Experience replay for continual learning. Advances in neural information processing systems, 32, 2019.
Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. Communications of the ACM, 64(9):99-106, 2021.
Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. Socialiqa: Commonsense reasoning about social interactions. arXiv preprint arXiv:1904.09728, 2019.
Ozan Sener and Vladlen Koltun. Multi-task learning as multi-objective optimization. Advances in neural information processing systems, 31, 2018.
Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. Advances in neural information processing systems, 30, 2017.
George Stoica, Pratik Ramesh, Boglarka Ecsedi, Leshem Choshen, and Judy Hoffman. Model merging with svd to tie the knots. arXiv preprint arXiv:2410.19735, 2024.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Stanford alpaca: An instruction-following llama model, 2023.
Chunlin Tian, Zhan Shi, Zhijiang Guo, Li Li, and Cheng-Zhong Xu. Hydralora: An asymmetric lora architecture for efficient fine-tuning. Advances in Neural Information Processing Systems, 37:9565-9584, 2024.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
Hanqing Wang, Bowen Ping, Shuo Wang, Xu Han, Yun Chen, Zhiyuan Liu, and Maosong Sun. Lora-flow: Dynamic lora fusion for large language models in generative tasks.
arXiv preprint arXiv:2402.11455, 2024a.
Liyuan Wang, Xingxing Zhang, Hang Su, and Jun Zhu. A comprehensive survey of continual learning: theory, method and application. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024b.
Xiao Wang, Tianze Chen, Qiming Ge, Han Xia, Rong Bao, Rui Zheng, Qi Zhang, Tao Gui, and Xuanjing Huang. Orthogonal subspace learning for language model continual learning. arXiv preprint arXiv:2310.14152, 2023.
Tongtong Wu, Massimo Caccia, Zhuang Li, Yuan Fang Li, Guilin Qi, and Gholamreza Haffari. Pretrained language model in continual learning: A comparative study. In International Conference on Learning Representations, 2022.
Xun Wu, Shaohan Huang, and Furu Wei. Mixture of lora experts. arXiv preprint arXiv:2404.13628, 2024.
Feng Xiong, Runxi Cheng, Wang Chen, Zhanqiu Zhang, Yiwen Guo, Chun Yuan, and Ruifeng Xu. Multi-task model merging via adaptive weight disentanglement. arXiv preprint arXiv:2411.18729, 2024.
Prateek Yadav, Derek Tam, Leshem Choshen, Colin A Raffel, and Mohit Bansal. Ties-merging: Resolving interference when merging models. Advances in Neural Information Processing Systems, 36:7093-7115, 2023.
Le Yu, Bowen Yu, Haiyang Yu, Fei Huang, and Yongbin Li. Language models are super mario: Absorbing abilities from homologous models as a free lunch. In Forty-first International Conference on Machine Learning, 2024.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019.
Feiyu Zhang, Liangzhi Li, Junhao Chen, Zhouqiang Jiang, Bowen Wang, and Yiming Qian. Increlora: Incremental parameter allocation method for parameter-efficient fine-tuning. arXiv preprint arXiv:2308.12043, 2023a.
Longteng Zhang, Lin Zhang, Shaohuai Shi, Xiaowen Chu, and Bo Li. Lora-fa: Memory-efficient low-rank adaptation for large language models fine-tuning. arXiv preprint arXiv:2308.03303, 2023b.
Mingyang Zhang, Hao Chen, Chunhua Shen, Zhen Yang, Linlin Ou, Xinyi Yu, and Bohan Zhuang. Loraprune: Pruning meets low-rank parameter-efficient fine-tuning. arXiv preprint arXiv:2305.18403, 2023c.
Qingru Zhang, Minshuo Chen, Alexander Bukharin, Nikos Karampatziakis, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao. Adalora: Adaptive budget allocation for parameter-efficient fine-tuning. arXiv preprint arXiv:2303.10512, 2023d.
Hongyun Zhou, Xiangyu Lu, Wang Xu, Conghui Zhu, Tiejun Zhao, and Muyun Yang. Lora-drop: Efficient lora parameter pruning based on output evaluation. arXiv preprint arXiv:2402.07721, 2024.
Jiacheng Zhu, Kristjan Greenewald, Kimia Nadjahi, Haitz Sáez De Ocariz Borde, Rickard Brüel Gabrielsson, Leshem Choshen, Marzyeh Ghassemi, Mikhail Yurochkin, and Justin Solomon. Asymmetry in low-rank adapters of foundation models. arXiv preprint arXiv:2402.16842, 2024.

# A Related Works

Parameter-Efficient Fine-Tuning. Parameter-efficient fine-tuning (PEFT) methods for LLMs (Houlsby et al., 2019; Pfeiffer et al., 2020; Li & Liang, 2021; Lester et al., 2021; Liu et al., 2021; Hu et al., 2021) have received increasing attention in recent years. Among them, LoRA (Hu et al., 2021), which introduces trainable low-rank matrices, has become one of the most widely adopted PEFT methods due to its strong performance and efficiency. LoRI is motivated by reducing parameter redundancy in LoRA through an asymmetric design: we freeze the projection matrices $A$ and enforce sparsity on the matrices $B$. Our work is closely related to several lines of research. In terms of parameter efficiency, our goal is shared by methods such as IA3 (Liu et al., 2022), VeRA (Kopiczko et al., 2023), and FourierFT (Gao et al., 2024). More specifically, our approach builds on the concept of asymmetric LoRA variants, which has been explored in works like LoRA-FA (Zhang et al., 2023b), AsymmetryLoRA (Zhu et al., 2024), and HydraLoRA (Tian et al., 2024).
However, LoRI is distinct from these works in uniquely combining frozen $A$ with sparsely updated $B$. This targeted, asymmetric pruning of only the $B$ matrices also differentiates our method from general LoRA pruning techniques like Loraprune (Zhang et al., 2023c), LoRA-drop (Zhou et al., 2024), and SoRA (Ding et al., 2023), as well as SVD-based approaches such as AdaLoRA (Zhang et al., 2023d) and PiSSA (Meng et al., 2024).

Model Merging. Achieving multi-task capabilities typically involves training on a mixture of diverse task datasets (Caruana, 1997; Sener & Koltun, 2018), which is often prohibitively expensive in time and compute. As an alternative, model merging has gained attention as a way to combine multiple task-specific models into a single model (Matena & Raffel, 2022; Ilharco et al., 2022; Yadav et al., 2023; Yu et al., 2024). Fisher Merging (Matena & Raffel, 2022) uses weights from the Fisher information matrix to combine parameters, while Task Arithmetic (Ilharco et al., 2022) employs predefined scaling factors. TIES-Merging (Yadav et al., 2023) prunes low-magnitude parameters and merges those with consistent signs, and DARE (Yu et al., 2024) applies random pruning with rescaling. However, identifying the optimal merging method often requires trial and error. More recently, there has been growing interest in merging task-specific LoRA adapters (Chronopoulou et al., 2023; Huang et al., 2023; Wu et al., 2024; Wang et al., 2024a; Prabhakar et al., 2024; Stoica et al., 2024), often utilizing Mixture-of-Experts (MoE) architectures. Nonetheless, these methods typically require additional training to coordinate the adapters effectively. In contrast, LoRI eliminates the need for manual selection of merging methods or additional training. By ensuring approximate orthogonality between adapters, LoRI minimizes interference and preserves task-specific performance.

Catastrophic Forgetting.
Catastrophic forgetting is a fundamental challenge in continual learning (McCloskey & Cohen, 1989; Ramasesh et al., 2021; Liang et al., 2023; Wang et al., 2024b), where neural networks struggle to retain previously learned knowledge when adapting to new tasks. Wu et al. (2022) analyzed this phenomenon using layer-wise and task-wise probing to assess knowledge retention across tasks. Several studies (Dong et al., 2023; Luo et al., 2023) have empirically examined catastrophic forgetting in the continual fine-tuning of LLMs. To mitigate catastrophic forgetting, various approaches have been proposed. Rehearsal-based methods (Rolnick et al., 2019; Shin et al., 2017) store or generate past data to reinforce prior knowledge during training. Parameter isolation methods (Rusu et al., 2016; Mallya & Lazebnik, 2018; Konishi et al., 2023; Panda et al., 2024) allocate separate subnetworks or sparsely mask parameters for different tasks to prevent interference. Additionally, O-LoRA (Wang et al., 2023) learns tasks in distinct low-rank subspaces while ensuring orthogonality between them. LoRI falls under the category of parameter isolation methods, leveraging sparse task-specific masks to mitigate catastrophic forgetting during continual learning.

# B Algorithm of LoRI

The full procedure of LoRI is summarized in Algorithm 1.
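As a companion to Algorithm 1, the sketch below illustrates its masking machinery in plain NumPy: a single global magnitude threshold shared across all layers and projections (lines 11-15), followed by a masked gradient step (line 18). The calibrated $B$ matrices here are random stand-ins, since the calibration fine-tuning itself is elided:

```python
import numpy as np

rng = np.random.default_rng(1)
s = 0.9  # sparsity ratio: keep the top 10% of |B| entries model-wide

# Stand-ins for the calibrated B matrices of a few (layer, projection) pairs;
# in the real procedure these come from a short fine-tuning run (lines 6-10).
calibrated_B = {("l0", "q"): rng.normal(size=(8, 64)),
                ("l0", "up"): rng.normal(size=(8, 128)),
                ("l1", "q"): rng.normal(size=(8, 64))}

# Line 11: one global threshold over the pooled magnitudes of every B
all_mags = np.concatenate([np.abs(b).ravel() for b in calibrated_B.values()])
tau = np.quantile(all_mags, s)

# Lines 12-14: binary masks keep entries at or above the threshold; B resets to zero
masks = {k: (np.abs(b) >= tau) for k, b in calibrated_B.items()}

kept = sum(m.sum() for m in masks.values()) / all_mags.size
print(f"kept fraction: {kept:.3f}")  # ≈ 0.10 model-wide, by construction

# Line 18: during adaptation, updates flow only through the fixed mask
B = np.zeros_like(calibrated_B[("l0", "q")])
grad = rng.normal(size=B.shape)      # hypothetical gradient of the task loss
B -= 0.01 * grad * masks[("l0", "q")]
assert np.all(B[~masks[("l0", "q")]] == 0)  # masked-out entries never move
```

Because the threshold is computed over the pooled magnitudes rather than per matrix, different projections end up with different effective sparsity, which is exactly the behavior examined in the sparsity-ratio ablation of Section 3.5.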
Algorithm 1: LoRA with Reduced Interference (LoRI)
Require: Task $t$, mask calibration dataset $\mathcal{D}_t^C$, adaptation dataset $\mathcal{D}_t$, sparsity ratio $s$, model $f$, loss function $\mathcal{L}_t$, learning rate $\eta_t$
1: for each layer $l = 1,\ldots,L$ do
2:   for each projection $m = 1,\ldots,M$ do
3:     Initialize: $A_{t}^{(l,m)}\in \mathbb{R}^{d_{\mathrm{in}}\times r}\leftarrow \mathcal{U}(-\sqrt{3/d_{\mathrm{in}}},\sqrt{3/d_{\mathrm{in}}})$, $B_{t}^{(l,m)}\in \mathbb{R}^{r\times d_{\mathrm{out}}}\leftarrow 0$
4:   end for
5: end for
6: for each batch $(x,y)$ sampled from $\mathcal{D}_t^C$ do ▷ Calibration steps
7:   for each $(l,m)$ do
8:     $B_{t}^{(l,m)}\gets B_{t}^{(l,m)} - \eta_{t}\cdot \nabla_{B_{t}^{(l,m)}}\mathcal{L}_{t}(f(x,y;B_{t}^{(l,m)}))$
9:   end for
10: end for
11: $\tau_t\gets \mathrm{Quantile}_s\left(\bigcup_{l,m}|B_t^{(l,m)}|\right)$ ▷ Compute global threshold $\tau_t$
12: for each $(l,m)$ do
13:   $M_t^{(l,m)}\gets \mathbb{I}\left(|B_t^{(l,m)}|\geq \tau_t\right)$ ▷ Generate mask for the top $(1-s)$ fraction of entries
14:   $B_{t}^{(l,m)}\gets 0$ ▷ Reset to zero before adaptation
15: end for
16: for each batch $(x,y)$ sampled from $\mathcal{D}_t$ do ▷ Adaptation steps
17:   for each $(l,m)$ do
18:     $B_{t}^{(l,m)}\gets B_{t}^{(l,m)} - \eta_{t}\cdot \left(\nabla_{B_{t}^{(l,m)}}\mathcal{L}_{t}(f(x,y;B_{t}^{(l,m)}))\odot M_{t}^{(l,m)}\right)$
19:   end for
20: end for

# C Proof of Property 1

Proof. Our goal is to show that the Frobenius inner product $\langle \Delta_s, \Delta_t \rangle_F$ converges to zero in probability. Let $\tilde{B}_s = B_s \odot M_s$ and $\tilde{B}_t = B_t \odot M_t$. The inner product is given by:

$$
\langle \Delta_s, \Delta_t \rangle_F = \operatorname{Tr}\left(\Delta_s^\top \Delta_t\right) = \operatorname{Tr}\left(\tilde{B}_s^\top A_s^\top A_t \tilde{B}_t\right).
\tag{9}
$$

We will prove this by showing that the random matrix $X = A_s^\top A_t$ converges to the zero matrix in probability as $d_{\mathrm{in}} \to \infty$.

Let $a_s^k, a_t^l \in \mathbb{R}^{d_{\mathrm{in}}}$ be the $k$-th and $l$-th columns of $A_s$ and $A_t$, respectively. The entries of these vectors are i.i.d. from a Kaiming Uniform distribution $\mathcal{U}[-a, a]$ with $a = \sqrt{3/d_{\mathrm{in}}}$, which implies a mean of $0$ and a variance of $\sigma^2 = a^2/3 = 1/d_{\mathrm{in}}$. An entry of $X$ is the inner product $X_{kl} = (a_s^k)^\top a_t^l = \sum_{i=1}^{d_{\mathrm{in}}} (A_s)_{ik} (A_t)_{il}$.

Let $Z_i = (A_s)_{ik}(A_t)_{il}$. The terms $Z_i$ are i.i.d. with $\mathbb{E}[Z_i] = \mathbb{E}[(A_s)_{ik}]\,\mathbb{E}[(A_t)_{il}] = 0$, and each term is bounded: $|Z_i| \leq a^2 = 3/d_{\mathrm{in}}$. We apply Hoeffding's inequality to the sum $\sum_{i=1}^{d_{\mathrm{in}}} Z_i$, where each term lies in $[-3/d_{\mathrm{in}}, 3/d_{\mathrm{in}}]$:

$$
\mathbb{P}\left(|X_{kl}| \geq t\right) = \mathbb{P}\left(\left|\sum_{i=1}^{d_{\mathrm{in}}} Z_i\right| \geq t\right) \leq 2\exp\left(\frac{-2t^2}{\sum_{i=1}^{d_{\mathrm{in}}} (6/d_{\mathrm{in}})^2}\right) = 2\exp\left(\frac{-t^2 d_{\mathrm{in}}}{18}\right). \tag{10}
$$

We now bound the probability that any of the $r^2$ entries of $X$ exceeds a threshold $t$ using the union bound:

$$
\mathbb{P}\left(\max_{k,l} |X_{kl}| \geq t\right) = \mathbb{P}\left(\bigcup_{k,l=1}^{r} \{|X_{kl}| \geq t\}\right) \leq \sum_{k,l=1}^{r} \mathbb{P}\left(|X_{kl}| \geq t\right) \leq 2r^2 \exp\left(\frac{-t^2 d_{\mathrm{in}}}{18}\right). \tag{11}
$$

Table 4: Hyperparameter settings for LoRI on NLU datasets.
| Method | LoRI-D | LoRI-S | LoRI-D | LoRI-S | LoRI-D | LoRI-S | LoRI-D | LoRI-S |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Base Model | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Mistral | Mistral | Mistral | Mistral |
| Rank $r$ | 32 | 32 | 64 | 64 | 32 | 32 | 64 | 64 |
| $\alpha$ | 64 | 64 | 128 | 128 | 64 | 64 | 128 | 128 |
| Sparsity Ratio | 0 | 0.9 | 0 | 0.9 | 0 | 0.9 | 0 | 0.9 |
| Learning Rate | 5e-5 | 5e-4 | 5e-5 | 1e-4 | 1e-5 | 1e-4 | 1e-5 | 1e-4 |
| Dropout | 0.05 | | | | | | | |
| Optimizer | AdamW | | | | | | | |
| Batch size | 32 | | | | | | | |
| Warmup Steps | 0 | | | | | | | |
| Epochs | 1 | | | | | | | |
| Where | q, k, v, o, gate, up, down | | | | | | | |
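As a side note, the concentration behavior behind this proof is easy to check numerically. The following pure-Python sketch (not from the paper; toy sizes only) draws Kaiming-uniform factors as in Algorithm 1 and shows that $\|X\|_F = \|A_s^\top A_t\|_F$ shrinks roughly like $r/\sqrt{d_{\mathrm{in}}}$ as $d_{\mathrm{in}}$ grows:

```python
import math
import random

def frob_norm_X(d_in, r, rnd):
    """||A_s^T A_t||_F for two independent Kaiming-uniform factor matrices."""
    a = math.sqrt(3.0 / d_in)  # entries ~ U[-a, a], so variance 1/d_in
    A_s = [[rnd.uniform(-a, a) for _ in range(r)] for _ in range(d_in)]
    A_t = [[rnd.uniform(-a, a) for _ in range(r)] for _ in range(d_in)]
    total = 0.0
    for k in range(r):
        for l in range(r):
            x_kl = sum(A_s[i][k] * A_t[i][l] for i in range(d_in))
            total += x_kl * x_kl
    return math.sqrt(total)

rnd = random.Random(0)
r = 8
for d_in in (64, 1024, 4096):
    mean = sum(frob_norm_X(d_in, r, rnd) for _ in range(10)) / 10
    print(d_in, round(mean, 3))  # decays roughly like r / sqrt(d_in)
```

Since $\mathbb{E}\|X\|_F^2 = r^2/d_{\mathrm{in}}$ under these initializations, the printed averages sit near $r/\sqrt{d_{\mathrm{in}}}$ and shrink as the width grows, matching the bound in Eq. (14).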
We can now show that $\| X \|_F$ is small with high probability. Let the failure probability be $\delta$. By setting the bound from the previous step to $\delta$, we can solve for $t$:

$$
\delta = 2r^2 \exp\left(\frac{-t^2 d_{\mathrm{in}}}{18}\right) \Longrightarrow t = \sqrt{\frac{18\log\left(2r^2/\delta\right)}{d_{\mathrm{in}}}}. \tag{12}
$$

With probability at least $1 - \delta$, we have $\max_{k,l} |X_{kl}| \leq t$. This allows us to bound the Frobenius norm of $X$:

$$
\| X \|_F^2 = \sum_{k,l=1}^{r} |X_{kl}|^2 \leq r^2 \left(\max_{k,l} |X_{kl}|\right)^2 \leq r^2 t^2. \tag{13}
$$

Thus, with probability at least $1 - \delta$:

$$
\| X \|_F \leq r \cdot t = r\sqrt{\frac{18\log(2r^2/\delta)}{d_{\mathrm{in}}}} = O\left(r\sqrt{\frac{\log r}{d_{\mathrm{in}}}}\right). \tag{14}
$$

Since $r \ll d_{\mathrm{in}}$, the term $\| X \|_F \to 0$ as $d_{\mathrm{in}} \to \infty$. This shows that $X$ converges to the zero matrix in probability.

Finally, we bound the magnitude of the original inner product using the Cauchy–Schwarz inequality for the Frobenius inner product and the sub-multiplicative property of the Frobenius norm:

$$
\begin{aligned}
\left|\langle \Delta_s, \Delta_t \rangle_F\right| = \left|\operatorname{Tr}\left(\tilde{B}_s^\top X \tilde{B}_t\right)\right| &= \left|\langle \tilde{B}_s, X\tilde{B}_t \rangle_F\right| \\
&\leq \|\tilde{B}_s\|_F \|X\tilde{B}_t\|_F \\
&\leq \|\tilde{B}_s\|_F \|X\|_F \|\tilde{B}_t\|_F. \tag{15}
\end{aligned}
$$

The norms $\|\tilde{B}_s\|_F$ and $\|\tilde{B}_t\|_F$ are finite, as determined by the trained adapters.
Since we have shown that $\| X \|_F \to 0$ in probability, the entire expression must also converge to 0 in probability.

# D Hyperparameter Settings

We summarize the hyperparameter settings used for LoRI in Tables 4, 5, 6, and 7. These include settings for different tasks (NLU, math, code, safety), adapter variants (LoRI-D, LoRI-S), base models (Llama-3-8B and Mistral-7B), and ranks (32 and 64).

For the merging experiments, the hyperparameter settings for merging four adapters are provided in Tables 8 and 9, while those for merging three adapters are provided in Table 10.

Table 5: Hyperparameter settings for LoRI on the math dataset GSM8K.
| Method | LoRI-D | LoRI-S | LoRI-D | LoRI-S | LoRI-D | LoRI-S | LoRI-D | LoRI-S |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Base Model | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Mistral | Mistral | Mistral | Mistral |
| Rank $r$ | 32 | 32 | 64 | 64 | 32 | 32 | 64 | 64 |
| $\alpha$ | 64 | 64 | 128 | 128 | 64 | 64 | 32 | 64 |
| Sparsity Ratio | 0 | 0.9 | 0 | 0.9 | 0 | 0.9 | 0 | 0.9 |
| Learning Rate | 5e-5 | 5e-4 | 5e-5 | 1e-3 | 5e-5 | 5e-4 | 1e-4 | 5e-4 |
| Dropout | 0.05 | | | | | | | |
| Optimizer | AdamW | | | | | | | |
| Batch size | 32 | | | | | | | |
| Warmup Steps | 0 | | | | | | | |
| Epochs | 3 | | | | | | | |
| Where | q, k, v, o, gate, up, down | | | | | | | |
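To make the global thresholding of Algorithm 1 (lines 11–13) concrete, here is a minimal pure-Python sketch. It is illustrative only: the matrix shapes and the `layer*.proj*` names are hypothetical stand-ins for the calibrated $B$ matrices, not the real model's.

```python
import random

def global_masks(B_mats, s):
    """One global threshold tau over all |B| entries (Algorithm 1, line 11),
    then per-matrix binary masks keeping the top (1 - s) fraction (line 13)."""
    flat = sorted(abs(v) for B in B_mats.values() for row in B for v in row)
    tau = flat[min(int(s * len(flat)), len(flat) - 1)]  # empirical s-quantile
    return {name: [[1 if abs(v) >= tau else 0 for v in row] for row in B]
            for name, B in B_mats.items()}

rnd = random.Random(0)
# Stand-ins for calibrated B matrices (hypothetical shapes and names)
B_mats = {f"layer{l}.proj{m}": [[rnd.gauss(0, 1) for _ in range(16)] for _ in range(4)]
          for l in range(2) for m in range(3)}
masks = global_masks(B_mats, s=0.9)
kept = sum(v for M in masks.values() for row in M for v in row)
total = sum(len(row) for M in masks.values() for row in M)
print(kept, "of", total, "entries kept")  # about 10% survive the mask
```

Because the threshold is global, individual projections can end up more or less sparse than $s$; only the overall kept fraction is pinned to $1 - s$. During adaptation, gradients on $B$ are multiplied elementwise by these masks (line 18), so only the selected entries are ever updated.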
+ +Table 6: Hyperparameter settings for LoRI on the code dataset CodeAlpaca. + +
| Method | LoRI-D | LoRI-S | LoRI-D | LoRI-S | LoRI-D | LoRI-S | LoRI-D | LoRI-S |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Base Model | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Mistral | Mistral | Mistral | Mistral |
| Rank $r$ | 32 | 32 | 64 | 64 | 32 | 32 | 64 | 64 |
| $\alpha$ | 64 | 64 | 128 | 128 | 64 | 64 | 128 | 128 |
| Sparsity Ratio | 0 | 0.9 | 0 | 0.9 | 0 | 0.9 | 0 | 0.9 |
| Learning Rate | 5e-5 | 5e-4 | 1e-5 | 1e-4 | 5e-5 | 5e-4 | 1e-5 | 1e-4 |
| Dropout | 0.05 | | | | | | | |
| Optimizer | AdamW | | | | | | | |
| Batch size | 32 | | | | | | | |
| Warmup Steps | 0 | | | | | | | |
| Epochs | 2 | | | | | | | |
| Where | q, k, v, o, gate, up, down | | | | | | | |
+ +Table 7: Hyperparameter settings for LoRI on the safety dataset Saferpaca. + +
| Method | LoRI-D | LoRI-S | LoRI-D | LoRI-S | LoRI-D | LoRI-S | LoRI-D | LoRI-S |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Base Model | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Mistral | Mistral | Mistral | Mistral |
| Rank $r$ | 32 | 32 | 64 | 64 | 32 | 32 | 64 | 64 |
| $\alpha$ | 64 | 64 | 128 | 128 | 64 | 64 | 128 | 128 |
| Sparsity Ratio | 0 | 0.9 | 0 | 0.9 | 0 | 0.9 | 0 | 0.9 |
| Learning Rate | 5e-5 | 5e-4 | 1e-5 | 1e-4 | 5e-5 | 5e-4 | 1e-5 | 1e-4 |
| Dropout | 0.05 | | | | | | | |
| Optimizer | AdamW | | | | | | | |
| Batch size | 32 | | | | | | | |
| Warmup Steps | 0 | | | | | | | |
| Epochs | 1 | | | | | | | |
| Where | q, k, v, o, gate, up, down | | | | | | | |
+ +Table 8: Hyperparameter settings for merging four adapters using Llama-3-8B. + +
| Adaptation / Merging | LoRA Concat | LoRA Linear | LoRA Magnitude | LoRA TIES | LoRA DARE | LoRI-D Concat | LoRI-D Linear | LoRI-S Concat | LoRI-S Linear |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Base Model | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3 |
| Weights | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.3 | 0.3 |
| Density | - | - | 0.3 | 0.7 | 0.7 | - | - | - | - |
+ +Table 9: Hyperparameter settings for merging four adapters using Mistral-7B. + +
| Adaptation / Merging | LoRA Concat | LoRA Linear | LoRA Magnitude | LoRA TIES | LoRA DARE | LoRI-D Concat | LoRI-D Linear | LoRI-S Concat | LoRI-S Linear |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Base Model | Mistral | Mistral | Mistral | Mistral | Mistral | Mistral | Mistral | Mistral | Mistral |
| Weights | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.3 | 0.3 |
| Density | - | - | 0.3 | 0.7 | 0.7 | - | - | - | - |
+ +Table 10: Hyperparameter settings for merging three adapters using Llama-3-8B. + +
| Adaptation / Merging | LoRA Concat | LoRA Linear | LoRA Magnitude | LoRA TIES | LoRA DARE | LoRI-D Concat | LoRI-D Linear | LoRI-S Concat | LoRI-S Linear |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Base Model | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3 | Llama-3 |
| Weights | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.4 | 0.4 |
| Density | - | - | 0.3 | 0.7 | 0.7 | - | - | - | - |
+ +Table 11: Performance comparison of different adaptation methods on eight NLU benchmarks using Llama-3 with $r = 32$ . **Bold** indicates the best-performing method, and **underline** indicates the second-best. + +
| Method | # Params (%) | BoolQ | PIQA | SIQA | ARC-c | ARC-e | OBQA | HellaS | WinoG | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FFT | 8.03G (100%) | 73.8 | 86.8 | 77.6 | 76.7 | 87.6 | 84.1 | 93.2 | 85.1 | 83.1 |
| LoRA | 84M (1.03%) | 76.3 | 89.8 | 82.7 | 83.4 | 91.7 | 88.4 | 95.8 | 88.7 | 87.1 |
| VeRA | 1.38M (0.02%) | 64.4 | 81.8 | 62.6 | 67.3 | 85.7 | 60.9 | 78.5 | 56.9 | 69.8 |
| IA3 | 1.70M (0.02%) | 68.6 | 84.8 | 74.5 | 77.6 | 89.4 | 75.7 | 90.6 | 75.0 | 79.5 |
| LoRA-FA | 44M (0.54%) | 74.0 | 89.6 | 83.3 | 83.8 | 93.4 | 88.6 | 96.1 | 87.4 | 87.0 |
| AdaLoRA | 84M (1.03%) | 75.6 | 89.2 | 82.4 | 83.1 | 91.0 | 87.8 | 94.4 | 87.6 | 86.4 |
| rsLoRA | 84M (1.03%) | 72.8 | 84.8 | 78.8 | 76.0 | 87.0 | 85.0 | 91.0 | 82.8 | 82.3 |
| PiSSA | 84M (1.03%) | 68.1 | 84.4 | 78.2 | 75.1 | 85.1 | 82.8 | 89.3 | 82.8 | 80.7 |
| LoRA+ | 84M (1.03%) | 67.0 | 80.3 | 78.5 | 70.1 | 82.3 | 81.5 | 88.9 | 79.7 | 78.5 |
| DoRA | 85M (1.05%) | 75.9 | 89.8 | 82.7 | 83.5 | 93.2 | 87.9 | 95.3 | 88.2 | 87.1 |
| LoRI-D | 44M (0.54%) | 76.4 | 89.0 | 82.7 | 84.2 | 93.6 | 88.5 | 95.9 | 87.9 | 87.3 |
| LoRI-S | 4.4M (0.05%) | 75.2 | 89.2 | 82.8 | 83.8 | 92.6 | 88.4 | 95.2 | 87.5 | 86.8 |
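The "# Params" column for Llama-3-8B can be cross-checked from the adapted projection shapes (32 layers; hidden size 4096; GQA key/value width 1024; MLP width 14336 — standard Llama-3-8B dimensions, stated here as an assumption). The sketch below takes LoRI-D to train only the $B$ factors and LoRI-S to keep 10% of them:

```python
# (d_in, d_out) for the adapted projections q, k, v, o, gate, up, down
# (assumed Llama-3-8B dims: hidden 4096, GQA k/v width 1024, MLP width 14336)
SHAPES = {
    "q": (4096, 4096), "k": (4096, 1024), "v": (4096, 1024), "o": (4096, 4096),
    "gate": (4096, 14336), "up": (4096, 14336), "down": (14336, 4096),
}
LAYERS, R = 32, 32

lora = LAYERS * sum(R * (d_in + d_out) for d_in, d_out in SHAPES.values())  # A and B
lori_d = LAYERS * sum(R * d_out for _, d_out in SHAPES.values())            # B only
lori_s = lori_d // 10                                                       # 90% sparsity

print(f"LoRA {lora/1e6:.1f}M, LoRI-D {lori_d/1e6:.1f}M, LoRI-S {lori_s/1e6:.1f}M")
```

This yields roughly 83.9M, 44.0M, and 4.4M trainable parameters, matching the 84M / 44M / 4.4M entries reported in Table 11.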
# E Additional Experimental Results

# E.1 Comparison with Additional PEFT Methods

To provide a comprehensive benchmark, we evaluate LoRI against several widely adopted parameter-efficient fine-tuning (PEFT) methods, including VeRA (Kopiczko et al., 2023), IA3 (Liu et al., 2022), LoRA-FA (Zhang et al., 2023b), AdaLoRA (Zhang et al., 2023d), rsLoRA (Kalajdzievski, 2023), PiSSA (Meng et al., 2024), LoRA+ (Hayou et al., 2024), and DoRA (Liu et al., 2024). The results, presented in Tables 11 and 12, demonstrate that our proposed methods are highly effective.

LoRI-D, which uses 44M trainable parameters (0.54% of the full model and half of LoRA's), consistently achieves state-of-the-art performance, particularly on NLU and code generation benchmarks. LoRI-S, despite its aggressive sparsity (0.05% of the full model and 5% of LoRA's), remains highly competitive and often surpasses other PEFT methods. While VeRA and IA3 are more parameter-efficient, their performance is substantially lower than that of LoRI-S. Despite their efficiency, LoRI-D and LoRI-S deliver comparable – and often superior – performance across NLU, math, code, and safety domains. These results underscore two key insights: (1) effective adaptation does not require updating the projection matrices $A$, as demonstrated by LoRI-D; and (2) the matrices $B$ contain significant redundancy that can be effectively pruned, as shown by LoRI-S.

# E.2 Results with Rank $r = 64$

We evaluate several adaptation methods using a higher adapter rank of $r = 64$ across a diverse set of tasks. This allows for more expressive adapter representations while still maintaining efficiency compared to full fine-tuning. Table 13 presents performance on eight natural language understanding (NLU) benchmarks, while Table 14 includes results on GSM8K (math), HumanEval (code), and HEx-PHI (safety).
Across Llama-3 and Mistral models, LoRI-D and LoRI-S consistently perform competitively, often outperforming larger adapter methods such as LoRA and DoRA while using fewer parameters.

Table 12: Performance comparison of different adaptation methods on GSM8K (math), HumanEval (code), and HEx-PHI (safety) benchmarks using Llama-3 with $r = 32$. **Bold** indicates the best-performing method, and **underline** indicates the second-best.
| Method | # Params (%) | GSM8K | HumanEval Pass@1 | HumanEval Pass@5 | HumanEval Pass@10 | HEx-PHI |
| --- | --- | --- | --- | --- | --- | --- |
| FFT | 8.03G (100%) | 58.8 | 30.5 | 39.3 | 41.7 | 94.8 |
| LoRA | 84M (1.03%) | 64.4 | 34.7 | 46.4 | 50.8 | 91.6 |
| VeRA | 1.38M (0.02%) | 30.6 | 32.4 | 45.1 | 50.9 | 74.7 |
| IA3 | 1.70M (0.02%) | 48.0 | 32.7 | 45.6 | 51.5 | 85.4 |
| LoRA-FA | 44M (0.54%) | 64.8 | 42.9 | 57.5 | 64.2 | 94.1 |
| AdaLoRA | 84M (1.03%) | 63.3 | 33.5 | 45.0 | 49.4 | 91.9 |
| rsLoRA | 84M (1.03%) | 61.3 | 28.4 | 35.5 | 38.3 | 98.1 |
| PiSSA | 84M (1.03%) | 61.3 | 32.0 | 40.3 | 43.3 | 97.8 |
| LoRA+ | 84M (1.03%) | 61.7 | 33.0 | 42.7 | 46.0 | 98.8 |
| DoRA | 85M (1.05%) | 65.4 | 33.1 | 44.0 | 48.6 | 93.6 |
| LoRI-D | 44M (0.54%) | 63.2 | 43.2 | 57.6 | 63.2 | 92.8 |
| LoRI-S | 4.4M (0.05%) | 62.7 | 41.3 | 54.4 | 59.6 | 93.8 |
Table 13: Performance comparison of different adaptation methods on eight natural language understanding (NLU) benchmarks using Llama-3 and Mistral with $r = 64$. **Bold** indicates the best-performing method, and **underline** indicates the second-best.
| Method | # Params (%) | BoolQ | PIQA | SIQA | ARC-c | ARC-e | OBQA | HellaS | WinoG | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Llama-3-8B | | | | | | | | | | |
| FFT | 8.03G (100%) | 73.8 | 86.8 | 77.6 | 76.7 | 87.6 | 84.1 | 93.2 | 85.1 | 83.1 |
| LoRA | 168M (2.05%) | 75.2 | 89.0 | 81.2 | 82.3 | 92.4 | 89.1 | 95.3 | 88.2 | 86.6 |
| DoRA | 169M (2.06%) | 76.4 | 89.0 | 82.0 | 82.6 | 92.3 | 87.5 | 95.1 | 87.3 | 86.5 |
| LoRI-D | 88M (1.07%) | 75.8 | 90.4 | 82.7 | 83.3 | 92.6 | 88.6 | 95.9 | 87.4 | 87.1 |
| LoRI-S | 8.8M (0.11%) | 76.5 | 90.2 | 81.9 | 83.5 | 93.8 | 87.5 | 96.2 | 87.2 | 87.1 |
| Mistral-7B | | | | | | | | | | |
| FFT | 7.24G (100%) | 74.1 | 84.6 | 78.0 | 79.3 | 90.5 | 88.4 | 94.4 | 83.5 | 84.1 |
| LoRA | 168M (2.26%) | 77.4 | 90.2 | 83.5 | 84.0 | 93.0 | 89.3 | 95.6 | 89.4 | 87.8 |
| DoRA | 169M (2.28%) | 76.0 | 90.6 | 83.5 | 83.3 | 92.8 | 89.6 | 95.7 | 87.6 | 87.4 |
| LoRI-D | 88M (1.18%) | 75.9 | 90.7 | 83.7 | 82.0 | 92.1 | 90.0 | 96.4 | 87.8 | 87.3 |
| LoRI-S | 8.8M (0.12%) | 74.2 | 90.7 | 83.5 | 83.0 | 92.6 | 89.5 | 95.8 | 89.5 | 87.3 |
Table 14: Performance comparison of different adaptation methods on GSM8K (math), HumanEval (code), and HEx-PHI (safety) benchmarks using Llama-3 and Mistral with $r = 64$. **Bold** indicates the best-performing method, and **underline** indicates the second-best.
| Method | # Params (%) | GSM8K | HumanEval Pass@1 | HumanEval Pass@5 | HumanEval Pass@10 | HEx-PHI |
| --- | --- | --- | --- | --- | --- | --- |
| Llama-3-8B | | | | | | |
| FFT | 8.03G (100%) | 58.8 | 30.5 | 39.3 | 41.7 | 94.8 |
| LoRA | 168M (2.05%) | 63.9 | 38.6 | 52.9 | 59.2 | 94.1 |
| DoRA | 169M (2.06%) | 63.8 | 39.4 | 53.6 | 59.7 | 93.4 |
| LoRI-D | 88M (1.07%) | 63.8 | 41.9 | 55.4 | 60.3 | 96.6 |
| LoRI-S | 8.8M (0.11%) | 61.8 | 44.1 | 57.4 | 62.4 | 96.3 |
| Mistral-7B | | | | | | |
| FFT | 7.24G (100%) | 55.5 | 30.5 | 39.3 | 41.7 | 94.1 |
| LoRA | 168M (2.26%) | 56.7 | 33.9 | 43.1 | 46.9 | 95.9 |
| DoRA | 169M (2.28%) | 57.8 | 32.9 | 43.3 | 47.2 | 96.6 |
| LoRI-D | 88M (1.18%) | 58.2 | 33.3 | 43.6 | 47.3 | 90.9 |
| LoRI-S | 8.8M (0.12%) | 58.4 | 32.1 | 42.2 | 46.3 | 93.4 |
+ +Table 15: Comparison of merging methods for combining four adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Mistral-7B, rank $r = 32$ . Bold indicates the best-performing method, and underline indicates the second-best. + +
| Merging | Adaptation | NLU | GSM8K | HumanEval Pass@1 | HumanEval Pass@5 | HumanEval Pass@10 | HEx-PHI |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Single-Task | LoRI-D | 87.1 | 58.0 | 33.8 | 42.0 | 45.1 | 94.7 |
| Concat | LoRA | 82.5 | 52.4 | 32.3 | 40.8 | 44.1 | 75.6 |
| Linear | LoRA | 81.4 | 48.0 | 33.1 | 41.6 | 43.9 | 76.6 |
| Magnitude | LoRA | 77.5 | 42.7 | 32.7 | 41.8 | 45.6 | 80.9 |
| TIES | LoRA | 31.3 | 23.5 | 32.0 | 40.2 | 43.5 | 81.9 |
| DARE | LoRA | 76.1 | 43.0 | 32.0 | 41.0 | 44.6 | 83.4 |
| Concat | LoRI-D | 79.3 | 52.4 | 34.4 | 42.8 | 45.5 | 83.8 |
| Linear | LoRI-D | 78.1 | 50.5 | 35.2 | 42.7 | 45.5 | 79.7 |
| Concat | LoRI-S | 79.2 | 46.1 | 33.3 | 41.6 | 45.9 | 79.4 |
| Linear | LoRI-S | 75.5 | 40.3 | 28.8 | 36.0 | 39.6 | 83.1 |
+ +Table 16: Comparison of merging methods for combining four adapters on eight NLU benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Base model: Llama-3-8B, rank $r = 32$ . **Bold** indicates the best-performing method, and **underline** indicates the second-best. + +
| Merging | Adaptation | BoolQ | PIQA | SIQA | ARC-c | ARC-e | OBQA | HellaS | WinoG | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Single-Task | LoRI-D | 76.4 | 89.0 | 82.7 | 84.2 | 93.6 | 88.5 | 95.9 | 87.9 | 87.3 |
| Concat | LoRA | 73.9 | 89.1 | 81.1 | 81.4 | 92.4 | 83.0 | 94.4 | 84.5 | 85.0 |
| Linear | LoRA | 73.7 | 88.8 | 81.1 | 80.7 | 91.6 | 84.4 | 93.9 | 84.1 | 84.8 |
| Magnitude | LoRA | 72.0 | 87.1 | 76.8 | 79.4 | 91.7 | 81.5 | 90.4 | 76.4 | 81.9 |
| TIES | LoRA | 68.2 | 83.8 | 67.3 | 69.5 | 87.8 | 69.2 | 73.3 | 61.4 | 72.6 |
| DARE | LoRA | 70.7 | 85.0 | 74.1 | 77.5 | 90.7 | 76.6 | 86.8 | 71.0 | 79.1 |
| Concat | LoRI-D | 74.0 | 87.7 | 77.8 | 81.0 | 92.4 | 81.0 | 92.7 | 78.9 | 83.2 |
| Linear | LoRI-D | 73.7 | 87.7 | 76.7 | 80.3 | 92.1 | 80.1 | 92.0 | 77.7 | 82.5 |
| Concat | LoRI-S | 71.8 | 86.2 | 76.1 | 79.2 | 91.5 | 78.6 | 89.8 | 76.3 | 81.2 |
| Linear | LoRI-S | 70.7 | 85.3 | 75.1 | 78.0 | 90.8 | 75.0 | 86.5 | 71.3 | 79.1 |
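A note on the concatenated merging compared above: stacking the factors as $[A_1 \mid A_2]$ and $[w_1 B_1;\, w_2 B_2]$ reproduces the weighted sum of the task updates $w_1 \Delta_1 + w_2 \Delta_2$ exactly (at the cost of a doubled rank), which is one reason Concat tends to be a strong baseline. A small pure-Python check with illustrative toy shapes (the weight 0.4 matches Table 16; everything else is made up):

```python
import random

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

rnd = random.Random(0)
rand = lambda n, m: [[rnd.gauss(0, 1) for _ in range(m)] for _ in range(n)]

d_in, r, d_out, w1, w2 = 6, 2, 5, 0.4, 0.4   # toy shapes; weights as in Table 16
A1, B1 = rand(d_in, r), rand(r, d_out)        # task-1 adapter, Delta_1 = A1 @ B1
A2, B2 = rand(d_in, r), rand(r, d_out)        # task-2 adapter

# Concatenated merging: [A1 | A2] @ [[w1*B1], [w2*B2]] -- rank grows to 2r
A_cat = [ra + rb for ra, rb in zip(A1, A2)]
B_cat = [[w1 * v for v in row] for row in B1] + [[w2 * v for v in row] for row in B2]
merged = matmul(A_cat, B_cat)

# ...which equals the weighted sum of the individual task updates
target = [[w1 * a + w2 * b for a, b in zip(ra, rb)]
          for ra, rb in zip(matmul(A1, B1), matmul(A2, B2))]
err = max(abs(a - b) for ra, rb in zip(merged, target) for a, b in zip(ra, rb))
print(f"max deviation: {err:.1e}")  # floating-point rounding only
```

Linear merging instead combines the factors directly at the original rank, so it cannot represent the weighted sum exactly; cross terms between tasks appear, which is consistent with its slightly lower averages in the tables above.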
# E.3 Merging Four Adapters

To support multi-task learning within a unified model, we study the merging of four task-specific adapters using various strategies. Table 15 reports results using Mistral-7B across a range of tasks. Additionally, Tables 16 and 17 break down NLU performance on individual benchmarks using Llama-3 and Mistral, respectively. We compare merging methods such as concatenated merging, linear merging, magnitude pruning, TIES, and DARE. LoRI-based approaches demonstrate strong performance and stability when merging multiple adapters.

# E.4 Merging Three Adapters

We further evaluate the merging of three adapters to understand performance when adapting to a smaller set of tasks. Tables 18 and 19 summarize the results for Llama-3 across different benchmarks. As in the four-task setting, LoRI-D remains a strong performer, often exceeding the performance of LoRA. These results highlight that LoRI-based methods remain effective across varying levels of task diversity.

# E.5 Pruning-Based Merging Methods

Finally, we explore pruning-based merging methods, which aim to compress and combine multiple adapters by selectively retaining important weights. We focus on three methods: magnitude pruning, TIES, and DARE. Results are reported for merging both four-adapter

Table 17: Comparison of merging methods for combining four adapters on eight NLU benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Base model: Mistral-7B, rank $r = 32$. Bold indicates the best-performing method, and underline indicates the second-best.
| Merging | Adaptation | BoolQ | PIQA | SIQA | ARC-c | ARC-e | OBQA | HellaS | WinoG | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Single-Task | LoRI-D | 75.9 | 90.6 | 83.0 | 83.6 | 91.9 | 88.4 | 95.9 | 87.4 | 87.1 |
| Concat | LoRA | 69.0 | 88.0 | 78.1 | 79.9 | 90.9 | 84.2 | 92.4 | 77.8 | 82.5 |
| Linear | LoRA | 69.2 | 86.9 | 77.9 | 78.5 | 90.2 | 82.1 | 91.5 | 75.1 | 81.4 |
| Magnitude | LoRA | 68.7 | 84.9 | 74.4 | 75.9 | 89.1 | 77.5 | 85.6 | 64.1 | 77.5 |
| TIES | LoRA | 18.4 | 69.8 | 40.7 | 14.0 | 21.9 | 20.1 | 14.6 | 50.9 | 31.3 |
| DARE | LoRA | 69.4 | 84.3 | 73.1 | 74.2 | 88.9 | 74.3 | 82.6 | 61.8 | 76.1 |
| Concat | LoRI-D | 68.4 | 85.9 | 75.6 | 76.6 | 89.4 | 81.3 | 85.9 | 71.1 | 79.3 |
| Linear | LoRI-D | 66.3 | 86.0 | 74.9 | 75.3 | 88.9 | 80.8 | 85.0 | 68.0 | 78.1 |
| Concat | LoRI-S | 72.6 | 85.4 | 74.6 | 76.5 | 89.7 | 80.1 | 86.0 | 68.9 | 79.2 |
| Linear | LoRI-S | 67.6 | 83.8 | 72.0 | 73.0 | 88.3 | 74.6 | 80.9 | 64.3 | 75.5 |
+ +Table 18: Comparison of merging methods for combining three adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Llama-3-8B, rank $r = 32$ . Bold indicates the best-performing method, and underline indicates the second-best. + +
| Merging | Adaptation | NLU | GSM8K | HumanEval Pass@1 | HumanEval Pass@5 | HumanEval Pass@10 |
| --- | --- | --- | --- | --- | --- | --- |
| Single-Task | LoRI-D | 87.3 | 63.2 | 43.2 | 57.6 | 63.2 |
| Concat | LoRA | 86.4 | 54.5 | 13.0 | 19.8 | 21.8 |
| Linear | LoRA | 86.1 | 51.9 | 8.8 | 14.5 | 16.7 |
| Magnitude | LoRA | 83.8 | 52.0 | 23.3 | 37.4 | 43.0 |
| TIES | LoRA | 79.4 | 26.9 | 36.3 | 48.7 | 53.7 |
| DARE | LoRA | 81.1 | 53.3 | 36.0 | 49.5 | 53.9 |
| Concat | LoRI-D | 84.8 | 59.6 | 41.5 | 56.4 | 61.6 |
| Linear | LoRI-D | 84.6 | 57.6 | 38.3 | 51.6 | 56.8 |
| Concat | LoRI-S | 83.3 | 51.8 | 31.2 | 44.6 | 49.8 |
| Linear | LoRI-S | 81.0 | 41.7 | 26.6 | 40.0 | 44.6 |
+ +Table 19: Comparison of merging methods for combining three adapters on eight NLU benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Base model: Llama-3-8B, rank $r = 32$ . **Bold** indicates the best-performing method, and **underline** indicates the second-best. + +
| Merging | Adaptation | BoolQ | PIQA | SIQA | ARC-c | ARC-e | OBQA | HellaS | WinoG | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Single-Task | LoRI-D | 76.4 | 89.0 | 82.7 | 84.2 | 93.6 | 88.5 | 95.9 | 87.9 | 87.3 |
| Concat | LoRA | 74.7 | 89.6 | 81.8 | 82.9 | 93.7 | 86.2 | 95.8 | 86.8 | 86.4 |
| Linear | LoRA | 73.9 | 89.6 | 81.4 | 81.9 | 93.5 | 85.5 | 95.6 | 87.1 | 86.1 |
| Magnitude | LoRA | 72.2 | 87.2 | 78.9 | 81.2 | 92.2 | 83.2 | 93.0 | 82.4 | 83.8 |
| TIES | LoRA | 69.5 | 84.8 | 74.0 | 78.4 | 91.2 | 77.4 | 88.8 | 71.4 | 79.4 |
| DARE | LoRA | 71.0 | 85.6 | 75.8 | 79.5 | 91.0 | 78.8 | 90.7 | 76.2 | 81.1 |
| Concat | LoRI-D | 73.8 | 89.0 | 79.8 | 81.0 | 93.0 | 83.0 | 94.6 | 84.0 | 84.8 |
| Linear | LoRI-D | 74.1 | 88.4 | 80.2 | 81.3 | 92.9 | 82.1 | 94.1 | 83.6 | 84.6 |
| Concat | LoRI-S | 70.3 | 87.2 | 79.1 | 80.8 | 92.4 | 82.1 | 93.2 | 81.3 | 83.3 |
| Linear | LoRI-S | 61.5 | 86.4 | 78.0 | 79.5 | 91.7 | 80.8 | 91.3 | 78.5 | 81.0 |
+ +Table 20: Comparison of magnitude pruning, TIES, and DARE for combining four adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Llama-3-8B, rank $r = 32$ . Bold indicates the best-performing method within each group. + +
| Merging | Adaptation | NLU | GSM8K | HumanEval Pass@1 | HumanEval Pass@5 | HumanEval Pass@10 | HEx-PHI |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Single-Task | LoRI-D | 87.3 | 63.2 | 43.2 | 57.6 | 63.2 | 92.8 |
| Magnitude | LoRA | 81.9 | 50.3 | 24.1 | 36.7 | 42.4 | 74.4 |
| Magnitude | LoRI-D | 84.3 | 50.5 | 33.3 | 45.2 | 51.4 | 85.9 |
| Magnitude | LoRI-S | 76.4 | 35.2 | 25.2 | 36.5 | 41.0 | 68.4 |
| TIES | LoRA | 72.6 | 24.0 | 32.5 | 46.3 | 51.7 | 77.8 |
| TIES | LoRI-D | 79.1 | 38.0 | 40.3 | 54.6 | 59.8 | 85.3 |
| TIES | LoRI-S | 70.4 | 25.9 | 34.6 | 48.4 | 53.2 | 77.8 |
| DARE | LoRA | 79.1 | 48.9 | 34.1 | 48.7 | 53.5 | 74.1 |
| DARE | LoRI-D | 83.4 | 52.0 | 35.4 | 51.3 | 57.8 | 81.9 |
| DARE | LoRI-S | 73.4 | 27.2 | 34.8 | 48.1 | 53.5 | 75.3 |
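The DARE rows above rely on a random drop-and-rescale of the task updates before merging: each delta parameter is zeroed with probability $p$ and the survivors are scaled by $1/(1-p)$, so the update is preserved in expectation. A minimal sketch of that operation on a synthetic flattened update (illustrative only, not the full merging pipeline):

```python
import random

def dare_drop(delta, p, rnd):
    """DARE-style drop-and-rescale: zero each entry with probability p,
    scale survivors by 1/(1-p) so the update is unchanged in expectation."""
    return [0.0 if rnd.random() < p else v / (1.0 - p) for v in delta]

rnd = random.Random(0)
delta = [rnd.gauss(0, 1) for _ in range(200_000)]   # stand-in for a flattened Delta
pruned = dare_drop(delta, p=0.3, rnd=rnd)           # density 0.7, as in Tables 20-21

# Unbiasedness check: <pruned, delta> should closely match <delta, delta>
ratio = sum(a * b for a, b in zip(pruned, delta)) / sum(v * v for v in delta)
print(round(ratio, 1))  # -> 1.0
```

Magnitude pruning, by contrast, keeps the largest-magnitude entries deterministically, and TIES additionally resolves sign conflicts across tasks before summing; all three act on the dense merged update rather than on LoRI's sparse $B$ factors, which matches the mismatch discussed in the text.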
+ +Table 21: Comparison of magnitude pruning, TIES, and DARE for combining four adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Mistral-7B, rank $r = 32$ . Bold indicates the best-performing method within each group. + +
| Merging | Adaptation | NLU | GSM8K | HumanEval Pass@1 | HumanEval Pass@5 | HumanEval Pass@10 | HEx-PHI |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Single-Task | LoRI-D | 87.1 | 58.0 | 33.8 | 42.0 | 45.1 | 94.7 |
| Magnitude | LoRA | 77.5 | 42.7 | 32.7 | 41.8 | 45.6 | 80.9 |
| Magnitude | LoRI-D | 76.0 | 41.5 | 29.0 | 36.0 | 38.7 | 79.4 |
| Magnitude | LoRI-S | 70.5 | 32.4 | 28.1 | 36.1 | 39.3 | 77.5 |
| TIES | LoRA | 31.3 | 23.5 | 32.0 | 40.2 | 43.5 | 81.9 |
| TIES | LoRI-D | 65.0 | 45.4 | 35.3 | 44.5 | 47.8 | 68.4 |
| TIES | LoRI-S | 67.8 | 32.9 | 28.6 | 37.2 | 40.8 | 78.4 |
| DARE | LoRA | 76.1 | 43.0 | 32.0 | 41.0 | 44.6 | 83.4 |
| DARE | LoRI-D | 76.2 | 42.3 | 29.2 | 37.1 | 40.7 | 89.1 |
| DARE | LoRI-S | 71.9 | 34.3 | 29.2 | 40.5 | 44.9 | 85.0 |
+ +(Tables 20 and 21) and three-adapter (Table 22) settings, using Llama-3 and Mistral as base models. LoRI-D consistently achieves strong performance across all pruning-based merging methods. However, the performance of LoRI-S is somewhat lower in these settings. This is because pruning-based methods operate on the dense $A$ matrices but not on the sparse $B$ matrices. This mismatch leads to an inconsistent pruning scheme, which can result in a loss of effectiveness. + +# F Additional Ablation Studies + +Figure 5 presents GSM8K accuracy across a grid of sparsity ratios and learning rates using Mistral-7B with rank $r = 64$ . We observe that sparse adapters require larger learning rates to train effectively. In particular, models with high sparsity (e.g., above $70\%$ ) perform best with a learning rate of $10^{-4}$ or higher. This suggests that stronger optimization is necessary to compensate for limited capacity in sparse adapters. + +In Figure 6, we analyze how sparsity is distributed across layers and projections when enforcing $90\%$ global sparsity on GSM8K. We find that feedforward (FFN) projections tend to retain more parameters – i.e., they exhibit lower sparsity – than self-attention projections. + +Table 22: Comparison of magnitude pruning, TIES, and DARE for combining three adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Llama-3-8B, rank $r = 32$ . Bold indicates the best-performing method within each group. + +
| Merging | Adaptation | NLU | GSM8K | HumanEval Pass@1 | HumanEval Pass@5 | HumanEval Pass@10 |
| --- | --- | --- | --- | --- | --- | --- |
| Single-Task | LoRI-D | 87.3 | 63.2 | 43.2 | 57.6 | 63.2 |
| Magnitude | LoRA | 83.8 | 52.0 | 23.3 | 37.4 | 43.0 |
| Magnitude | LoRI-D | 84.6 | 53.7 | 34.8 | 48.9 | 54.7 |
| Magnitude | LoRI-S | 77.8 | 36.6 | 25.5 | 38.8 | 43.8 |
| TIES | LoRA | 79.4 | 26.9 | 36.3 | 48.7 | 53.7 |
| TIES | LoRI-D | 82.1 | 42.2 | 39.2 | 52.7 | 57.7 |
| TIES | LoRI-S | 73.8 | 35.2 | 34.8 | 47.9 | 52.5 |
| DARE | LoRA | 81.1 | 53.3 | 36.0 | 49.5 | 53.9 |
| DARE | LoRI-D | 84.0 | 55.2 | 33.8 | 45.8 | 51.8 |
| DARE | LoRI-S | 75.3 | 36.6 | 36.2 | 48.9 | 53.4 |
![](images/decce04358f9b5a391f9c16d358807e18ddb72362e0e9eeae42a1176ee7a28b3.jpg)
Figure 5: GSM8K accuracy under different sparsity ratios and learning rates. Base model: Mistral-7B, rank $r = 64$.

This indicates that FFN components are more critical for effective adaptation. Additionally, sparsity decreases toward the top of the network, suggesting that higher layers are more important for task-specific specialization.

Lastly, Figure 7 explores the effect of merging weights when combining three LoRI-S adapters using concatenated and linear merging. We find a noticeable trade-off between performance on code tasks and other domains (e.g., NLU and math). Higher merging weights can improve NLU performance but tend to degrade performance on code, highlighting the challenge of balancing generalization and specialization in multi-task settings.

![](images/9ff585ba374aad4863f066455f922d0c50d62e831177f2209d9ca1607ac1bf5f.jpg)
Figure 6: Sparsity ratios across layers and projections under a $90\%$ global sparsity ratio on GSM8K. Base model: Llama-3-8B, rank $r = 32$.

![](images/20bdc69ee88bcf55951fd1160fe65f91d273cf3619437a7678ec93a6c498a9d1.jpg)
(a) Concatenated merging with LoRI-S.

![](images/22dfc59e1466fbdfc57293573fbf453e7df828c4bf384de19cc45ad356595b14.jpg)
(b) Linear merging with LoRI-S.
Figure 7: Ablation study on the effect of merging weights when combining three adapters. Base model: Llama-3-8B, rank $r = 32$.
0000000000000000000000000000000000000000..4466ff5e374890bc4c2072d4480a8d146c340105 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/76404041c0f0201eb41da8a2571b926d7fc6c696f21dd849b8ed8f5ef3dab48a.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a92a1fb5f025e24e6958ced049121ff90352aba6f4ea0c904128be16321c6d9a +size 71973 diff --git a/data/2025/2504_07xxx/2504.07448/images/83373ae876f24c07be676bc904237ed38a4c1ac6f91be338af486c4a228dd6ab.jpg b/data/2025/2504_07xxx/2504.07448/images/83373ae876f24c07be676bc904237ed38a4c1ac6f91be338af486c4a228dd6ab.jpg new file mode 100644 index 0000000000000000000000000000000000000000..e3aa641c5684c991f57652589ed919e990aa82a5 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/83373ae876f24c07be676bc904237ed38a4c1ac6f91be338af486c4a228dd6ab.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cb3b54fa47f9fb784e6cd80eedc34762255e9eb97e461b1d2a07f3f924671891 +size 55281 diff --git a/data/2025/2504_07xxx/2504.07448/images/887e55d4ad89163bde79c57cc159c33e8bdf357b9e42a7aaa489f09393a10f65.jpg b/data/2025/2504_07xxx/2504.07448/images/887e55d4ad89163bde79c57cc159c33e8bdf357b9e42a7aaa489f09393a10f65.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b0061eff514aa0403658bd527afa8c4e1f9555d2 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/887e55d4ad89163bde79c57cc159c33e8bdf357b9e42a7aaa489f09393a10f65.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7d4bc1ca1d56bd5442998096a66e2123cd165d14f0dc7e010d2a869aa9fa35f9 +size 7845 diff --git a/data/2025/2504_07xxx/2504.07448/images/8c1d20c92d0e7590d20654db0d23eee565a021dbcb006488d103caa7576dd0a8.jpg b/data/2025/2504_07xxx/2504.07448/images/8c1d20c92d0e7590d20654db0d23eee565a021dbcb006488d103caa7576dd0a8.jpg new file mode 100644 index 0000000000000000000000000000000000000000..593f868920de030fe2834c49687b4ac1df0c0a7a --- /dev/null +++ 
b/data/2025/2504_07xxx/2504.07448/images/8c1d20c92d0e7590d20654db0d23eee565a021dbcb006488d103caa7576dd0a8.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:55d642ed3425fd0e793567c0040f74d72a96bcacb0dee10926f92cba2e65f4b5 +size 54152 diff --git a/data/2025/2504_07xxx/2504.07448/images/8e018c516640803a315887d386a51f0ed1a9aa1e20c0fafea96beb17d736aeb0.jpg b/data/2025/2504_07xxx/2504.07448/images/8e018c516640803a315887d386a51f0ed1a9aa1e20c0fafea96beb17d736aeb0.jpg new file mode 100644 index 0000000000000000000000000000000000000000..ec9e872a067db5a1af41ed7b609ee2307022ac10 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/8e018c516640803a315887d386a51f0ed1a9aa1e20c0fafea96beb17d736aeb0.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:52d46a6e22e1c0d5d460e1356fd1c546e4d4ec6e0c5197c041c4a02ee0df4336 +size 35933 diff --git a/data/2025/2504_07xxx/2504.07448/images/915052e338145f3989f7a1419f8bea87e5b17317d21f547392d7f553b85db868.jpg b/data/2025/2504_07xxx/2504.07448/images/915052e338145f3989f7a1419f8bea87e5b17317d21f547392d7f553b85db868.jpg new file mode 100644 index 0000000000000000000000000000000000000000..54dec038cbf5deb37e73547bed40a6cb852211dd --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/915052e338145f3989f7a1419f8bea87e5b17317d21f547392d7f553b85db868.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:636b803bb7a58515ec8133e402cb130db14b77f2ce2148cef38dffd20c6cf6c8 +size 4676 diff --git a/data/2025/2504_07xxx/2504.07448/images/95b5585d57d1e8c4b2aa06c223ba0e4c19ae6336942774341321ecc831698689.jpg b/data/2025/2504_07xxx/2504.07448/images/95b5585d57d1e8c4b2aa06c223ba0e4c19ae6336942774341321ecc831698689.jpg new file mode 100644 index 0000000000000000000000000000000000000000..9939b57642639e673a1a004d1c6d59c31609fcd3 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/95b5585d57d1e8c4b2aa06c223ba0e4c19ae6336942774341321ecc831698689.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:1f44ab243d644fc3c2770af619e5e030c05d72e41f04a0641ebbc151750e58d2 +size 9427 diff --git a/data/2025/2504_07xxx/2504.07448/images/99c21a09e320e0a352dbdfe22541f16a85c0b86983910e3f93e2beb03b3a36e4.jpg b/data/2025/2504_07xxx/2504.07448/images/99c21a09e320e0a352dbdfe22541f16a85c0b86983910e3f93e2beb03b3a36e4.jpg new file mode 100644 index 0000000000000000000000000000000000000000..594f9f4440a973f9b6a85de6b9fc367f07e819b4 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/99c21a09e320e0a352dbdfe22541f16a85c0b86983910e3f93e2beb03b3a36e4.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c84e8bcbe41352a9608c8680ad95eadf7eb469fef7768d9b69f105835d809472 +size 12771 diff --git a/data/2025/2504_07xxx/2504.07448/images/9c9dd3534fb8ab88ff1d79ab0f5a7a4b19e18e497f8aaf38ff907498b88bc0be.jpg b/data/2025/2504_07xxx/2504.07448/images/9c9dd3534fb8ab88ff1d79ab0f5a7a4b19e18e497f8aaf38ff907498b88bc0be.jpg new file mode 100644 index 0000000000000000000000000000000000000000..00753f512b11a15fa92116841a0c0eea99623ce1 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/9c9dd3534fb8ab88ff1d79ab0f5a7a4b19e18e497f8aaf38ff907498b88bc0be.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4ee15eff85820a8ab4ba4e8b242cf0c2df9655f582dc7b47a11f405ccd715208 +size 93586 diff --git a/data/2025/2504_07xxx/2504.07448/images/9ff585ba374aad4863f066455f922d0c50d62e831177f2209d9ca1607ac1bf5f.jpg b/data/2025/2504_07xxx/2504.07448/images/9ff585ba374aad4863f066455f922d0c50d62e831177f2209d9ca1607ac1bf5f.jpg new file mode 100644 index 0000000000000000000000000000000000000000..fc4d4b77863c85e8da4800122c9b712c977f94c7 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/9ff585ba374aad4863f066455f922d0c50d62e831177f2209d9ca1607ac1bf5f.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aa4f617f5229c4833826f3f5a83f0f2bccb90f6b70103e705cbc7bd86594db7b +size 49159 diff --git 
a/data/2025/2504_07xxx/2504.07448/images/a518001fc0b6141aca0a4e7a8419b0ff16c8c644e5d5da0e35c233c641cf8281.jpg b/data/2025/2504_07xxx/2504.07448/images/a518001fc0b6141aca0a4e7a8419b0ff16c8c644e5d5da0e35c233c641cf8281.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b5496a4277a2dc06ff2ff888e64eb40e0a0e9418 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/a518001fc0b6141aca0a4e7a8419b0ff16c8c644e5d5da0e35c233c641cf8281.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0c2f1c85c85bfcaebbb06896b9b8777ccd0c6eb1311d40e028fcc2538ad6a7f2 +size 4588 diff --git a/data/2025/2504_07xxx/2504.07448/images/a9587bb9a047f741a1aad793265a30edeb10f5c174f974a01bc4155d2c385d2f.jpg b/data/2025/2504_07xxx/2504.07448/images/a9587bb9a047f741a1aad793265a30edeb10f5c174f974a01bc4155d2c385d2f.jpg new file mode 100644 index 0000000000000000000000000000000000000000..94b170c3266952d88c43d271595200d2505dba66 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/a9587bb9a047f741a1aad793265a30edeb10f5c174f974a01bc4155d2c385d2f.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3927b4539176ff2ce92b7206f08eb930bfab9fac7ed22f57e00689da61707a7d +size 27435 diff --git a/data/2025/2504_07xxx/2504.07448/images/bcecb925b480f17b7a5d22c03c23ec8dfce0886aed9b4aa7b0d70110ed4695d0.jpg b/data/2025/2504_07xxx/2504.07448/images/bcecb925b480f17b7a5d22c03c23ec8dfce0886aed9b4aa7b0d70110ed4695d0.jpg new file mode 100644 index 0000000000000000000000000000000000000000..3e1c073f08169e1355a5f8355819bd96535d0791 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/bcecb925b480f17b7a5d22c03c23ec8dfce0886aed9b4aa7b0d70110ed4695d0.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ebc2bc1aad668236bd34f547d1585c579b4b250c0349fad9409300dfb6b1f8d +size 39760 diff --git a/data/2025/2504_07xxx/2504.07448/images/bdd09b57284eea17153f4ed2273ef3fe864860dc04f9683f92fb0b576fccca3a.jpg 
b/data/2025/2504_07xxx/2504.07448/images/bdd09b57284eea17153f4ed2273ef3fe864860dc04f9683f92fb0b576fccca3a.jpg new file mode 100644 index 0000000000000000000000000000000000000000..ee75a09df1c2d9b31c35bcc8f1649765e4e5da90 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/bdd09b57284eea17153f4ed2273ef3fe864860dc04f9683f92fb0b576fccca3a.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:01aca0960fc6328ea31f9583a1083dfdf62dea68ea9012680ec145395f136e48 +size 15064 diff --git a/data/2025/2504_07xxx/2504.07448/images/c7781c2ddb543e490b58ea53ffcfbe9d05d098b5eb71dd9d376b6fa459d7b8bf.jpg b/data/2025/2504_07xxx/2504.07448/images/c7781c2ddb543e490b58ea53ffcfbe9d05d098b5eb71dd9d376b6fa459d7b8bf.jpg new file mode 100644 index 0000000000000000000000000000000000000000..ecba6db7bc7dfa01944c00b8d8be92c87a0634a9 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/c7781c2ddb543e490b58ea53ffcfbe9d05d098b5eb71dd9d376b6fa459d7b8bf.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7adde791bc42e02c2bc98706ada092e646aecbea500bd3e64c1e23512206e494 +size 13964 diff --git a/data/2025/2504_07xxx/2504.07448/images/c7ed5a53c1b7e2f2aed88b13b5470bca2d55f38fd8dc214c3eb9192c77c5cf11.jpg b/data/2025/2504_07xxx/2504.07448/images/c7ed5a53c1b7e2f2aed88b13b5470bca2d55f38fd8dc214c3eb9192c77c5cf11.jpg new file mode 100644 index 0000000000000000000000000000000000000000..ccee009bb5e2d93037e49d2a03e2d3b487c47318 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/c7ed5a53c1b7e2f2aed88b13b5470bca2d55f38fd8dc214c3eb9192c77c5cf11.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bfd520aab2c6f31bfcbc4f456c4d833c8f72b304fde449e8090864ae6d8fdf61 +size 36756 diff --git a/data/2025/2504_07xxx/2504.07448/images/ca6aeb4790d9f9c505b0ee6c3708e572fffbdc9654a9148680d7bad7e634f703.jpg b/data/2025/2504_07xxx/2504.07448/images/ca6aeb4790d9f9c505b0ee6c3708e572fffbdc9654a9148680d7bad7e634f703.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..e81863bf08f52612ddaccf2ba41c400b34630b86 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/ca6aeb4790d9f9c505b0ee6c3708e572fffbdc9654a9148680d7bad7e634f703.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:656d8bcd6d24b3210b6dcddbf8bd1e6d9402c97cb5614d96a31994f6de86a72b +size 8543 diff --git a/data/2025/2504_07xxx/2504.07448/images/cc48563b293a51b992331ffe8bfca654694a8f3286cd115d964d729a9d97b698.jpg b/data/2025/2504_07xxx/2504.07448/images/cc48563b293a51b992331ffe8bfca654694a8f3286cd115d964d729a9d97b698.jpg new file mode 100644 index 0000000000000000000000000000000000000000..57601d821fd55b7468141231412f1b01af9263c3 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/cc48563b293a51b992331ffe8bfca654694a8f3286cd115d964d729a9d97b698.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2f7deb23354aaee4ab21da6b4c9c0c57f149df695406efacbed196c9503447ee +size 5168 diff --git a/data/2025/2504_07xxx/2504.07448/images/d070a3c1b9f7ec4b03797f93dae14ef188a5af61d1ac4e5037057f00332a5fe2.jpg b/data/2025/2504_07xxx/2504.07448/images/d070a3c1b9f7ec4b03797f93dae14ef188a5af61d1ac4e5037057f00332a5fe2.jpg new file mode 100644 index 0000000000000000000000000000000000000000..9d69c5872d8c3310cdcead60a1fb76b342d3b56f --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/d070a3c1b9f7ec4b03797f93dae14ef188a5af61d1ac4e5037057f00332a5fe2.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a211f791a54f143a8f8e1097c549eb343b10303a3dbcad8b454c30933171af94 +size 80835 diff --git a/data/2025/2504_07xxx/2504.07448/images/dea191fa48023f37272a1a49db3c1212719759aabc6031117e5a8d2063f6b2fd.jpg b/data/2025/2504_07xxx/2504.07448/images/dea191fa48023f37272a1a49db3c1212719759aabc6031117e5a8d2063f6b2fd.jpg new file mode 100644 index 0000000000000000000000000000000000000000..6d94e4e4613fca9ea519a163fdefe700b705a2d7 --- /dev/null +++ 
b/data/2025/2504_07xxx/2504.07448/images/dea191fa48023f37272a1a49db3c1212719759aabc6031117e5a8d2063f6b2fd.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f572364f0e8c9d471883342b98a00faca56202a880a0cfa74dadffb818fd8890 +size 88915 diff --git a/data/2025/2504_07xxx/2504.07448/images/decce04358f9b5a391f9c16d358807e18ddb72362e0e9eeae42a1176ee7a28b3.jpg b/data/2025/2504_07xxx/2504.07448/images/decce04358f9b5a391f9c16d358807e18ddb72362e0e9eeae42a1176ee7a28b3.jpg new file mode 100644 index 0000000000000000000000000000000000000000..9748fb243308a86712502dc17bdfe9161db85bdd --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/decce04358f9b5a391f9c16d358807e18ddb72362e0e9eeae42a1176ee7a28b3.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83cd0d599d267bf529b422ebdad006803e855f31f527d6723adcdeaba6296482 +size 45363 diff --git a/data/2025/2504_07xxx/2504.07448/images/df2db27ced015225db70179c581e419f0d47043e07a3ed6e710165c4c3fddaa2.jpg b/data/2025/2504_07xxx/2504.07448/images/df2db27ced015225db70179c581e419f0d47043e07a3ed6e710165c4c3fddaa2.jpg new file mode 100644 index 0000000000000000000000000000000000000000..802edec9161244a97fec52935364a9e2098ea95c --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/df2db27ced015225db70179c581e419f0d47043e07a3ed6e710165c4c3fddaa2.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c4ea8ad242a5d68ccf4b07ae3a664b6c757cffe5c67412dc3bc91fd2ba929ea1 +size 67593 diff --git a/data/2025/2504_07xxx/2504.07448/images/ecba6d86706578988051097c268b9233b21acc95c6390c7a9222cdc015973279.jpg b/data/2025/2504_07xxx/2504.07448/images/ecba6d86706578988051097c268b9233b21acc95c6390c7a9222cdc015973279.jpg new file mode 100644 index 0000000000000000000000000000000000000000..e945b670763eeba3866ac467a708d3c5653af7c1 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/ecba6d86706578988051097c268b9233b21acc95c6390c7a9222cdc015973279.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:d56f274743c78c3e3a5991400f253f9376e3ba0f225d1c74f86c0e57373e1a31 +size 7390 diff --git a/data/2025/2504_07xxx/2504.07448/images/f263117ba513fd9176a815e32e7332251515c8cccf9b4d1dd203f5e174f6ace9.jpg b/data/2025/2504_07xxx/2504.07448/images/f263117ba513fd9176a815e32e7332251515c8cccf9b4d1dd203f5e174f6ace9.jpg new file mode 100644 index 0000000000000000000000000000000000000000..c27d939f348e32df50fc731d5e7da87e813542e0 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/f263117ba513fd9176a815e32e7332251515c8cccf9b4d1dd203f5e174f6ace9.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b655a122e1568085494fadcc8da3cc538c79ecbff6bf021929075404d123bba +size 9034 diff --git a/data/2025/2504_07xxx/2504.07448/images/f880602a047217b3862f3cabe79e6da7bcf3dc974df10a60d32fcc512581142f.jpg b/data/2025/2504_07xxx/2504.07448/images/f880602a047217b3862f3cabe79e6da7bcf3dc974df10a60d32fcc512581142f.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a0d5c6e62c0a65f11e2d5faee0bec657452513c3 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/images/f880602a047217b3862f3cabe79e6da7bcf3dc974df10a60d32fcc512581142f.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:db7630c2ecd41333c3cd94271c463c4bab19ade754fd817620c9cc7818c97514 +size 58037 diff --git a/data/2025/2504_07xxx/2504.07448/layout.json b/data/2025/2504_07xxx/2504.07448/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..6c2797f2f0c302bf0bfc719129649eb206499596 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07448/layout.json @@ -0,0 +1,16840 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 105, + 78, + 504, + 111 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 78, + 504, + 111 + ], + "spans": [ + { + "bbox": [ + 105, + 78, + 504, + 111 + ], + "type": "text", + "content": "LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank 
Adaptation" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 111, + 131, + 459, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 131, + 459, + 144 + ], + "spans": [ + { + "bbox": [ + 111, + 131, + 459, + 144 + ], + "type": "text", + "content": "Juzheng Zhang" + }, + { + "bbox": [ + 111, + 131, + 459, + 144 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 111, + 131, + 459, + 144 + ], + "type": "text", + "content": ", Jiacheng You" + }, + { + "bbox": [ + 111, + 131, + 459, + 144 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 111, + 131, + 459, + 144 + ], + "type": "text", + "content": ", Ashwinee Panda" + }, + { + "bbox": [ + 111, + 131, + 459, + 144 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 111, + 131, + 459, + 144 + ], + "type": "text", + "content": ", Tom Goldstein" + }, + { + "bbox": [ + 111, + 131, + 459, + 144 + ], + "type": "inline_equation", + "content": "^{1}" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 111, + 144, + 332, + 157 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 144, + 332, + 157 + ], + "spans": [ + { + "bbox": [ + 111, + 144, + 332, + 157 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 111, + 144, + 332, + 157 + ], + "type": "text", + "content": "University of Maryland " + }, + { + "bbox": [ + 111, + 144, + 332, + 157 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 111, + 144, + 332, + 157 + ], + "type": "text", + "content": "Tsinghua University" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 280, + 185, + 330, + 198 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 280, + 185, + 330, + 198 + ], + "spans": [ + { + "bbox": [ + 280, + 185, + 330, + 198 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 140, + 210, + 470, + 397 + ], + "type": "text", + 
"angle": 0, + "lines": [ + { + "bbox": [ + 140, + 210, + 470, + 397 + ], + "spans": [ + { + "bbox": [ + 140, + 210, + 470, + 397 + ], + "type": "text", + "content": "Low-Rank Adaptation (LoRA) has emerged as a popular parameter-efficient fine-tuning (PEFT) method for Large Language Models (LLMs), yet it still incurs notable overhead and suffers from parameter interference in multi-task scenarios. We propose LoRA with Reduced Interference (LoRI), a simple yet effective approach that freezes the projection matrices " + }, + { + "bbox": [ + 140, + 210, + 470, + 397 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 140, + 210, + 470, + 397 + ], + "type": "text", + "content": " as random projections and sparsifies the matrices " + }, + { + "bbox": [ + 140, + 210, + 470, + 397 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 140, + 210, + 470, + 397 + ], + "type": "text", + "content": " using task-specific masks. This design substantially reduces the number of trainable parameters while maintaining strong task performance. Moreover, LoRI minimizes cross-task interference in adapter merging by leveraging the orthogonality between adapter subspaces, and supports continual learning by using sparsity to mitigate catastrophic forgetting. Extensive experiments across natural language understanding, mathematical reasoning, code generation, and safety alignment tasks demonstrate that LoRI outperforms full fine-tuning and existing PEFT methods, while using up to " + }, + { + "bbox": [ + 140, + 210, + 470, + 397 + ], + "type": "inline_equation", + "content": "95\\%" + }, + { + "bbox": [ + 140, + 210, + 470, + 397 + ], + "type": "text", + "content": " fewer trainable parameters than LoRA. In multi-task experiments, LoRI enables effective adapter merging and continual learning with reduced cross-task interference. Code is available at: https://github.com/juzhengz/LoRI." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 418, + 195, + 431 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 418, + 195, + 431 + ], + "spans": [ + { + "bbox": [ + 105, + 418, + 195, + 431 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 443, + 506, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 443, + 506, + 567 + ], + "spans": [ + { + "bbox": [ + 104, + 443, + 506, + 567 + ], + "type": "text", + "content": "Large language models (LLMs) (Brown et al., 2020; Touvron et al., 2023; Chowdhery et al., 2023) have transformed deep learning, showcasing remarkable capabilities across various domains. However, their deployment remains computationally demanding, particularly when fine-tuning is required to adapt to downstream tasks or align with human preferences. To mitigate the high resource costs, researchers have developed a range of parameter-efficient fine-tuning (PEFT) techniques. Among these techniques, LoRA (Hu et al., 2021) has gained widespread adoption due to its compelling balance of performance and efficiency. Nevertheless, LoRA still introduces notable memory overhead, particularly in large-scale models. Consequently, recent research has focused on further optimizing LoRA by reducing the number of trainable parameters without compromising performance (Kopiczko et al., 2023; Ding et al., 2023; Zhang et al., 2023b)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "spans": [ + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "text", + "content": "Recent studies (Yu et al., 2024; Panda et al., 2024) have shown that delta parameters – the differences between fine-tuned and pretrained model weights – exhibit significant redundancy. 
Furthermore, previous works (Zhang et al., 2023b; Zhu et al., 2024) have observed that freezing matrices " + }, + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "text", + "content": " in LoRA often achieves comparable performance to training them. Motivated by these findings, we propose LoRA with Reduced Interference (LoRI). LoRI keeps matrices " + }, + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "text", + "content": " fixed as random projections, while training matrices " + }, + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "text", + "content": " using task-specific sparse masks. To retain the most critical elements of " + }, + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "text", + "content": ", LoRI performs a calibration process to extract sparse masks by selecting the highest-magnitude elements across all layers and projections. As shown in Figure 1(a), LoRI maintains performance even with " + }, + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "inline_equation", + "content": "90\\%" + }, + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "text", + "content": " sparsity in " + }, + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "text", + "content": " while keeping " + }, + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "text", + "content": " frozen. 
This demonstrates that adaptation does not require updating " + }, + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "text", + "content": ", and that " + }, + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 570, + 506, + 715 + ], + "type": "text", + "content": " has considerable redundancy. By applying more constrained updates than LoRA, LoRI significantly reduces the number of trainable parameters while better preserving the pretrained model's knowledge during adaptation." + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 25, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 25, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 25, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 14, + 223, + 37, + 567 + ], + "type": "aside_text", + "angle": 270, + "lines": [ + { + "bbox": [ + 14, + 223, + 37, + 567 + ], + "spans": [ + { + "bbox": [ + 14, + 223, + 37, + 567 + ], + "type": "text", + "content": "arXiv:2504.07448v2 [cs.LG] 2 Aug 2025" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 121, + 721, + 274, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 121, + 721, + 274, + 732 + ], + "spans": [ + { + "bbox": [ + 121, + 721, + 274, + 732 + ], + "type": "text", + "content": "Correspondence to: juzheng@umd.edu." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "text", + "content": "1" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 115, + 83, + 500, + 199 + ], + "blocks": [ + { + "bbox": [ + 115, + 83, + 500, + 199 + ], + "lines": [ + { + "bbox": [ + 115, + 83, + 500, + 199 + ], + "spans": [ + { + "bbox": [ + 115, + 83, + 500, + 199 + ], + "type": "image", + "image_path": "007808f5857139c08bd5f92f5d6236e77444fe95cba69227193b1d3c7308caee.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 206, + 504, + 277 + ], + "lines": [ + { + "bbox": [ + 104, + 206, + 504, + 277 + ], + "spans": [ + { + "bbox": [ + 104, + 206, + 504, + 277 + ], + "type": "text", + "content": "Figure 1: (a) Varying sparsity ratios in matrices " + }, + { + "bbox": [ + 104, + 206, + 504, + 277 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 206, + 504, + 277 + ], + "type": "text", + "content": " while freezing " + }, + { + "bbox": [ + 104, + 206, + 504, + 277 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 104, + 206, + 504, + 277 + ], + "type": "text", + "content": ". Performance remains stable even at " + }, + { + "bbox": [ + 104, + 206, + 504, + 277 + ], + "type": "inline_equation", + "content": "90\\%" + }, + { + "bbox": [ + 104, + 206, + 504, + 277 + ], + "type": "text", + "content": " sparsity in matrices " + }, + { + "bbox": [ + 104, + 206, + 504, + 277 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 206, + 504, + 277 + ], + "type": "text", + "content": ". (b) Merging three adapters via weighted averaging. 
LoRA suffers degradation due to parameter interference, while LoRI preserves task performance. (c) Continual learning from Safety to NLU. LoRA suffers from catastrophic forgetting, while LoRI retains safety alignment. Results for NLU are averaged over eight tasks. GSM8K accuracy (Math), HumanEval pass@10 (Code), and HEx-PHI refusal rate (Safety) are reported individually. Base model: Llama-3-8B, rank " + }, + { + "bbox": [ + 104, + 206, + 504, + 277 + ], + "type": "inline_equation", + "content": "r = 32" + }, + { + "bbox": [ + 104, + 206, + 504, + 277 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 291, + 506, + 458 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 291, + 506, + 458 + ], + "spans": [ + { + "bbox": [ + 104, + 291, + 506, + 458 + ], + "type": "text", + "content": "Multi-task learning is essential for enabling versatile models with multi-task capabilities, which is traditionally performed via joint training on a combination of task-specific datasets (Caruana, 1997; Sener & Koltun, 2018). However, training large models on this data mixture is prohibitively expensive in terms of time and compute. Model merging is a training-free alternative for building powerful models by combining existing ones (Ilharco et al., 2022; Yadav et al., 2023; Yu et al., 2024). This approach is well-suited for merging LoRA adapters, enabling multi-task capabilities within a single model during inference (Wang et al., 2024a; Prabhakar et al., 2024; Stoica et al., 2024). However, as shown in Figure 1(b), directly merging heterogeneous LoRAs often results in parameter interference, leading to degraded performance compared to single-task LoRAs. Additionally, many existing merging methods require trial-and-error to identify the optimal method for a specific combination of tasks. 
LoRI addresses these challenges by using fixed, randomly initialized projection " + }, + { + "bbox": [ + 104, + 291, + 506, + 458 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 104, + 291, + 506, + 458 + ], + "type": "text", + "content": ", which maps task-specific adapters into approximately orthogonal subspaces. This reduces interference when merging multiple adapters. In addition, LoRI enables adapter merging without manual selection of merging methods." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 462, + 504, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 462, + 504, + 594 + ], + "spans": [ + { + "bbox": [ + 104, + 462, + 504, + 594 + ], + "type": "text", + "content": "Beyond multi-tasking, safety-critical scenarios require that each newly introduced adapter enhances model capabilities while preserving the safety alignment of the pretrained base model (Qi et al., 2023). LoRI provides a lightweight continual learning approach for adapting models while preserving safety, where training is performed sequentially across tasks (Lopez-Paz & Ranzato, 2017; Wu et al., 2022; Ouyang et al., 2022). The strategy involves first fine-tuning an adapter on safety data to establish alignment, followed by separate adaptation to each downstream task. However, as illustrated in Figure 1(c), continual learning often leads to catastrophic forgetting (Li & Hoiem, 2017; Dong et al., 2023; Luo et al., 2023), wherein the adaptation to new tasks substantially compromises previously acquired knowledge. LoRI mitigates forgetting by leveraging the sparsity of projection " + }, + { + "bbox": [ + 104, + 462, + 504, + 594 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 462, + 504, + 594 + ], + "type": "text", + "content": " through task-specific masks. 
This isolation of parameter updates across tasks facilitates continual learning with minimal interference, preserving both safety and task effectiveness." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 600, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 600, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 600, + 506, + 733 + ], + "type": "text", + "content": "To evaluate the effectiveness of LoRI, we conduct extensive experiments across a diverse suite of benchmarks spanning natural language understanding (NLU), mathematical reasoning, code generation, and safety alignment tasks. Using Llama-3-8B and Mistral-7B as base models, our results show that LoRI achieves performance comparable to - or better than - full fine-tuning (FFT), LoRA, and other PEFT methods, while using up to " + }, + { + "bbox": [ + 104, + 600, + 506, + 733 + ], + "type": "inline_equation", + "content": "95\\%" + }, + { + "bbox": [ + 104, + 600, + 506, + 733 + ], + "type": "text", + "content": " fewer trainable parameters than LoRA. Notably, LoRI with " + }, + { + "bbox": [ + 104, + 600, + 506, + 733 + ], + "type": "inline_equation", + "content": "90\\%" + }, + { + "bbox": [ + 104, + 600, + 506, + 733 + ], + "type": "text", + "content": " sparsity in " + }, + { + "bbox": [ + 104, + 600, + 506, + 733 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 600, + 506, + 733 + ], + "type": "text", + "content": " surpasses LoRA by " + }, + { + "bbox": [ + 104, + 600, + 506, + 733 + ], + "type": "inline_equation", + "content": "17.3\\%" + }, + { + "bbox": [ + 104, + 600, + 506, + 733 + ], + "type": "text", + "content": " on HumanEval with Llama-3. Beyond single-task adaptation, we evaluate LoRI in multi-task settings, including adapter merging and continual learning scenarios. 
Concatenated merging of LoRI adapters consistently outperforms LoRA adapters overall, closely matching the performance of the single-task LoRA baseline. In continual learning, LoRI significantly outperforms LoRA in mitigating catastrophic forgetting of safety alignment, while maintaining strong performance on downstream tasks." + } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 111, + 81, + 233, + 178 + ], + "blocks": [ + { + "bbox": [ + 111, + 81, + 233, + 178 + ], + "lines": [ + { + "bbox": [ + 111, + 81, + 233, + 178 + ], + "spans": [ + { + "bbox": [ + 111, + 81, + 233, + 178 + ], + "type": "image", + "image_path": "99c21a09e320e0a352dbdfe22541f16a85c0b86983910e3f93e2beb03b3a36e4.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 137, + 183, + 206, + 194 + ], + "lines": [ + { + "bbox": [ + 137, + 183, + 206, + 194 + ], + "spans": [ + { + "bbox": [ + 137, + 183, + 206, + 194 + ], + "type": "text", + "content": "(a) LoRI method." 
+ } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 242, + 80, + 367, + 178 + ], + "blocks": [ + { + "bbox": [ + 242, + 80, + 367, + 178 + ], + "lines": [ + { + "bbox": [ + 242, + 80, + 367, + 178 + ], + "spans": [ + { + "bbox": [ + 242, + 80, + 367, + 178 + ], + "type": "image", + "image_path": "2c220aa1e804e1e987d6e39cec73c1e11728da9de51a27e30e19b0b8fd4b34a9.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 269, + 182, + 341, + 195 + ], + "lines": [ + { + "bbox": [ + 269, + 182, + 341, + 195 + ], + "spans": [ + { + "bbox": [ + 269, + 182, + 341, + 195 + ], + "type": "text", + "content": "(b) LoRI merging." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 372, + 80, + 504, + 179 + ], + "blocks": [ + { + "bbox": [ + 372, + 80, + 504, + 179 + ], + "lines": [ + { + "bbox": [ + 372, + 80, + 504, + 179 + ], + "spans": [ + { + "bbox": [ + 372, + 80, + 504, + 179 + ], + "type": "image", + "image_path": "621a497f2b234a3394733086b28a12dee2b8e030d8bf96f6caeb066368484c15.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 383, + 182, + 494, + 195 + ], + "lines": [ + { + "bbox": [ + 383, + 182, + 494, + 195 + ], + "spans": [ + { + "bbox": [ + 383, + 182, + 494, + 195 + ], + "type": "text", + "content": "(c) LoRI continual learning." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 104, + 202, + 504, + 245 + ], + "lines": [ + { + "bbox": [ + 104, + 202, + 504, + 245 + ], + "spans": [ + { + "bbox": [ + 104, + 202, + 504, + 245 + ], + "type": "text", + "content": "Figure 2: Overview of the proposed LoRI method. 
(a) LoRI freezes the projection matrices " + }, + { + "bbox": [ + 104, + 202, + 504, + 245 + ], + "type": "inline_equation", + "content": "A_{t}" + }, + { + "bbox": [ + 104, + 202, + 504, + 245 + ], + "type": "text", + "content": " and sparsely updates " + }, + { + "bbox": [ + 104, + 202, + 504, + 245 + ], + "type": "inline_equation", + "content": "B_{t}" + }, + { + "bbox": [ + 104, + 202, + 504, + 245 + ], + "type": "text", + "content": " using task-specific masks " + }, + { + "bbox": [ + 104, + 202, + 504, + 245 + ], + "type": "inline_equation", + "content": "M_{t}" + }, + { + "bbox": [ + 104, + 202, + 504, + 245 + ], + "type": "text", + "content": ". (b) LoRI enables adapter merging of multiple task-specific adapters with reduced parameter interference. (c) LoRI builds safety adapters by continual learning with reduced catastrophic forgetting." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 265, + 170, + 277 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 265, + 170, + 277 + ], + "spans": [ + { + "bbox": [ + 105, + 265, + 170, + 277 + ], + "type": "text", + "content": "2 Method" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 292, + 373, + 304 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 292, + 373, + 304 + ], + "spans": [ + { + "bbox": [ + 104, + 292, + 373, + 304 + ], + "type": "text", + "content": "2.1 Freezing Low-Rank Projections with Sparse Masking" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 312, + 504, + 361 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 312, + 504, + 361 + ], + "spans": [ + { + "bbox": [ + 104, + 312, + 504, + 361 + ], + "type": "text", + "content": "Freezing Projection " + }, + { + "bbox": [ + 104, + 312, + 504, + 361 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 104, + 312, + 504, + 361 + ], + "type": "text", + 
"content": ". LoRA (Hu et al., 2021) fine-tunes a weight update matrix as a product of two low-rank matrices to adapt LLMs to new tasks. Formally, for a specific task " + }, + { + "bbox": [ + 104, + 312, + 504, + 361 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 104, + 312, + 504, + 361 + ], + "type": "text", + "content": ", given a pretrained weight matrix " + }, + { + "bbox": [ + 104, + 312, + 504, + 361 + ], + "type": "inline_equation", + "content": "W_0 \\in \\mathbb{R}^{d_{\\mathrm{in}} \\times d_{\\mathrm{out}}}" + }, + { + "bbox": [ + 104, + 312, + 504, + 361 + ], + "type": "text", + "content": ", the weight update " + }, + { + "bbox": [ + 104, + 312, + 504, + 361 + ], + "type": "inline_equation", + "content": "\\Delta_t \\in \\mathbb{R}^{d_{\\mathrm{in}} \\times d_{\\mathrm{out}}}" + }, + { + "bbox": [ + 104, + 312, + 504, + 361 + ], + "type": "text", + "content": " is constrained to a low-rank decomposition:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 233, + 366, + 504, + 380 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 233, + 366, + 504, + 380 + ], + "spans": [ + { + "bbox": [ + 233, + 366, + 504, + 380 + ], + "type": "interline_equation", + "content": "h = x W _ {0} + x \\Delta_ {t} = x W _ {0} + x A _ {t} B _ {t}. 
\\tag {1}", + "image_path": "a518001fc0b6141aca0a4e7a8419b0ff16c8c644e5d5da0e35c233c641cf8281.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 387, + 504, + 422 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 387, + 504, + 422 + ], + "spans": [ + { + "bbox": [ + 104, + 387, + 504, + 422 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 104, + 387, + 504, + 422 + ], + "type": "inline_equation", + "content": "A_{t} \\in \\mathbb{R}^{d_{\\mathrm{in}} \\times r}" + }, + { + "bbox": [ + 104, + 387, + 504, + 422 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 104, + 387, + 504, + 422 + ], + "type": "inline_equation", + "content": "B_{t} \\in \\mathbb{R}^{r \\times d_{\\mathrm{out}}}" + }, + { + "bbox": [ + 104, + 387, + 504, + 422 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 104, + 387, + 504, + 422 + ], + "type": "inline_equation", + "content": "r \\ll \\min\\{d_{\\mathrm{in}}, d_{\\mathrm{out}}\\}" + }, + { + "bbox": [ + 104, + 387, + 504, + 422 + ], + "type": "text", + "content": ". We denote " + }, + { + "bbox": [ + 104, + 387, + 504, + 422 + ], + "type": "inline_equation", + "content": "\\Delta_t" + }, + { + "bbox": [ + 104, + 387, + 504, + 422 + ], + "type": "text", + "content": " as the LoRA adapter for task " + }, + { + "bbox": [ + 104, + 387, + 504, + 422 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 104, + 387, + 504, + 422 + ], + "type": "text", + "content": ". In practice, LoRA adapters are typically applied to multiple projection matrices (e.g., " + }, + { + "bbox": [ + 104, + 387, + 504, + 422 + ], + "type": "inline_equation", + "content": "W_q, W_v" + }, + { + "bbox": [ + 104, + 387, + 504, + 422 + ], + "type": "text", + "content": ") within each transformer layer." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "spans": [ + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "text", + "content": "Typically, the low-rank projection matrices " + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "inline_equation", + "content": "A_{t}" + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "text", + "content": " and the low-rank expansion matrices " + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "inline_equation", + "content": "B_{t}" + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "text", + "content": " are updated via gradient descent. Matrices " + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "inline_equation", + "content": "A_{t}" + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "text", + "content": " are usually initialized with Kaiming Uniform distribution (He et al., 2015), while matrices " + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "inline_equation", + "content": "B_{t}" + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "text", + "content": " are initialized to zero, ensuring that " + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "inline_equation", + "content": "\\Delta_{t} = 0" + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "text", + "content": " at the start of training. 
However, in LoRI, we fix " + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "inline_equation", + "content": "A_{t}" + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "text", + "content": " as random projections, meaning that the model only learns how to combine the fixed subspace via " + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "inline_equation", + "content": "B_{t}" + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "text", + "content": ". By freezing " + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "inline_equation", + "content": "A_{t}" + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "text", + "content": ", we eliminate the need to store their gradients and optimizer states, thereby reducing memory consumption. During inference, similar to LoRA, LoRI merges the low-rank updates by adding " + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "inline_equation", + "content": "A_{t}B_{t}" + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "inline_equation", + "content": "W_{0}" + }, + { + "bbox": [ + 104, + 426, + 506, + 517 + ], + "type": "text", + "content": ", ensuring no additional inference latency compared to full fine-tuning." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "spans": [ + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "text", + "content": "Sparse Masking for Projection " + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "text", + "content": ". 
LoRI freezes matrices " + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "inline_equation", + "content": "A_{t}" + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "text", + "content": " and selectively updates only the most relevant parameters in " + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "inline_equation", + "content": "B_{t}" + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "text", + "content": " for each task, as illustrated in Figure 2(a). For task " + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "text", + "content": ", it first extracts sparse masks " + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "inline_equation", + "content": "M_{t}" + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "text", + "content": " through a calibration process, then applies the masks to constrain training to a limited subset of parameters in " + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "inline_equation", + "content": "B_{t}" + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "text", + "content": ". During mask calibration, LoRI updates " + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "inline_equation", + "content": "B_{t}" + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "text", + "content": " without masking using a calibration dataset " + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_t^C" + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "text", + "content": ", sampled from the adaptation dataset " + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_t" + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "text", + "content": ". 
After this phase, LoRI collects all " + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "inline_equation", + "content": "B_{t}" + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "text", + "content": " matrices from the model across layers and projections. Then it computes a global threshold " + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "inline_equation", + "content": "\\tau_t" + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "text", + "content": ", defined as the " + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "inline_equation", + "content": "s\\%" + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "text", + "content": " quantile of the absolute values of all elements from these matrices, where " + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "text", + "content": " is the sparsity ratio. For each matrix " + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "inline_equation", + "content": "B_{t}" + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "text", + "content": ", the corresponding sparse mask " + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "inline_equation", + "content": "M_{t}" + }, + { + "bbox": [ + 104, + 529, + 506, + 631 + ], + "type": "text", + "content": " is computed as:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 181, + 638, + 504, + 658 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 181, + 638, + 504, + 658 + ], + "spans": [ + { + "bbox": [ + 181, + 638, + 504, + 658 + ], + "type": "interline_equation", + "content": "M _ {t} = \\mathbb {I} \\left(\\left| B _ {t} \\right| \\geq \\tau_ {t}\\right), \\quad \\text {where} \\quad \\tau_ {t} = \\operatorname {Quantile} _ {s} \\left(\\bigcup \\left| B _ {t} \\right|\\right). 
\\tag {2}", + "image_path": "ecba6d86706578988051097c268b9233b21acc95c6390c7a9222cdc015973279.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "content": "Here, " + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "inline_equation", + "content": "\\mathbb{I}(\\cdot)" + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "content": " denotes the indicator function applied element-wise. This ensures that only the top- " + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "inline_equation", + "content": "(1 - s)\\%" + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "content": " of parameters (by magnitude) across all layers and projections are retained. The masks can also be derived using gradient-based measures such as the Fisher information matrix (Guo et al., 2023; Iurada et al., 2025) or SNIP score (Lee et al., 2018). However, these methods capture local sensitivity at a specific training step, whereas magnitude reflects cumulative importance over the entire fine-tuning process." 
+ } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 504, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 504, + 138 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 504, + 138 + ], + "type": "text", + "content": "It is well established that the importance of projection matrices varies significantly across different layers and projections (Zhang et al., 2023a;d; Kopiczko et al., 2023). Our masking strategy enables global comparison of parameters and facilitates effective allocation of the parameter budget determined by the sparsity ratio. Notably, the masks for each task " + }, + { + "bbox": [ + 104, + 82, + 504, + 138 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 104, + 82, + 504, + 138 + ], + "type": "text", + "content": " are calibrated only once and can be reused as needed." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 143, + 506, + 244 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 143, + 506, + 244 + ], + "spans": [ + { + "bbox": [ + 104, + 143, + 506, + 244 + ], + "type": "text", + "content": "After mask calibration, LoRI resets " + }, + { + "bbox": [ + 104, + 143, + 506, + 244 + ], + "type": "inline_equation", + "content": "B_{t}" + }, + { + "bbox": [ + 104, + 143, + 506, + 244 + ], + "type": "text", + "content": " to zero and trains on the adaptation dataset " + }, + { + "bbox": [ + 104, + 143, + 506, + 244 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_t" + }, + { + "bbox": [ + 104, + 143, + 506, + 244 + ], + "type": "text", + "content": ", with updates restricted to the masked parameters. The LoRI adapter is expressed as " + }, + { + "bbox": [ + 104, + 143, + 506, + 244 + ], + "type": "inline_equation", + "content": "\\Delta_t = A_t(B_t \\odot M_t)" + }, + { + "bbox": [ + 104, + 143, + 506, + 244 + ], + "type": "text", + "content": ". The algorithm of LoRI is detailed in Appendix B. In practice, the sparsity ratio " + }, + { + "bbox": [ + 104, + 143, + 506, + 244 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 104, + 143, + 506, + 244 + ], + "type": "text", + "content": " can reach up to 90%, meaning that only a small fraction of parameters in matrices " + }, + { + "bbox": [ + 104, + 143, + 506, + 244 + ], + "type": "inline_equation", + "content": "B_{t}" + }, + { + "bbox": [ + 104, + 143, + 506, + 244 + ], + "type": "text", + "content": " are updated, while the majority remain unchanged. This selective adaptation enables the model to focus on modifying the most critical parameters needed for specific tasks, while preserving the foundational knowledge encoded in the pretrained base model. 
In the limiting case of a single task and zero sparsity, our method reduces to LoRA-FA (Zhang et al., 2023b), which has been shown to perform competitively with standard LoRA." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 256, + 409, + 269 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 256, + 409, + 269 + ], + "spans": [ + { + "bbox": [ + 104, + 256, + 409, + 269 + ], + "type": "text", + "content": "2.2 Reducing Interference in Adapter Merging via Orthogonality" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 277, + 504, + 344 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 277, + 504, + 344 + ], + "spans": [ + { + "bbox": [ + 104, + 277, + 504, + 344 + ], + "type": "text", + "content": "Orthogonality of LoRI Adapters. A central challenge in adapter merging is parameter interference, where combining multiple adapters leads to degraded performance due to conflicting parameter updates. Given a set of trained LoRI adapters " + }, + { + "bbox": [ + 104, + 277, + 504, + 344 + ], + "type": "inline_equation", + "content": "\\{\\Delta_1,\\Delta_2,\\dots ,\\Delta_T\\}" + }, + { + "bbox": [ + 104, + 277, + 504, + 344 + ], + "type": "text", + "content": ", the goal is to construct a unified model that combines knowledge from all tasks with minimal interference, as illustrated in Figure 2(b). 
Formally, we define the excess loss due to parameter interference for a specific task " + }, + { + "bbox": [ + 104, + 277, + 504, + 344 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 104, + 277, + 504, + 344 + ], + "type": "text", + "content": " as:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 227, + 347, + 504, + 361 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 347, + 504, + 361 + ], + "spans": [ + { + "bbox": [ + 227, + 347, + 504, + 361 + ], + "type": "interline_equation", + "content": "\\mathcal {I} _ {t} = \\mathcal {L} _ {t} \\left(W _ {\\text {merge}}\\right) - \\mathcal {L} _ {t} \\left(W _ {0} + \\alpha_ {t} \\Delta_ {t}\\right), \\tag {3}", + "image_path": "cc48563b293a51b992331ffe8bfca654694a8f3286cd115d964d729a9d97b698.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 364, + 504, + 399 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 364, + 504, + 399 + ], + "spans": [ + { + "bbox": [ + 104, + 364, + 504, + 399 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 104, + 364, + 504, + 399 + ], + "type": "inline_equation", + "content": "W_{\\mathrm{merge}}" + }, + { + "bbox": [ + 104, + 364, + 504, + 399 + ], + "type": "text", + "content": " is the merged model, " + }, + { + "bbox": [ + 104, + 364, + 504, + 399 + ], + "type": "inline_equation", + "content": "W_0" + }, + { + "bbox": [ + 104, + 364, + 504, + 399 + ], + "type": "text", + "content": " is the pretrained weight matrix, " + }, + { + "bbox": [ + 104, + 364, + 504, + 399 + ], + "type": "inline_equation", + "content": "\\Delta_t" + }, + { + "bbox": [ + 104, + 364, + 504, + 399 + ], + "type": "text", + "content": " is the LoRI adapter for task " + }, + { + "bbox": [ + 104, + 364, + 504, + 399 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 104, + 364, + 504, + 399 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 104, + 
364, + 504, + 399 + ], + "type": "inline_equation", + "content": "\\alpha_t \\in \\mathbb{R}" + }, + { + "bbox": [ + 104, + 364, + 504, + 399 + ], + "type": "text", + "content": " is a scalar weight, and " + }, + { + "bbox": [ + 104, + 364, + 504, + 399 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_t" + }, + { + "bbox": [ + 104, + 364, + 504, + 399 + ], + "type": "text", + "content": " is the loss function for task " + }, + { + "bbox": [ + 104, + 364, + 504, + 399 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 104, + 364, + 504, + 399 + ], + "type": "text", + "content": ". A high " + }, + { + "bbox": [ + 104, + 364, + 504, + 399 + ], + "type": "inline_equation", + "content": "\\mathcal{I}_t" + }, + { + "bbox": [ + 104, + 364, + 504, + 399 + ], + "type": "text", + "content": " indicates significant interference." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 403, + 504, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 403, + 504, + 437 + ], + "spans": [ + { + "bbox": [ + 104, + 403, + 504, + 437 + ], + "type": "text", + "content": "LoRI mitigates this interference by leveraging approximate orthogonality, achieved by freezing the projection matrices " + }, + { + "bbox": [ + 104, + 403, + 504, + 437 + ], + "type": "inline_equation", + "content": "A_{t}" + }, + { + "bbox": [ + 104, + 403, + 504, + 437 + ], + "type": "text", + "content": " as independent random matrices. This design leads to the following property, whose proof is provided in Appendix C:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 441, + 504, + 498 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 441, + 504, + 498 + ], + "spans": [ + { + "bbox": [ + 104, + 441, + 504, + 498 + ], + "type": "text", + "content": "Property 1. 
Let " + }, + { + "bbox": [ + 104, + 441, + 504, + 498 + ], + "type": "inline_equation", + "content": "A_s, A_t \\in \\mathbb{R}^{d_{in} \\times r}" + }, + { + "bbox": [ + 104, + 441, + 504, + 498 + ], + "type": "text", + "content": " be independent random matrices with i.i.d. entries drawn from a Kaiming Uniform distribution for distinct tasks " + }, + { + "bbox": [ + 104, + 441, + 504, + 498 + ], + "type": "inline_equation", + "content": "s \\neq t" + }, + { + "bbox": [ + 104, + 441, + 504, + 498 + ], + "type": "text", + "content": ". Let their corresponding LoRI adapters be " + }, + { + "bbox": [ + 104, + 441, + 504, + 498 + ], + "type": "inline_equation", + "content": "\\Delta_s = A_s(B_s \\odot M_s)" + }, + { + "bbox": [ + 104, + 441, + 504, + 498 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 441, + 504, + 498 + ], + "type": "inline_equation", + "content": "\\Delta_t = A_t(B_t \\odot M_t)" + }, + { + "bbox": [ + 104, + 441, + 504, + 498 + ], + "type": "text", + "content": ", where the trained matrices " + }, + { + "bbox": [ + 104, + 441, + 504, + 498 + ], + "type": "inline_equation", + "content": "(B_s \\odot M_s)" + }, + { + "bbox": [ + 104, + 441, + 504, + 498 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 441, + 504, + 498 + ], + "type": "inline_equation", + "content": "(B_t \\odot M_t)" + }, + { + "bbox": [ + 104, + 441, + 504, + 498 + ], + "type": "text", + "content": " have finite Frobenius norms. 
Under the condition that " + }, + { + "bbox": [ + 104, + 441, + 504, + 498 + ], + "type": "inline_equation", + "content": "r \\ll d_{in}" + }, + { + "bbox": [ + 104, + 441, + 504, + 498 + ], + "type": "text", + "content": ", as the input dimension " + }, + { + "bbox": [ + 104, + 441, + 504, + 498 + ], + "type": "inline_equation", + "content": "d_{in} \\to \\infty" + }, + { + "bbox": [ + 104, + 441, + 504, + 498 + ], + "type": "text", + "content": ", the adapters are approximately orthogonal:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 241, + 501, + 504, + 515 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 241, + 501, + 504, + 515 + ], + "spans": [ + { + "bbox": [ + 241, + 501, + 504, + 515 + ], + "type": "interline_equation", + "content": "\\left\\langle \\Delta_ {s}, \\Delta_ {t} \\right\\rangle_ {F} \\rightarrow 0 \\quad \\text {in probability}. \\tag {4}", + "image_path": "915052e338145f3989f7a1419f8bea87e5b17317d21f547392d7f553b85db868.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 524, + 504, + 559 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 524, + 504, + 559 + ], + "spans": [ + { + "bbox": [ + 104, + 524, + 504, + 559 + ], + "type": "text", + "content": "We describe two merging methods: concatenated merging (weighted averaging) and linear merging (Task Arithmetic) (Ilharco et al., 2022), both of which exploit the approximate orthogonality of LoRIs." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 570, + 504, + 604 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 570, + 504, + 604 + ], + "spans": [ + { + "bbox": [ + 104, + 570, + 504, + 604 + ], + "type": "text", + "content": "Concatenated Merging (Weighted Averaging). This method constructs the merged model by creating a weighted sum of individual task adapters. 
This is achieved by concatenating the weighted " + }, + { + "bbox": [ + 104, + 570, + 504, + 604 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 104, + 570, + 504, + 604 + ], + "type": "text", + "content": " and masked " + }, + { + "bbox": [ + 104, + 570, + 504, + 604 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 570, + 504, + 604 + ], + "type": "text", + "content": " matrices:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 148, + 609, + 504, + 632 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 148, + 609, + 504, + 632 + ], + "spans": [ + { + "bbox": [ + 148, + 609, + 504, + 632 + ], + "type": "interline_equation", + "content": "A^{\\prime} = \\left[ \\alpha_{1} A_{1} \\; \\alpha_{2} A_{2} \\; \\dots \\; \\alpha_{T} A_{T} \\right], \\quad B^{\\prime} = \\left[ \\left(B_{1} \\odot M_{1}\\right)^{\\top}, \\dots, \\left(B_{T} \\odot M_{T}\\right)^{\\top} \\right]^{\\top}, \\tag{5}", + "image_path": "ca6aeb4790d9f9c505b0ee6c3708e572fffbdc9654a9148680d7bad7e634f703.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 635, + 504, + 659 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 635, + 504, + 659 + ], + "spans": [ + { + "bbox": [ + 104, + 635, + 504, + 659 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 104, + 635, + 504, + 659 + ], + "type": "inline_equation", + "content": "\\alpha_{t} \\in \\mathbb{R}" + }, + { + "bbox": [ + 104, + 635, + 504, + 659 + ], + "type": "text", + "content": " are scalar weights (e.g., uniform or task-prioritized). 
The final merged model is then formed by adding their product to the base model weights:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 161, + 663, + 504, + 694 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 161, + 663, + 504, + 694 + ], + "spans": [ + { + "bbox": [ + 161, + 663, + 504, + 694 + ], + "type": "interline_equation", + "content": "W_{\\text{merge}} = W_{0} + A^{\\prime} B^{\\prime} = W_{0} + \\sum_{t=1}^{T} \\alpha_{t} A_{t} \\left(B_{t} \\odot M_{t}\\right) = W_{0} + \\sum_{t=1}^{T} \\alpha_{t} \\Delta_{t}. \\tag{6}", + "image_path": "68f65f00fb4ff91cf0266bf22ab3937f4f4ee0ae1754c3e599988ff0f41544ea.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 104, + 698, + 504, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 698, + 504, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 698, + 504, + 733 + ], + "type": "text", + "content": "By summing approximately orthogonal adapters, we ensure that the updates for each task occupy largely disjoint subspaces, thereby reducing interference (Ilharco et al., 2022; Ortiz-Jimenez et al., 2023; Xiong et al., 2024)." 
+ } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 506, + 161 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 506, + 161 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 506, + 161 + ], + "type": "text", + "content": "The reduction in interference can be explained by a theoretical sketch based on two key assumptions. The first is the local linearity of the loss landscape (Li et al., 2018), which allows for a first-order Taylor approximation. The second is the gradient alignment assumption, formally expressed as " + }, + { + "bbox": [ + 104, + 82, + 506, + 161 + ], + "type": "inline_equation", + "content": "\\nabla \\mathcal{L}_t(W_0 + \\alpha_t\\Delta_t)\\propto \\Delta_t" + }, + { + "bbox": [ + 104, + 82, + 506, + 161 + ], + "type": "text", + "content": ". This posits that at a task's solution, the direction of steepest descent is primarily aligned with the adapter updates already made for that task. 
Under these assumptions, the excess loss " + }, + { + "bbox": [ + 104, + 82, + 506, + 161 + ], + "type": "inline_equation", + "content": "\\mathcal{I}_t" + }, + { + "bbox": [ + 104, + 82, + 506, + 161 + ], + "type": "text", + "content": " is approximately the inner product of the gradient and the updates from the other tasks:" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 185, + 167, + 504, + 201 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 185, + 167, + 504, + 201 + ], + "spans": [ + { + "bbox": [ + 185, + 167, + 504, + 201 + ], + "type": "interline_equation", + "content": "\\mathcal{I}_{t} \\approx \\left\\langle \\nabla \\mathcal{L}_{t} \\left(W_{0} + \\alpha_{t} \\Delta_{t}\\right), \\sum_{s \\neq t} \\alpha_{s} \\Delta_{s} \\right\\rangle_{F} \\propto \\sum_{s \\neq t} \\alpha_{s} \\left\\langle \\Delta_{t}, \\Delta_{s} \\right\\rangle_{F}. \\tag{7}", + "image_path": "95b5585d57d1e8c4b2aa06c223ba0e4c19ae6336942774341321ecc831698689.jpg" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 205, + 504, + 242 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 205, + 504, + 242 + ], + "spans": [ + { + "bbox": [ + 104, + 205, + 504, + 242 + ], + "type": "text", + "content": "Since Property 1 establishes that " + }, + { + "bbox": [ + 104, + 205, + 504, + 242 + ], + "type": "inline_equation", + "content": "\\langle \\Delta_t, \\Delta_s \\rangle_F \\to 0" + }, + { + "bbox": [ + 104, + 205, + 504, + 242 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 104, + 205, + 504, + 242 + ], + "type": "inline_equation", + "content": "s \\neq t" + }, + { + "bbox": [ + 104, + 205, + 504, + 242 + ], + "type": "text", + "content": ", the total interference loss becomes negligible: " + }, + { + "bbox": [ + 104, + 205, + 504, + 242 + ], + "type": "inline_equation", + "content": "\\mathcal{I}_t \\approx 0" + }, + { + "bbox": [ + 104, + 205, + 504, + 242 + ], + "type": "text", 
+ "content": ". This heuristic argument provides strong intuition for why concatenated merging is effective, which is then validated by our empirical results." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 253, + 504, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 253, + 504, + 277 + ], + "spans": [ + { + "bbox": [ + 104, + 253, + 504, + 277 + ], + "type": "text", + "content": "Linear Merging (Task Arithmetic). Alternatively, the merged model can be formed by summing the " + }, + { + "bbox": [ + 104, + 253, + 504, + 277 + ], + "type": "inline_equation", + "content": "A_{t}" + }, + { + "bbox": [ + 104, + 253, + 504, + 277 + ], + "type": "text", + "content": " and masked " + }, + { + "bbox": [ + 104, + 253, + 504, + 277 + ], + "type": "inline_equation", + "content": "B_{t}" + }, + { + "bbox": [ + 104, + 253, + 504, + 277 + ], + "type": "text", + "content": " matrices independently before multiplication:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 128, + 281, + 504, + 315 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 128, + 281, + 504, + 315 + ], + "spans": [ + { + "bbox": [ + 128, + 281, + 504, + 315 + ], + "type": "interline_equation", + "content": "W_{\\text{merge}} = W_{0} + \\left(\\sum_{t=1}^{T} \\alpha_{t} A_{t}\\right) \\left(\\sum_{t=1}^{T} \\alpha_{t} \\left(B_{t} \\odot M_{t}\\right)\\right) = W_{0} + \\sum_{s=1}^{T} \\sum_{t=1}^{T} \\alpha_{s} \\alpha_{t} A_{s} \\left(B_{t} \\odot M_{t}\\right). 
\\tag {8}", + "image_path": "6e1c12611e7500e7e0ea7220d14c5659b68282ff03540ee717e2913753d782d7.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 320, + 504, + 380 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 320, + 504, + 380 + ], + "spans": [ + { + "bbox": [ + 104, + 320, + 504, + 380 + ], + "type": "text", + "content": "While concatenated merging directly sums approximately orthogonal adapters, this linear merging approach introduces problematic cross-terms " + }, + { + "bbox": [ + 104, + 320, + 504, + 380 + ], + "type": "inline_equation", + "content": "\\alpha_{s}\\alpha_{t}A_{s}(B_{t}\\odot M_{t})" + }, + { + "bbox": [ + 104, + 320, + 504, + 380 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 104, + 320, + 504, + 380 + ], + "type": "inline_equation", + "content": "s\\neq t" + }, + { + "bbox": [ + 104, + 320, + 504, + 380 + ], + "type": "text", + "content": ". These terms cause interference because components like " + }, + { + "bbox": [ + 104, + 320, + 504, + 380 + ], + "type": "inline_equation", + "content": "\\{A_s(B_t\\odot M_t)\\}_{t = 1}^T" + }, + { + "bbox": [ + 104, + 320, + 504, + 380 + ], + "type": "text", + "content": " for a fixed " + }, + { + "bbox": [ + 104, + 320, + 504, + 380 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 104, + 320, + 504, + 380 + ], + "type": "text", + "content": " are generally not mutually orthogonal. As a result, concatenated merging offers a cleaner and empirically more effective strategy for combining LoRI adapters." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 392, + 393, + 404 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 392, + 393, + 404 + ], + "spans": [ + { + "bbox": [ + 104, + 392, + 393, + 404 + ], + "type": "text", + "content": "2.3 Reducing Interference in Continual Learning via Sparsity" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 412, + 506, + 502 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 412, + 506, + 502 + ], + "spans": [ + { + "bbox": [ + 104, + 412, + 506, + 502 + ], + "type": "text", + "content": "Safety-Preserving Adapters. For safety-critical applications, ensuring that new task adaptations do not compromise established safety behaviors is crucial. Therefore, each newly introduced adapter must preserve the base model's safety alignment. A straightforward approach to achieve this is to merge a safety LoRI adapter into the deployed model during every inference. However, as we will show in Section 3.4, this method may be insufficient for scenarios that demand strong safety guarantees. In such cases, as illustrated in Figure 2(c), a more reliable solution is to adopt a two-phase continual learning process for each LoRI adapter to reinforce safety:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 116, + 510, + 504, + 577 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 117, + 510, + 504, + 537 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 117, + 510, + 504, + 537 + ], + "spans": [ + { + "bbox": [ + 117, + 510, + 504, + 537 + ], + "type": "text", + "content": "1. 
Safety Alignment Phase: Train a LoRI adapter on a curated safety dataset " + }, + { + "bbox": [ + 117, + 510, + 504, + 537 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_{\\text{safety}}" + }, + { + "bbox": [ + 117, + 510, + 504, + 537 + ], + "type": "text", + "content": ", yielding " + }, + { + "bbox": [ + 117, + 510, + 504, + 537 + ], + "type": "inline_equation", + "content": "\\Delta_{\\text{safety}} = A(B_{\\text{safety}} \\odot M_{\\text{safety}})" + }, + { + "bbox": [ + 117, + 510, + 504, + 537 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 116, + 540, + 504, + 577 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 116, + 540, + 504, + 577 + ], + "spans": [ + { + "bbox": [ + 116, + 540, + 504, + 577 + ], + "type": "text", + "content": "2. Task Adaptation Phase: Fine-tune " + }, + { + "bbox": [ + 116, + 540, + 504, + 577 + ], + "type": "inline_equation", + "content": "\\Delta_{\\text{safety}}" + }, + { + "bbox": [ + 116, + 540, + 504, + 577 + ], + "type": "text", + "content": " on each task adaptation dataset " + }, + { + "bbox": [ + 116, + 540, + 504, + 577 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_t, t = 1, 2, \\ldots, T" + }, + { + "bbox": [ + 116, + 540, + 504, + 577 + ], + "type": "text", + "content": ", reusing the calibrated task-specific masks " + }, + { + "bbox": [ + 116, + 540, + 504, + 577 + ], + "type": "inline_equation", + "content": "M_t" + }, + { + "bbox": [ + 116, + 540, + 504, + 577 + ], + "type": "text", + "content": ", resulting in safety-preserving adapters " + }, + { + "bbox": [ + 116, + 540, + 504, + 577 + ], + "type": "inline_equation", + "content": "\\Delta_t = A(B_t \\odot M_t)" + }, + { + "bbox": [ + 116, + 540, + 504, + 577 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 104, + 584, + 506, + 687 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 584, + 506, + 687 + ], + "spans": [ + { + "bbox": [ + 104, + 584, + 506, + 687 + ], + "type": "text", + "content": "This method does not require recalibrating masks for each task or performing multiple rounds of continual learning. Notably, we do not enforce non-overlapping masks " + }, + { + "bbox": [ + 104, + 584, + 506, + 687 + ], + "type": "inline_equation", + "content": "M_t \\cap M_{\\text{safety}} = \\emptyset" + }, + { + "bbox": [ + 104, + 584, + 506, + 687 + ], + "type": "text", + "content": ". Enforcing such a constraint would require recalibrating masks after the safety alignment phase due to the reduced parameter space, and could potentially degrade performance on downstream tasks. The expected overlap between sparse masks with " + }, + { + "bbox": [ + 104, + 584, + 506, + 687 + ], + "type": "inline_equation", + "content": "90\\%" + }, + { + "bbox": [ + 104, + 584, + 506, + 687 + ], + "type": "text", + "content": " sparsity is theoretically " + }, + { + "bbox": [ + 104, + 584, + 506, + 687 + ], + "type": "inline_equation", + "content": "1\\%" + }, + { + "bbox": [ + 104, + 584, + 506, + 687 + ], + "type": "text", + "content": ". Empirically, we find that this expectation holds: the average overlap between task-specific masks is indeed " + }, + { + "bbox": [ + 104, + 584, + 506, + 687 + ], + "type": "inline_equation", + "content": "\\sim 1\\%" + }, + { + "bbox": [ + 104, + 584, + 506, + 687 + ], + "type": "text", + "content": ", without explicitly enforcing non-overlap. This slight overlap allows important parameters to be shared across tasks, potentially enabling positive knowledge transfer." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 698, + 504, + 734 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 698, + 504, + 734 + ], + "spans": [ + { + "bbox": [ + 104, + 698, + 504, + 734 + ], + "type": "text", + "content": "Catastrophic Forgetting. Continual learning models are vulnerable to catastrophic forgetting (Li & Hoiem, 2017; Dong et al., 2023; Luo et al., 2023), where updates for new tasks can overwrite and degrade previously learned knowledge. Despite the slight overlap between" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 504, + 118 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 504, + 118 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 504, + 118 + ], + "type": "text", + "content": "task-specific masks, the sparsity in " + }, + { + "bbox": [ + 104, + 82, + 504, + 118 + ], + "type": "inline_equation", + "content": "B_{t}" + }, + { + "bbox": [ + 104, + 82, + 504, + 118 + ], + "type": "text", + "content": " induced by " + }, + { + "bbox": [ + 104, + 82, + 504, + 118 + ], + "type": "inline_equation", + "content": "M_{t}" + }, + { + "bbox": [ + 104, + 82, + 504, + 118 + ], + "type": "text", + "content": " enables LoRI to facilitate isolated parameter updates for safety 
alignment and task adaptation. As a result, LoRI minimizes cross-task interference and mitigates catastrophic forgetting in safety alignment." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 136, + 195, + 149 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 136, + 195, + 149 + ], + "spans": [ + { + "bbox": [ + 105, + 136, + 195, + 149 + ], + "type": "text", + "content": "3 Experiments" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 163, + 222, + 176 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 163, + 222, + 176 + ], + "spans": [ + { + "bbox": [ + 105, + 163, + 222, + 176 + ], + "type": "text", + "content": "3.1 Experimental Setup" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 186, + 506, + 340 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 186, + 506, + 340 + ], + "spans": [ + { + "bbox": [ + 104, + 186, + 506, + 340 + ], + "type": "text", + "content": "Datasets. We conduct a series of experiments to evaluate LoRI's effectiveness on single-task and multi-task settings, including adapter merging and continual learning. We focus on four capabilities: (i) Natural Language Understanding (NLU): LoRI is trained on the aggregation of eight NLU datasets (Hu et al., 2023), including BoolQ (Clark et al., 2019), PIQA (Bisk et al., 2020), SocialIQA (Sap et al., 2019), ARC-Challenge (Clark et al., 2018), ARC-Easy (Clark et al., 2018), OpenBookQA (Mihaylov et al., 2018), HellaSwag (Zellers et al., 2019), and Winogrande (Sakaguchi et al., 2021). We evaluate accuracy on the individual test split for each dataset. (ii) Mathematical Reasoning (Math): LoRI is trained on the GSM8K (Cobbe et al., 2021) training split and evaluated on the GSM8K test split. (iii) Code Generation (Code): LoRI is trained on CodeAlpaca (Chaudhary, 2023) and evaluated using pass@1, pass@5, and pass@10 on HumanEval (Chen et al., 2021). 
(iv) Safety Alignment (Safety): LoRI is trained on Saferpaca (Bianchi et al., 2023), which extends Alpaca-Cleaned (Taori et al., 2023) with 2,000 safety instructions. Safety performance is assessed by measuring the refusal rate on harmful queries from HEx-PHI (Qi et al., 2023)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 356, + 506, + 501 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 356, + 506, + 501 + ], + "spans": [ + { + "bbox": [ + 104, + 356, + 506, + 501 + ], + "type": "text", + "content": "Baselines. In single-task experiments, we compare LoRI with full fine-tuning (FFT), LoRA (Hu et al., 2021), and DoRA (Liu et al., 2024). Results for additional PEFT baselines, including VeRA (Kopiczko et al., 2023), IA3 (Liu et al., 2022), LoRA-FA (Zhang et al., 2023b), AdaLoRA (Zhang et al., 2023d), rsLoRA (Kalajdzievski, 2023), PiSSA (Meng et al., 2024), and LoRA+ (Hayou et al., 2024), are available in Appendix E.1. In merging experiments, we compare LoRI merging with several LoRA merging methods, including concatenated merging, linear merging (Ilharco et al., 2022), magnitude pruning, TIES-Merging (Yadav et al., 2023), and DARE (Yu et al., 2024). Magnitude pruning, TIES, and DARE are pruning-based approaches that apply sparsification to the " + }, + { + "bbox": [ + 104, + 356, + 506, + 501 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 104, + 356, + 506, + 501 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 356, + 506, + 501 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 356, + 506, + 501 + ], + "type": "text", + "content": " matrices before merging, based on a specified density. Magnitude pruning removes low-magnitude parameters; TIES-Merging further merges weights with consistent signs; and DARE performs random pruning followed by rescaling. For fair comparison, all baseline results are reproduced using a consistent experimental setup." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 515, + 506, + 638 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 515, + 506, + 638 + ], + "spans": [ + { + "bbox": [ + 104, + 515, + 506, + 638 + ], + "type": "text", + "content": "Implementation Details. We use Llama-3-8B (Grattafiori et al., 2024) and Mistral-7B (Jiang et al., 2023) as base models. We conduct all experiments on 8 NVIDIA A5000 GPUs. To explore the impact of sparsity, we provide two variants of LoRI: LoRI-D, which uses dense " + }, + { + "bbox": [ + 104, + 515, + 506, + 638 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 515, + 506, + 638 + ], + "type": "text", + "content": " matrices, and LoRI-S, which applies " + }, + { + "bbox": [ + 104, + 515, + 506, + 638 + ], + "type": "inline_equation", + "content": "90\\%" + }, + { + "bbox": [ + 104, + 515, + 506, + 638 + ], + "type": "text", + "content": " sparsity to " + }, + { + "bbox": [ + 104, + 515, + 506, + 638 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 515, + 506, + 638 + ], + "type": "text", + "content": ". Sparsity is implemented by masking the gradients of " + }, + { + "bbox": [ + 104, + 515, + 506, + 638 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 515, + 506, + 638 + ], + "type": "text", + "content": " during backpropagation. For optimal performance, we use the entire adaptation dataset as the calibration dataset for each task. Ablation results for calibration are presented in Section 3.5. For consistency, we use the same hyperparameters for PEFT baselines as for LoRI-D. For all adapter merging experiments, uniform weights " + }, + { + "bbox": [ + 104, + 515, + 506, + 638 + ], + "type": "inline_equation", + "content": "\\alpha_{t}" + }, + { + "bbox": [ + 104, + 515, + 506, + 638 + ], + "type": "text", + "content": " are employed across all adapters. 
The weights " + }, + { + "bbox": [ + 104, + 515, + 506, + 638 + ], + "type": "inline_equation", + "content": "\\alpha_{t}" + }, + { + "bbox": [ + 104, + 515, + 506, + 638 + ], + "type": "text", + "content": " are treated as hyperparameters, and their ablation study is detailed in Section 3.5. Detailed hyperparameter settings are provided in Appendix D." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 654, + 246, + 667 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 654, + 246, + 667 + ], + "spans": [ + { + "bbox": [ + 105, + 654, + 246, + 667 + ], + "type": "text", + "content": "3.2 Single-Task Performance" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 676, + 504, + 734 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 676, + 504, + 734 + ], + "spans": [ + { + "bbox": [ + 104, + 676, + 504, + 734 + ], + "type": "text", + "content": "Table 1 presents single-task performance on eight NLU benchmarks, while Table 2 reports single-task performance on the math, code, and safety benchmarks. Results for additional PEFT baselines are available in Appendix E.1. The rank for our experiments is set to " + }, + { + "bbox": [ + 104, + 676, + 504, + 734 + ], + "type": "inline_equation", + "content": "r = 32" + }, + { + "bbox": [ + 104, + 676, + 504, + 734 + ], + "type": "text", + "content": ". We observed stable performance across different ranks, with additional results for " + }, + { + "bbox": [ + 104, + 676, + 504, + 734 + ], + "type": "inline_equation", + "content": "r = 64" + }, + { + "bbox": [ + 104, + 676, + 504, + 734 + ], + "type": "text", + "content": " provided in Appendix E.2." 
+ } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 107, + 122, + 504, + 254 + ], + "blocks": [ + { + "bbox": [ + 104, + 79, + 504, + 114 + ], + "lines": [ + { + "bbox": [ + 104, + 79, + 504, + 114 + ], + "spans": [ + { + "bbox": [ + 104, + 79, + 504, + 114 + ], + "type": "text", + "content": "Table 1: Performance comparison of different adaptation methods on eight NLU benchmarks using Llama-3 and Mistral with " + }, + { + "bbox": [ + 104, + 79, + 504, + 114 + ], + "type": "inline_equation", + "content": "r = 32" + }, + { + "bbox": [ + 104, + 79, + 504, + 114 + ], + "type": "text", + "content": ". **Bold** indicates the best-performing method, and **underline** indicates the second-best." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 107, + 122, + 504, + 254 + ], + "lines": [ + { + "bbox": [ + 107, + 122, + 504, + 254 + ], + "spans": [ + { + "bbox": [ + 107, + 122, + 504, + 254 + ], + "type": "table", + "html": "
Method# Params (%)BoolQPIQASIQAARC-cARC-eOBQAHellaSWinoGAvg.
Llama-3-8B
FFT8.03G (100%)73.886.877.676.787.684.193.285.183.1
LoRA84M (1.03%)76.389.882.783.491.788.495.888.787.1
DoRA85M (1.05%)75.989.882.783.593.287.995.388.287.1
LoRI-D44M (0.54%)76.489.082.784.293.688.595.987.987.3
LoRI-S4.4M (0.05%)75.289.282.883.892.688.495.287.586.8
Mistral-7B
FFT7.24G (100%)74.184.678.079.390.588.494.483.584.1
LoRA84M (1.15%)75.290.182.982.992.088.795.188.186.9
DoRA85M (1.16%)75.890.482.983.392.690.696.387.987.5
LoRI-D44M (0.60%)75.990.683.083.691.988.495.987.487.1
LoRI-S4.4M (0.06%)74.090.182.682.691.590.895.587.586.8
", + "image_path": "dea191fa48023f37272a1a49db3c1212719759aabc6031117e5a8d2063f6b2fd.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 143, + 324, + 470, + 479 + ], + "blocks": [ + { + "bbox": [ + 104, + 281, + 504, + 316 + ], + "lines": [ + { + "bbox": [ + 104, + 281, + 504, + 316 + ], + "spans": [ + { + "bbox": [ + 104, + 281, + 504, + 316 + ], + "type": "text", + "content": "Table 2: Performance comparison of different adaptation methods on GSM8K (math), HumanEval (code), and HEx-PHI (safety) benchmarks using Llama-3 and Mistral with " + }, + { + "bbox": [ + 104, + 281, + 504, + 316 + ], + "type": "inline_equation", + "content": "r = 32" + }, + { + "bbox": [ + 104, + 281, + 504, + 316 + ], + "type": "text", + "content": ". Bold indicates the best-performing method, and underline indicates the second-best." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 143, + 324, + 470, + 479 + ], + "lines": [ + { + "bbox": [ + 143, + 324, + 470, + 479 + ], + "spans": [ + { + "bbox": [ + 143, + 324, + 470, + 479 + ], + "type": "table", + "html": "
Method# Params (%)GSM8KHumanEvalHEx-PHI
Pass@1Pass@5Pass@10
Llama-3-8B
FFT8.03G (100%)58.830.539.341.794.8
LoRA84M (1.03%)64.434.746.450.891.6
DoRA85M (1.05%)65.433.144.048.693.6
LoRI-D44M (0.54%)63.243.257.663.292.8
LoRI-S4.4M (0.05%)62.741.354.459.693.8
Mistral-7B
FFT7.24G (100%)55.529.138.540.494.1
LoRA84M (1.15%)57.833.842.445.391.9
DoRA85M (1.16%)57.533.742.646.895.3
LoRI-D44M (0.60%)58.033.842.045.194.7
LoRI-S4.4M (0.06%)57.133.743.648.195.9
", + "image_path": "574b386d7041c8bb5c61c6f9568f32dc489aad22da229933747d96a0c22a481e.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "spans": [ + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "text", + "content": "While full fine-tuning (FFT) updates all model parameters, LoRA and DoRA reduce the number of trainable parameters to approximately " + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "inline_equation", + "content": "1\\%" + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "text", + "content": ". LoRI-D further reduces this to about " + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "inline_equation", + "content": "0.5\\%" + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "text", + "content": " by freezing matrices " + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "text", + "content": ", and LoRI-S pushes this reduction to " + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "inline_equation", + "content": "0.05\\%" + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "text", + "content": " by applying " + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "inline_equation", + "content": "90\\%" + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "text", + "content": " sparsity to matrices " + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "text", + "content": ", achieving a " + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "inline_equation", + "content": "95\\%" + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + 
"type": "text", + "content": " reduction in trainable parameters compared to LoRA. Despite tuning fewer parameters, LoRI-D and LoRI-S achieve performance comparable to, and even better than, LoRA and DoRA on NLU, math, code, and safety tasks. LoRI-D generally outperforms LoRI-S slightly, due to the extremely limited parameter budget in LoRI-S. Remarkably, LoRI-D and LoRI-S consistently outperform FFT, LoRA, and DoRA on code generation tasks. On HumanEval with Llama-3, LoRI-D achieves a pass@10 score of " + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "inline_equation", + "content": "63.2\\%" + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "text", + "content": ", outperforming LoRA by " + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "inline_equation", + "content": "24.4\\%" + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "text", + "content": ". LoRI-S achieves " + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "inline_equation", + "content": "59.6\\%" + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "text", + "content": " pass@10, exceeding LoRA by " + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "inline_equation", + "content": "17.3\\%" + }, + { + "bbox": [ + 104, + 516, + 506, + 638 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "content": "The strong performance of LoRI-D suggests that effective adaptation can be achieved without updating " + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "content": ", while the strong performance of LoRI-S indicates that " + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "content": " contains substantial parameter redundancy. LoRI's performance gains are attributed to the principled use of sparsity, which serves as a strong regularizer during adaptation. Additionally, LoRI preserves latent task-specific knowledge embedded in the pretrained model. This supports the view that supervised fine-tuning (SFT) primarily unlocks capabilities already present in pretrained models, rather than introducing new ones, which is consistent with findings from Liu et al. (2024); Yu et al. (2024)." 
+ } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 301, + 750, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 750, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 301, + 750, + 309, + 760 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 124, + 145, + 488, + 280 + ], + "blocks": [ + { + "bbox": [ + 104, + 79, + 506, + 135 + ], + "lines": [ + { + "bbox": [ + 104, + 79, + 506, + 135 + ], + "spans": [ + { + "bbox": [ + 104, + 79, + 506, + 135 + ], + "type": "text", + "content": "Table 3: Comparison of merging methods for combining four adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Llama-3-8B, rank " + }, + { + "bbox": [ + 104, + 79, + 506, + 135 + ], + "type": "inline_equation", + "content": "r = 32" + }, + { + "bbox": [ + 104, + 79, + 506, + 135 + ], + "type": "text", + "content": ". **Bold** indicates the best-performing method, and **underline** indicates the second-best." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 124, + 145, + 488, + 280 + ], + "lines": [ + { + "bbox": [ + 124, + 145, + 488, + 280 + ], + "spans": [ + { + "bbox": [ + 124, + 145, + 488, + 280 + ], + "type": "table", + "html": "
<table><tr><td rowspan="2">Merging</td><td rowspan="2">Adaptation</td><td rowspan="2">NLU</td><td rowspan="2">GSM8K</td><td colspan="3">HumanEval</td><td rowspan="2">HEx-PHI</td></tr>
<tr><td>Pass@1</td><td>Pass@5</td><td>Pass@10</td></tr>
<tr><td>Single-Task</td><td>LoRI-D</td><td>87.3</td><td>63.2</td><td>43.2</td><td>57.6</td><td>63.2</td><td>92.8</td></tr>
<tr><td>Concat</td><td>LoRA</td><td>85.0</td><td>57.8</td><td>13.0</td><td>20.0</td><td>22.3</td><td>84.4</td></tr>
<tr><td>Linear</td><td>LoRA</td><td>84.8</td><td>54.1</td><td>14.2</td><td>20.8</td><td>23.3</td><td>79.4</td></tr>
<tr><td>Magnitude</td><td>LoRA</td><td>81.9</td><td>50.3</td><td>24.1</td><td>36.7</td><td>42.4</td><td>74.4</td></tr>
<tr><td>TIES</td><td>LoRA</td><td>72.6</td><td>24.0</td><td>32.5</td><td>46.3</td><td>51.7</td><td>77.8</td></tr>
<tr><td>DARE</td><td>LoRA</td><td>79.1</td><td>48.9</td><td>34.1</td><td>48.7</td><td>53.5</td><td>74.1</td></tr>
<tr><td>Concat</td><td>LoRI-D</td><td>83.2</td><td>55.8</td><td>40.5</td><td>56.9</td><td>62.2</td><td>86.6</td></tr>
<tr><td>Linear</td><td>LoRI-D</td><td>82.5</td><td>53.8</td><td>40.9</td><td>54.9</td><td>60.3</td><td>85.9</td></tr>
<tr><td>Concat</td><td>LoRI-S</td><td>81.2</td><td>45.2</td><td>34.3</td><td>48.7</td><td>54.0</td><td>84.7</td></tr>
<tr><td>Linear</td><td>LoRI-S</td><td>79.1</td><td>41.3</td><td>23.2</td><td>36.6</td><td>42.3</td><td>78.8</td></tr></table>
", + "image_path": "6f8e6b8bbcc547d245480d4bd82a15987de386e346f1158ef17702d74fdc3063.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 301, + 211, + 316 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 301, + 211, + 316 + ], + "spans": [ + { + "bbox": [ + 105, + 301, + 211, + 316 + ], + "type": "text", + "content": "3.3 Adapter Merging" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 323, + 506, + 425 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 323, + 506, + 425 + ], + "spans": [ + { + "bbox": [ + 104, + 323, + 506, + 425 + ], + "type": "text", + "content": "We consider four heterogeneous tasks for LoRA and LoRI merging: NLU, math, code, and safety. This setting is generally more challenging than merging homogeneous adapters, such as merging multiple NLU adapters. Table 3 presents results for merging LoRAs and LoRIs on these four tasks. For LoRI, we apply concatenated and linear merging to the LoRI-D and LoRI-S variants. 
Pruning-based methods such as magnitude pruning, TIES, and DARE are not applied to LoRI, since these methods will prune the " + }, + { + "bbox": [ + 104, + 323, + 506, + 425 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 104, + 323, + 506, + 425 + ], + "type": "text", + "content": " matrices as LoRI already sparsifies " + }, + { + "bbox": [ + 104, + 323, + 506, + 425 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 323, + 506, + 425 + ], + "type": "text", + "content": ", resulting in an inconsistent pruning scheme across " + }, + { + "bbox": [ + 104, + 323, + 506, + 425 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 104, + 323, + 506, + 425 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 323, + 506, + 425 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 323, + 506, + 425 + ], + "type": "text", + "content": ". Additional results, including experiments on merging three adapters and evaluations of pruning-based methods on LoRI, are provided in Appendix E.4 and E.5." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 428, + 504, + 474 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 428, + 504, + 474 + ], + "spans": [ + { + "bbox": [ + 104, + 428, + 504, + 474 + ], + "type": "text", + "content": "As shown in Table 3, directly merging LoRAs results in substantial performance degradation, particularly for code generation and safety alignment. Although pruning-based methods (e.g., DARE, TIES) improve code performance, they often compromise accuracy on other tasks. In contrast, LoRI achieves consistently strong performance across all tasks." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 478, + 504, + 545 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 478, + 504, + 545 + ], + "spans": [ + { + "bbox": [ + 104, + 478, + 504, + 545 + ], + "type": "text", + "content": "Concatenated merging with LoRI-D achieves the best overall performance, closely matching the single-task baseline, which indicates minimal interference between LoRI adapters. For instance, it achieves " + }, + { + "bbox": [ + 104, + 478, + 504, + 545 + ], + "type": "inline_equation", + "content": "62.2\\%" + }, + { + "bbox": [ + 104, + 478, + 504, + 545 + ], + "type": "text", + "content": " pass@10 on HumanEval and an " + }, + { + "bbox": [ + 104, + 478, + 504, + 545 + ], + "type": "inline_equation", + "content": "86.6\\%" + }, + { + "bbox": [ + 104, + 478, + 504, + 545 + ], + "type": "text", + "content": " refusal rate on HEx-PHI. Despite using only " + }, + { + "bbox": [ + 104, + 478, + 504, + 545 + ], + "type": "inline_equation", + "content": "5\\%" + }, + { + "bbox": [ + 104, + 478, + 504, + 545 + ], + "type": "text", + "content": " of the parameters of LoRA, LoRI-S retains competitive performance. Notably, on code and safety tasks, concatenated merging with LoRI-S outperforms all LoRA merging methods." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 549, + 506, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 549, + 506, + 628 + ], + "spans": [ + { + "bbox": [ + 104, + 549, + 506, + 628 + ], + "type": "text", + "content": "Linear merging with LoRI also performs competitively, though it lags slightly behind concatenated merging due to cross-term interactions that introduce some interference. LoRI eliminates the need for manual selection of merging methods: simple concatenated merging yields strong results. The choice between LoRI-D and LoRI-S can then be guided by the desired trade-off between performance and parameter efficiency. We also note an important trade-off between code generation performance and other domains during adapter merging, a phenomenon further explored in Section 3.5." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 643, + 222, + 657 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 643, + 222, + 657 + ], + "spans": [ + { + "bbox": [ + 105, + 643, + 222, + 657 + ], + "type": "text", + "content": "3.4 Continual Learning" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "content": "While merging adapters enables multi-task capabilities, it falls short of providing robust safety alignment in scenarios that demand strong safety guarantees. As shown in Table 3, the highest refusal rate on HEx-PHI achieved through LoRA or LoRI merging is " + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "inline_equation", + "content": "86.6\\%" + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "content": ". To address this limitation, we adopt a two-phase training process: first, a safety adapter is trained on the safety alignment dataset Saferpaca; then, it is individually adapted to each downstream task, including NLU, math, and code."
+ } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 111, + 83, + 504, + 200 + ], + "blocks": [ + { + "bbox": [ + 111, + 83, + 504, + 200 + ], + "lines": [ + { + "bbox": [ + 111, + 83, + 504, + 200 + ], + "spans": [ + { + "bbox": [ + 111, + 83, + 504, + 200 + ], + "type": "image", + "image_path": "8c1d20c92d0e7590d20654db0d23eee565a021dbcb006488d103caa7576dd0a8.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 208, + 504, + 239 + ], + "lines": [ + { + "bbox": [ + 104, + 208, + 504, + 239 + ], + "spans": [ + { + "bbox": [ + 104, + 208, + 504, + 239 + ], + "type": "text", + "content": "Figure 3: Continual learning results from safety to NLU, math, and code domains. Results for NLU are averaged over eight tasks. GSM8K accuracy, HumanEval pass@10, and HEx-PHI refusal rate are reported individually. Base model: Llama-3-8B, rank " + }, + { + "bbox": [ + 104, + 208, + 504, + 239 + ], + "type": "inline_equation", + "content": "r = 32" + }, + { + "bbox": [ + 104, + 208, + 504, + 239 + ], + "type": "text", + "content": "."
+ } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 123, + 255, + 297, + 361 + ], + "blocks": [ + { + "bbox": [ + 123, + 255, + 297, + 361 + ], + "lines": [ + { + "bbox": [ + 123, + 255, + 297, + 361 + ], + "spans": [ + { + "bbox": [ + 123, + 255, + 297, + 361 + ], + "type": "image", + "image_path": "162d9ff1efefe62f414fe64facb19cba51d7cd7f30e0907041057071f5acf292.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 309, + 252, + 487, + 361 + ], + "blocks": [ + { + "bbox": [ + 309, + 252, + 487, + 361 + ], + "lines": [ + { + "bbox": [ + 309, + 252, + 487, + 361 + ], + "spans": [ + { + "bbox": [ + 309, + 252, + 487, + 361 + ], + "type": "image", + "image_path": "a9587bb9a047f741a1aad793265a30edeb10f5c174f974a01bc4155d2c385d2f.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 116, + 380, + 291, + 486 + ], + "blocks": [ + { + "bbox": [ + 153, + 368, + 268, + 380 + ], + "lines": [ + { + "bbox": [ + 153, + 368, + 268, + 380 + ], + "spans": [ + { + "bbox": [ + 153, + 368, + 268, + 380 + ], + "type": "text", + "content": "(a) Effect of calibration steps." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 116, + 380, + 291, + 486 + ], + "lines": [ + { + "bbox": [ + 116, + 380, + 291, + 486 + ], + "spans": [ + { + "bbox": [ + 116, + 380, + 291, + 486 + ], + "type": "image", + "image_path": "4736b8e087c9df69fffd2d504fa1bf7f7e710aab4210389b186572a533c25260.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 141, + 491, + 265, + 503 + ], + "lines": [ + { + "bbox": [ + 141, + 491, + 265, + 503 + ], + "spans": [ + { + "bbox": [ + 141, + 491, + 265, + 503 + ], + "type": "text", + "content": "(c) Effect of mask granularities." 
+ } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 104, + 512, + 504, + 533 + ], + "lines": [ + { + "bbox": [ + 104, + 512, + 504, + 533 + ], + "spans": [ + { + "bbox": [ + 104, + 512, + 504, + 533 + ], + "type": "text", + "content": "Figure 4: Ablation studies across different settings. Base model: Llama-3-8B, rank " + }, + { + "bbox": [ + 104, + 512, + 504, + 533 + ], + "type": "inline_equation", + "content": "r = 32" + }, + { + "bbox": [ + 104, + 512, + 504, + 533 + ], + "type": "text", + "content": ". Additional ablation studies are provided in Appendix F." + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 318, + 380, + 495, + 486 + ], + "blocks": [ + { + "bbox": [ + 302, + 368, + 489, + 380 + ], + "lines": [ + { + "bbox": [ + 302, + 368, + 489, + 380 + ], + "spans": [ + { + "bbox": [ + 302, + 368, + 489, + 380 + ], + "type": "text", + "content": "(b) Sparsity ratios across layers and projections." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 318, + 380, + 495, + 486 + ], + "lines": [ + { + "bbox": [ + 318, + 380, + 495, + 486 + ], + "spans": [ + { + "bbox": [ + 318, + 380, + 495, + 486 + ], + "type": "image", + "image_path": "31335e88f00e33e29f1b10efff9ce994dbee1c672b25d3686fd244d2b5189c0e.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 347, + 491, + 466, + 504 + ], + "lines": [ + { + "bbox": [ + 347, + 491, + 466, + 504 + ], + "spans": [ + { + "bbox": [ + 347, + 491, + 466, + 504 + ], + "type": "text", + "content": "(d) Effect of merging weights." 
+ } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 553, + 506, + 674 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 553, + 506, + 674 + ], + "spans": [ + { + "bbox": [ + 104, + 553, + 506, + 674 + ], + "type": "text", + "content": "Figure 3 presents results from these continual learning experiments. LoRA exhibits severe catastrophic forgetting on safety alignment – particularly in the safety " + }, + { + "bbox": [ + 104, + 553, + 506, + 674 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 104, + 553, + 506, + 674 + ], + "type": "text", + "content": " NLU experiment – likely due to the large size of the NLU training split (" + }, + { + "bbox": [ + 104, + 553, + 506, + 674 + ], + "type": "inline_equation", + "content": "\\sim 170\\mathrm{k}" + }, + { + "bbox": [ + 104, + 553, + 506, + 674 + ], + "type": "text", + "content": " examples). Among all methods, LoRI-S achieves the best preservation of safety alignment, even outperforming single-task LoRI-D. This is due to its " + }, + { + "bbox": [ + 104, + 553, + 506, + 674 + ], + "type": "inline_equation", + "content": "90\\%" + }, + { + "bbox": [ + 104, + 553, + 506, + 674 + ], + "type": "text", + "content": " sparsity in the " + }, + { + "bbox": [ + 104, + 553, + 506, + 674 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 553, + 506, + 674 + ], + "type": "text", + "content": " matrices, which enables isolated parameter updates between the initial safety alignment and subsequent task adaptations. LoRI-D also shows some resistance to forgetting, benefiting from frozen " + }, + { + "bbox": [ + 104, + 553, + 506, + 674 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 104, + 553, + 506, + 674 + ], + "type": "text", + "content": " matrices. 
For task adaptation, LoRI-D generally outperforms LoRI-S, as the latter's aggressive sparsity limits its adaptation capacity. Overall, LoRI offers a lightweight and effective approach to building safety adapters that preserve alignment while supporting adaptation to downstream tasks." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 689, + 208, + 700 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 689, + 208, + 700 + ], + "spans": [ + { + "bbox": [ + 105, + 689, + 208, + 700 + ], + "type": "text", + "content": "3.5 Ablation Studies" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 709, + 504, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 709, + 504, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 709, + 504, + 733 + ], + "type": "text", + "content": "Calibration Steps. Calibration steps refer to the number of update steps used to generate sparse masks for each task. Figure 4(a) shows how performance of LoRI-S changes with" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 308, + 759 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 308, + 759 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 308, + 759 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 504, + 128 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 504, + 128 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 504, + 128 + ], + "type": "text", + "content": 
"different numbers of calibration steps on math and code tasks. We observe that performance generally improves as the number of calibration steps increases. Since the masks only need to be calibrated once per task and can be reused, we use the entire adaptation dataset as the calibration dataset to achieve the best performance." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 139, + 506, + 219 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 139, + 506, + 219 + ], + "spans": [ + { + "bbox": [ + 104, + 139, + 506, + 219 + ], + "type": "text", + "content": "Sparsity Ratio. We use model-wise masks in our experiments that retain the highest-magnitude parameters across all layers and projections. Figure 4(b) presents the sparsity ratios of different projection types (e.g., up, down, key, value) across layers under a " + }, + { + "bbox": [ + 104, + 139, + 506, + 219 + ], + "type": "inline_equation", + "content": "90\\%" + }, + { + "bbox": [ + 104, + 139, + 506, + 219 + ], + "type": "text", + "content": " sparsity on GSM8K. We observe that feedforward (FFN) projections tend to retain more parameters (i.e., lower sparsity) than self-attention projections, indicating they are more critical for adaptation. Additionally, the top layers are less sparse than lower layers, suggesting that the top layers play a more important role in adaptation." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 228, + 506, + 308 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 228, + 506, + 308 + ], + "spans": [ + { + "bbox": [ + 104, + 228, + 506, + 308 + ], + "type": "text", + "content": "Mask Granularity. We compare five levels of mask granularity under " + }, + { + "bbox": [ + 104, + 228, + 506, + 308 + ], + "type": "inline_equation", + "content": "90\\%" + }, + { + "bbox": [ + 104, + 228, + 506, + 308 + ], + "type": "text", + "content": " sparsity on GSM8K, as shown in Figure 4(c). 
We compare module-wise, projection-wise, layer-wise, and matrix-wise masking against our model-wise masking, where parameters are selected within progressively smaller scopes. We find that coarse-grained masking (e.g., model-wise) yields the best performance, while fine-grained masking (e.g., matrix-wise) results in degradation. This suggests that global magnitude-based selection enables better parameter allocation, as the importance of projection matrices varies across the model." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 318, + 506, + 387 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 318, + 506, + 387 + ], + "spans": [ + { + "bbox": [ + 104, + 318, + 506, + 387 + ], + "type": "text", + "content": "Merging Weights. We adopt uniform weights across all adapters for adapter merging, rather than task-specific weights, as we do not wish to prioritize any individual task. Figure 4(d) shows the effect of different merging weights (0.2, 0.3, 0.4) for concatenated merging with LoRI-S. We observe that LoRI is moderately sensitive to merging weights, with a noticeable trade-off between performance on code tasks and other domains. We adopt 0.3 for all adapters in LoRI-S merging, as it offers a balanced performance across domains." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 401, + 189, + 415 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 401, + 189, + 415 + ], + "spans": [ + { + "bbox": [ + 105, + 401, + 189, + 415 + ], + "type": "text", + "content": "4 Conclusion" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 426, + 506, + 538 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 426, + 506, + 538 + ], + "spans": [ + { + "bbox": [ + 104, + 426, + 506, + 538 + ], + "type": "text", + "content": "In this work, we introduced LoRI, a simple yet effective approach to parameter-efficient fine-tuning (PEFT) that substantially reduces trainable parameters while minimizing cross-task interference. By freezing the projection matrices " + }, + { + "bbox": [ + 104, + 426, + 506, + 538 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 104, + 426, + 506, + 538 + ], + "type": "text", + "content": " as random projections and sparsifying " + }, + { + "bbox": [ + 104, + 426, + 506, + 538 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 426, + 506, + 538 + ], + "type": "text", + "content": " using task-specific masks, LoRI achieves strong single-task performance across diverse domains – including natural language understanding, mathematical reasoning, code generation, and safety alignment – while reducing trainable parameters by up to " + }, + { + "bbox": [ + 104, + 426, + 506, + 538 + ], + "type": "inline_equation", + "content": "95\\%" + }, + { + "bbox": [ + 104, + 426, + 506, + 538 + ], + "type": "text", + "content": " compared to LoRA. Furthermore, LoRI enables training-free adapter merging with minimal performance degradation, and supports continual learning with significantly reduced catastrophic forgetting. It also provides a lightweight approach to building safety adapters that preserve the safety alignment of the base model." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 548, + 506, + 617 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 548, + 506, + 617 + ], + "spans": [ + { + "bbox": [ + 104, + 548, + 506, + 617 + ], + "type": "text", + "content": "Future Work. We identify several promising avenues for extending this work. While LoRI currently leverages unstructured magnitude-based sparsity, future research can explore structured sparsity patterns – such as block sparsity, head pruning, or group-wise masking – which may offer better hardware compatibility. Additionally, although this study focuses on LLMs, the core design of LoRI is modality-agnostic. Extending LoRI to diffusion and vision-language models for multi-modal generation is a promising direction." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 632, + 219, + 647 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 632, + 219, + 647 + ], + "spans": [ + { + "bbox": [ + 105, + 632, + 219, + 647 + ], + "type": "text", + "content": "Acknowledgements" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 657, + 505, + 703 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 657, + 505, + 703 + ], + "spans": [ + { + "bbox": [ + 104, + 657, + 505, + 703 + ], + "type": "text", + "content": "This material is based upon work partially supported by the NSF Grant No. 2229885 (NSF Institute for Trustworthy AI in Law and Society, TRAILS). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation." 
+ } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 312, + 761 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 312, + 761 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 312, + 761 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 106, + 81, + 168, + 93 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 81, + 168, + 93 + ], + "spans": [ + { + "bbox": [ + 106, + 81, + 168, + 93 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 99, + 505, + 732 + ], + "type": "list", + "angle": 0, + "index": 18, + "blocks": [ + { + "bbox": [ + 105, + 99, + 505, + 134 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 99, + 505, + 134 + ], + "spans": [ + { + "bbox": [ + 105, + 99, + 505, + 134 + ], + "type": "text", + "content": "Federico Bianchi, Mirac Suzgun, Giuseppe Attanasio, Paul Röttger, Dan Jurafsky, Tatsunori Hashimoto, and James Zou. Safety-tuned llamas: Lessons from improving the safety of large language models that follow instructions. arXiv preprint arXiv:2309.07875, 2023." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 139, + 505, + 174 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 139, + 505, + 174 + ], + "spans": [ + { + "bbox": [ + 105, + 139, + 505, + 174 + ], + "type": "text", + "content": "Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. 
Piqa: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 7432-7439, 2020." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 106, + 178, + 505, + 224 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 178, + 505, + 224 + ], + "spans": [ + { + "bbox": [ + 106, + 178, + 505, + 224 + ], + "type": "text", + "content": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33: 1877-1901, 2020." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 229, + 404, + 243 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 229, + 404, + 243 + ], + "spans": [ + { + "bbox": [ + 105, + 229, + 404, + 243 + ], + "type": "text", + "content": "Rich Caruana. Multitask learning. Machine learning, 28:41-75, 1997." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 247, + 504, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 247, + 504, + 270 + ], + "spans": [ + { + "bbox": [ + 105, + 247, + 504, + 270 + ], + "type": "text", + "content": "Sahil Chaudhary. Code alpaca: An instruction-following llama model for code generation, 2023." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 277, + 504, + 312 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 277, + 504, + 312 + ], + "spans": [ + { + "bbox": [ + 105, + 277, + 504, + 312 + ], + "type": "text", + "content": "Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 106, + 317, + 504, + 361 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 317, + 504, + 361 + ], + "spans": [ + { + "bbox": [ + 106, + 317, + 504, + 361 + ], + "type": "text", + "content": "Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1-113, 2023." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 368, + 504, + 402 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 368, + 504, + 402 + ], + "spans": [ + { + "bbox": [ + 105, + 368, + 504, + 402 + ], + "type": "text", + "content": "Alexandra Chronopoulou, Matthew E Peters, Alexander Fraser, and Jesse Dodge. *Adaptersoup: Weight averaging to improve generalization of pretrained language models.* arXiv preprint arXiv:2302.07027, 2023." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 407, + 504, + 441 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 407, + 504, + 441 + ], + "spans": [ + { + "bbox": [ + 105, + 407, + 504, + 441 + ], + "type": "text", + "content": "Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. Boolq: Exploring the surprising difficulty of natural yes/no questions. arXiv preprint arXiv:1905.10044, 2019." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 448, + 504, + 482 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 448, + 504, + 482 + ], + "spans": [ + { + "bbox": [ + 105, + 448, + 504, + 482 + ], + "type": "text", + "content": "Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. 
arXiv preprint arXiv:1803.05457, 2018." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 487, + 504, + 522 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 487, + 504, + 522 + ], + "spans": [ + { + "bbox": [ + 105, + 487, + 504, + 522 + ], + "type": "text", + "content": "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 527, + 504, + 562 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 527, + 504, + 562 + ], + "spans": [ + { + "bbox": [ + 105, + 527, + 504, + 562 + ], + "type": "text", + "content": "Ning Ding, Xingtai Lv, Qiaosen Wang, Yulin Chen, Bowen Zhou, Zhiyuan Liu, and Maosong Sun. Sparse low-rank adaptation of pre-trained language models. arXiv preprint arXiv:2311.11696, 2023." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 567, + 504, + 612 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 567, + 504, + 612 + ], + "spans": [ + { + "bbox": [ + 105, + 567, + 504, + 612 + ], + "type": "text", + "content": "Guanting Dong, Hongyi Yuan, Keming Lu, Chengpeng Li, Mingfeng Xue, Dayiheng Liu, Wei Wang, Zheng Yuan, Chang Zhou, and Jingren Zhou. How abilities in large language models are affected by supervised fine-tuning data composition. arXiv preprint arXiv:2310.05492, 2023." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 105, + 618, + 504, + 652 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 618, + 504, + 652 + ], + "spans": [ + { + "bbox": [ + 105, + 618, + 504, + 652 + ], + "type": "text", + "content": "Ziqi Gao, Qichao Wang, Aochuan Chen, Zijing Liu, Bingzhe Wu, Liang Chen, and Jia Li. Parameter-efficient fine-tuning with discrete fourier transform. 
arXiv preprint arXiv:2405.03003, 2024." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 105, + 658, + 504, + 693 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 658, + 504, + 693 + ], + "spans": [ + { + "bbox": [ + 105, + 658, + 504, + 693 + ], + "type": "text", + "content": "Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 105, + 698, + 504, + 732 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 698, + 504, + 732 + ], + "spans": [ + { + "bbox": [ + 105, + 698, + 504, + 732 + ], + "type": "text", + "content": "Han Guo, Philip Greengard, Eric P Xing, and Yoon Kim. Lq-lora: Low-rank plus quantized matrix decomposition for efficient language model finetuning. arXiv preprint arXiv:2311.12023, 2023."
+ } + ] + } + ], + "index": 17 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 505, + 733 + ], + "type": "list", + "angle": 0, + "index": 19, + "blocks": [ + { + "bbox": [ + 105, + 81, + 505, + 105 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 81, + 505, + 105 + ], + "spans": [ + { + "bbox": [ + 105, + 81, + 505, + 105 + ], + "type": "text", + "content": "Soufiane Hayou, Nikhil Ghosh, and Bin Yu. Lora+: Efficient low rank adaptation of large models. arXiv preprint arXiv:2402.12354, 2024." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 111, + 505, + 147 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 111, + 505, + 147 + ], + "spans": [ + { + "bbox": [ + 105, + 111, + 505, + 147 + ], + "type": "text", + "content": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026-1034, 2015." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 152, + 505, + 196 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 152, + 505, + 196 + ], + "spans": [ + { + "bbox": [ + 105, + 152, + 505, + 196 + ], + "type": "text", + "content": "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In International conference on machine learning, pp. 2790-2799. PMLR, 2019." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 204, + 505, + 239 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 204, + 505, + 239 + ], + "spans": [ + { + "bbox": [ + 105, + 204, + 505, + 239 + ], + "type": "text", + "content": "Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 244, + 505, + 280 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 244, + 505, + 280 + ], + "spans": [ + { + "bbox": [ + 105, + 244, + 505, + 280 + ], + "type": "text", + "content": "Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, and Roy Ka-Wei Lee. Llm-adapters: An adapter family for parameter-efficient fine-tuning of large language models. arXiv preprint arXiv:2304.01933, 2023." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 285, + 505, + 319 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 285, + 505, + 319 + ], + "spans": [ + { + "bbox": [ + 105, + 285, + 505, + 319 + ], + "type": "text", + "content": "Chengsong Huang, Qian Liu, Bill Yuchen Lin, Tianyu Pang, Chao Du, and Min Lin. Lorahub: Efficient cross-task generalization via dynamic lora composition.
arXiv preprint arXiv:2307.13269, 2023." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 326, + 505, + 360 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 326, + 505, + 360 + ], + "spans": [ + { + "bbox": [ + 105, + 326, + 505, + 360 + ], + "type": "text", + "content": "Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. Editing models with task arithmetic. arXiv preprint arXiv:2212.04089, 2022." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 367, + 505, + 391 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 367, + 505, + 391 + ], + "spans": [ + { + "bbox": [ + 105, + 367, + 505, + 391 + ], + "type": "text", + "content": "Leonardo Iurada, Marco Ciccone, and Tatiana Tommasi. Efficient model editing with task-localized sparse fine-tuning. arXiv preprint arXiv:2504.02620, 2025." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 396, + 505, + 431 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 396, + 505, + 431 + ], + "spans": [ + { + "bbox": [ + 105, + 396, + 505, + 431 + ], + "type": "text", + "content": "Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 437, + 505, + 461 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 437, + 505, + 461 + ], + "spans": [ + { + "bbox": [ + 105, + 437, + 505, + 461 + ], + "type": "text", + "content": "Damjan Kalajdzievski. A rank stabilization scaling factor for fine-tuning with lora. arXiv preprint arXiv:2312.03732, 2023." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 467, + 505, + 502 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 467, + 505, + 502 + ], + "spans": [ + { + "bbox": [ + 105, + 467, + 505, + 502 + ], + "type": "text", + "content": "Tatsuya Konishi, Mori Kurokawa, Chihiro Ono, Zixuan Ke, Gyuhak Kim, and Bing Liu. Parameter-level soft-masking for continual learning. In International Conference on Machine Learning, pp. 17492-17505. PMLR, 2023." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 507, + 505, + 532 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 507, + 505, + 532 + ], + "spans": [ + { + "bbox": [ + 105, + 507, + 505, + 532 + ], + "type": "text", + "content": "Dawid J Kopiczko, Tijmen Blankevoort, and Yuki M Asano. Vera: Vector-based random matrix adaptation. arXiv preprint arXiv:2310.11454, 2023." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 537, + 505, + 563 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 537, + 505, + 563 + ], + "spans": [ + { + "bbox": [ + 105, + 537, + 505, + 563 + ], + "type": "text", + "content": "Namhoon Lee, Thalaiyasingam Ajanthan, and Philip HS Torr. Snip: Single-shot network pruning based on connection sensitivity. arXiv preprint arXiv:1810.02340, 2018." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 567, + 505, + 592 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 567, + 505, + 592 + ], + "spans": [ + { + "bbox": [ + 105, + 567, + 505, + 592 + ], + "type": "text", + "content": "Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691, 2021." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 105, + 597, + 505, + 621 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 597, + 505, + 621 + ], + "spans": [ + { + "bbox": [ + 105, + 597, + 505, + 621 + ], + "type": "text", + "content": "Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. Advances in neural information processing systems, 31, 2018." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 105, + 628, + 505, + 651 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 628, + 505, + 651 + ], + "spans": [ + { + "bbox": [ + 105, + 628, + 505, + 651 + ], + "type": "text", + "content": "Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190, 2021." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 105, + 657, + 505, + 681 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 657, + 505, + 681 + ], + "spans": [ + { + "bbox": [ + 105, + 657, + 505, + 681 + ], + "type": "text", + "content": "Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12):2935-2947, 2017." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 105, + 686, + 505, + 733 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 686, + 505, + 733 + ], + "spans": [ + { + "bbox": [ + 105, + 686, + 505, + 733 + ], + "type": "text", + "content": "Zujie Liang, Feng Wei, Yin Jie, Yuxi Qian, Zhenghong Hao, and Bing Han. Prompts can play lottery tickets well: Achieving lifelong information extraction via lottery prompt tuning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 277-292, 2023." 
+ } + ] + } + ], + "index": 18 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 505, + 732 + ], + "type": "list", + "angle": 0, + "index": 17, + "blocks": [ + { + "bbox": [ + 107, + 81, + 505, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 81, + 505, + 117 + ], + "spans": [ + { + "bbox": [ + 107, + 81, + 505, + 117 + ], + "type": "text", + "content": "Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin A Raffel. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. Advances in Neural Information Processing Systems, 35:1950-1965, 2022." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 122, + 504, + 159 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 122, + 504, + 159 + ], + "spans": [ + { + "bbox": [ + 105, + 122, + 504, + 159 + ], + "type": "text", + "content": "Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, and Min-Hung Chen. Dora: Weight-decomposed low-rank adaptation. In *Forty-first International Conference on Machine Learning*, 2024." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 165, + 504, + 201 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 165, + 504, + 201 + ], + "spans": [ + { + "bbox": [ + 105, + 165, + 504, + 201 + ], + "type": "text", + "content": "Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. arXiv preprint arXiv:2110.07602, 2021." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 206, + 504, + 232 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 206, + 504, + 232 + ], + "spans": [ + { + "bbox": [ + 105, + 206, + 504, + 232 + ], + "type": "text", + "content": "David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. Advances in neural information processing systems, 30, 2017." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 237, + 504, + 272 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 237, + 504, + 272 + ], + "spans": [ + { + "bbox": [ + 105, + 237, + 504, + 272 + ], + "type": "text", + "content": "Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, and Yue Zhang. An empirical study of catastrophic forgetting in large language models during continual fine-tuning. arXiv preprint arXiv:2308.08747, 2023." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 279, + 504, + 315 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 279, + 504, + 315 + ], + "spans": [ + { + "bbox": [ + 105, + 279, + 504, + 315 + ], + "type": "text", + "content": "Arun Mallya and Svetlana Lazebnik. Packnet: Adding multiple tasks to a single network by iterative pruning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 7765-7773, 2018." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 321, + 504, + 346 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 321, + 504, + 346 + ], + "spans": [ + { + "bbox": [ + 105, + 321, + 504, + 346 + ], + "type": "text", + "content": "Michael S Matena and Colin A Raffel. Merging models with fisher-weighted averaging. Advances in Neural Information Processing Systems, 35:17703-17716, 2022." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 352, + 504, + 387 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 352, + 504, + 387 + ], + "spans": [ + { + "bbox": [ + 105, + 352, + 504, + 387 + ], + "type": "text", + "content": "Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In *Psychology of learning and motivation*, volume 24, pp. 109-165. Elsevier, 1989." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 394, + 504, + 429 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 394, + 504, + 429 + ], + "spans": [ + { + "bbox": [ + 105, + 394, + 504, + 429 + ], + "type": "text", + "content": "Fanxu Meng, Zhaohui Wang, and Muhan Zhang. Pissa: Principal singular values and singular vectors adaptation of large language models. Advances in Neural Information Processing Systems, 37:121038-121072, 2024." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 435, + 504, + 470 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 435, + 504, + 470 + ], + "spans": [ + { + "bbox": [ + 105, + 435, + 504, + 470 + ], + "type": "text", + "content": "Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. arXiv preprint arXiv:1809.02789, 2018." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 477, + 504, + 512 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 477, + 504, + 512 + ], + "spans": [ + { + "bbox": [ + 105, + 477, + 504, + 512 + ], + "type": "text", + "content": "Guillermo Ortiz-Jimenez, Alessandro Favero, and Pascal Frossard. Task arithmetic in the tangent space: Improved editing of pre-trained models. Advances in Neural Information Processing Systems, 36:66727-66754, 2023." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 519, + 504, + 565 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 519, + 504, + 565 + ], + "spans": [ + { + "bbox": [ + 105, + 519, + 504, + 565 + ], + "type": "text", + "content": "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744, 2022." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 572, + 504, + 607 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 572, + 504, + 607 + ], + "spans": [ + { + "bbox": [ + 105, + 572, + 504, + 607 + ], + "type": "text", + "content": "Ashwinee Panda, Berivan Isik, Xiangyu Qi, Sanmi Koyejo, Tsachy Weissman, and Prateek Mittal. Lottery ticket adaptation: Mitigating destructive interference in llms. arXiv preprint arXiv:2406.16797, 2024." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 613, + 504, + 648 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 613, + 504, + 648 + ], + "spans": [ + { + "bbox": [ + 105, + 613, + 504, + 648 + ], + "type": "text", + "content": "Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. Adapterfusion: Non-destructive task composition for transfer learning.
arXiv preprint arXiv:2005.00247, 2020." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 105, + 655, + 504, + 690 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 655, + 504, + 690 + ], + "spans": [ + { + "bbox": [ + 105, + 655, + 504, + 690 + ], + "type": "text", + "content": "Akshara Prabhakar, Yuanzhi Li, Karthik Narasimhan, Sham Kakade, Eran Malach, and Samy Jelassi. Lora soups: Merging loras for practical skill composition tasks. arXiv preprint arXiv:2410.13025, 2024." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 105, + 697, + 504, + 732 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 697, + 504, + 732 + ], + "spans": [ + { + "bbox": [ + 105, + 697, + 504, + 732 + ], + "type": "text", + "content": "Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson. Fine-tuning aligned language models compromises safety, even when users do not intend to! arXiv preprint arXiv:2310.03693, 2023." 
+ } + ] + } + ], + "index": 16 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 505, + 732 + ], + "type": "list", + "angle": 0, + "index": 18, + "blocks": [ + { + "bbox": [ + 107, + 81, + 505, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 81, + 505, + 116 + ], + "spans": [ + { + "bbox": [ + 107, + 81, + 505, + 116 + ], + "type": "text", + "content": "Vinay Venkatesh Ramasesh, Aitor Lewkowycz, and Ethan Dyer. Effect of scale on catastrophic forgetting in neural networks. In International Conference on Learning Representations, 2021." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 122, + 505, + 157 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 122, + 505, + 157 + ], + "spans": [ + { + "bbox": [ + 105, + 122, + 505, + 157 + ], + "type": "text", + "content": "David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy Lillicrap, and Gregory Wayne. Experience replay for continual learning. Advances in neural information processing systems, 32, 2019." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 163, + 505, + 198 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 163, + 505, + 198 + ], + "spans": [ + { + "bbox": [ + 105, + 163, + 505, + 198 + ], + "type": "text", + "content": "Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 205, + 505, + 239 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 205, + 505, + 239 + ], + "spans": [ + { + "bbox": [ + 105, + 205, + 505, + 239 + ], + "type": "text", + "content": "Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. Communications of the ACM, 64(9): 99-106, 2021." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 246, + 505, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 246, + 505, + 270 + ], + "spans": [ + { + "bbox": [ + 105, + 246, + 505, + 270 + ], + "type": "text", + "content": "Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. Socialiqa: Commonsense reasoning about social interactions. arXiv preprint arXiv:1904.09728, 2019." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 277, + 505, + 301 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 277, + 505, + 301 + ], + "spans": [ + { + "bbox": [ + 105, + 277, + 505, + 301 + ], + "type": "text", + "content": "Ozan Sener and Vladlen Koltun. Multi-task learning as multi-objective optimization. Advances in neural information processing systems, 31, 2018." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 307, + 505, + 331 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 307, + 505, + 331 + ], + "spans": [ + { + "bbox": [ + 105, + 307, + 505, + 331 + ], + "type": "text", + "content": "Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. Advances in neural information processing systems, 30, 2017." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 337, + 505, + 361 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 337, + 505, + 361 + ], + "spans": [ + { + "bbox": [ + 105, + 337, + 505, + 361 + ], + "type": "text", + "content": "George Stoica, Pratik Ramesh, Boglarka Ecsedi, Leshem Choshen, and Judy Hoffman. Model merging with svd to tie the knots. arXiv preprint arXiv:2410.19735, 2024." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 368, + 505, + 402 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 368, + 505, + 402 + ], + "spans": [ + { + "bbox": [ + 105, + 368, + 505, + 402 + ], + "type": "text", + "content": "Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Stanford alpaca: An instruction-following llama model, 2023." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 408, + 505, + 443 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 408, + 505, + 443 + ], + "spans": [ + { + "bbox": [ + 105, + 408, + 505, + 443 + ], + "type": "text", + "content": "Chunlin Tian, Zhan Shi, Zhijiang Guo, Li Li, and Cheng-Zhong Xu. Hydralora: An asymmetric lora architecture for efficient fine-tuning. Advances in Neural Information Processing Systems, 37:9565-9584, 2024." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 450, + 505, + 494 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 450, + 505, + 494 + ], + "spans": [ + { + "bbox": [ + 105, + 450, + 505, + 494 + ], + "type": "text", + "content": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 502, + 505, + 537 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 502, + 505, + 537 + ], + "spans": [ + { + "bbox": [ + 105, + 502, + 505, + 537 + ], + "type": "text", + "content": "Hanqing Wang, Bowen Ping, Shuo Wang, Xu Han, Yun Chen, Zhiyuan Liu, and Maosong Sun. Lora-flow: Dynamic lora fusion for large language models in generative tasks. arXiv preprint arXiv:2402.11455, 2024a." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 544, + 505, + 578 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 544, + 505, + 578 + ], + "spans": [ + { + "bbox": [ + 105, + 544, + 505, + 578 + ], + "type": "text", + "content": "Liyuan Wang, Xingxing Zhang, Hang Su, and Jun Zhu. A comprehensive survey of continual learning: theory, method and application. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024b." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 585, + 505, + 620 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 585, + 505, + 620 + ], + "spans": [ + { + "bbox": [ + 105, + 585, + 505, + 620 + ], + "type": "text", + "content": "Xiao Wang, Tianze Chen, Qiming Ge, Han Xia, Rong Bao, Rui Zheng, Qi Zhang, Tao Gui, and Xuanjing Huang. Orthogonal subspace learning for language model continual learning. arXiv preprint arXiv:2310.14152, 2023." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 105, + 626, + 505, + 661 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 626, + 505, + 661 + ], + "spans": [ + { + "bbox": [ + 105, + 626, + 505, + 661 + ], + "type": "text", + "content": "Tongtong Wu, Massimo Caccia, Zhuang Li, Yuan Fang Li, Guilin Qi, and Gholamreza Haffari. Pretrained language model in continual learning: A comparative study. In International Conference on Learning Representations 2022. OpenReview, 2022." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 105, + 667, + 505, + 691 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 667, + 505, + 691 + ], + "spans": [ + { + "bbox": [ + 105, + 667, + 505, + 691 + ], + "type": "text", + "content": "Xun Wu, Shaohan Huang, and Furu Wei. Mixture of lora experts. arXiv preprint arXiv:2404.13628, 2024." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 105, + 697, + 505, + 732 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 697, + 505, + 732 + ], + "spans": [ + { + "bbox": [ + 105, + 697, + 505, + 732 + ], + "type": "text", + "content": "Feng Xiong, Runxi Cheng, Wang Chen, Zhanqiu Zhang, Yiwen Guo, Chun Yuan, and Ruifeng Xu. Multi-task model merging via adaptive weight disentanglement. arXiv preprint arXiv:2411.18729, 2024." 
+ } + ] + } + ], + "index": 17 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 505, + 443 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 107, + 81, + 505, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 81, + 505, + 116 + ], + "spans": [ + { + "bbox": [ + 107, + 81, + 505, + 116 + ], + "type": "text", + "content": "Prateek Yadav, Derek Tam, Leshem Choshen, Colin A Raffel, and Mohit Bansal. Ties-merging: Resolving interference when merging models. Advances in Neural Information Processing Systems, 36:7093-7115, 2023." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 122, + 504, + 157 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 122, + 504, + 157 + ], + "spans": [ + { + "bbox": [ + 105, + 122, + 504, + 157 + ], + "type": "text", + "content": "Le Yu, Bowen Yu, Haiyang Yu, Fei Huang, and Yongbin Li. Language models are super mario: Absorbing abilities from homologous models as a free lunch. In *Forty-first International Conference on Machine Learning*, 2024." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 163, + 504, + 187 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 163, + 504, + 187 + ], + "spans": [ + { + "bbox": [ + 107, + 163, + 504, + 187 + ], + "type": "text", + "content": "Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 107, + 193, + 504, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 193, + 504, + 228 + ], + "spans": [ + { + "bbox": [ + 107, + 193, + 504, + 228 + ], + "type": "text", + "content": "Feiyu Zhang, Liangzhi Li, Junhao Chen, Zhouqiang Jiang, Bowen Wang, and Yiming Qian. Increlora: Incremental parameter allocation method for parameter-efficient fine-tuning. arXiv preprint arXiv:2308.12043, 2023a." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 107, + 234, + 504, + 267 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 234, + 504, + 267 + ], + "spans": [ + { + "bbox": [ + 107, + 234, + 504, + 267 + ], + "type": "text", + "content": "Longteng Zhang, Lin Zhang, Shaohuai Shi, Xiaowen Chu, and Bo Li. Lora-fa: Memory-efficient low-rank adaptation for large language models fine-tuning. arXiv preprint arXiv:2308.03303, 2023b." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 107, + 275, + 504, + 309 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 275, + 504, + 309 + ], + "spans": [ + { + "bbox": [ + 107, + 275, + 504, + 309 + ], + "type": "text", + "content": "Mingyang Zhang, Hao Chen, Chunhua Shen, Zhen Yang, Linlin Ou, Xinyi Yu, and Bohan Zhuang. Loraprune: Pruning meets low-rank parameter-efficient fine-tuning. arXiv preprint arXiv:2305.18403, 2023c." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 316, + 504, + 350 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 316, + 504, + 350 + ], + "spans": [ + { + "bbox": [ + 105, + 316, + 504, + 350 + ], + "type": "text", + "content": "Qingru Zhang, Minshuo Chen, Alexander Bukharin, Nikos Karampatziakis, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao. Adalora: Adaptive budget allocation for parameter-efficient fine-tuning. arXiv preprint arXiv:2303.10512, 2023d." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 357, + 504, + 390 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 357, + 504, + 390 + ], + "spans": [ + { + "bbox": [ + 107, + 357, + 504, + 390 + ], + "type": "text", + "content": "Hongyun Zhou, Xiangyu Lu, Wang Xu, Conghui Zhu, Tiejun Zhao, and Muyun Yang. Lora-drop: Efficient lora parameter pruning based on output evaluation. arXiv preprint arXiv:2402.07721, 2024." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 397, + 504, + 443 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 397, + 504, + 443 + ], + "spans": [ + { + "bbox": [ + 105, + 397, + 504, + 443 + ], + "type": "text", + "content": "Jiacheng Zhu, Kristjan Greenewald, Kimia Nadjahi, Haitz Sáez De Ocariz Borde, Rickard Brüel Gabrielsson, Leshem Choshen, Marzyeh Ghassemi, Mikhail Yurochkin, and Justin Solomon. Asymmetry in low-rank adapters of foundation models. arXiv preprint arXiv:2402.16842, 2024." 
+ } + ] + } + ], + "index": 9 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 80, + 209, + 93 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 80, + 209, + 93 + ], + "spans": [ + { + "bbox": [ + 105, + 80, + 209, + 93 + ], + "type": "text", + "content": "A Related Works" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 106, + 506, + 295 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 106, + 506, + 295 + ], + "spans": [ + { + "bbox": [ + 104, + 106, + 506, + 295 + ], + "type": "text", + "content": "Parameter-Efficient Fine-Tuning. Parameter-efficient fine-tuning (PEFT) methods for LLMs (Houlsby et al., 2019; Pfeiffer et al., 2020; Li & Liang, 2021; Lester et al., 2021; Liu et al., 2021; Hu et al., 2021) have received increasing attention in recent years. Among them, LoRA (Hu et al., 2021), which introduces trainable low-rank matrices, has become one of the most widely adopted PEFT methods due to its strong performance and efficiency. 
LoRI is motivated by reducing parameter redundancy in LoRA through an asymmetric design: we freeze the projection matrices " + }, + { + "bbox": [ + 104, + 106, + 506, + 295 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 104, + 106, + 506, + 295 + ], + "type": "text", + "content": " and enforce sparsity on the matrices " + }, + { + "bbox": [ + 104, + 106, + 506, + 295 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 106, + 506, + 295 + ], + "type": "text", + "content": ". Our work is closely related to several lines of research. In terms of parameter efficiency, our goal is shared by methods such as IA3 (Liu et al., 2022), VeRA (Kopiczko et al., 2023), and FourierFT (Gao et al., 2024). More specifically, our approach builds on the concept of asymmetric LoRA variants, which has been explored in works like LoRA-FA (Zhang et al., 2023b), AsymmetryLoRA (Zhu et al., 2024), and HydraLoRA (Tian et al., 2024). However, LoRI is distinct from these works by uniquely combining frozen " + }, + { + "bbox": [ + 104, + 106, + 506, + 295 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 104, + 106, + 506, + 295 + ], + "type": "text", + "content": " with sparsely updated " + }, + { + "bbox": [ + 104, + 106, + 506, + 295 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 106, + 506, + 295 + ], + "type": "text", + "content": ". This targeted, asymmetric pruning of only the " + }, + { + "bbox": [ + 104, + 106, + 506, + 295 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 106, + 506, + 295 + ], + "type": "text", + "content": " matrices also differentiates our method from general LoRA pruning techniques like Loraprune (Zhang et al., 2023c), LoRADrop (Zhou et al., 2024), and SoRA (Ding et al., 2023), as well as SVD-based approaches such as AdaLoRA (Zhang et al., 2023d) and PiSSA (Meng et al., 2024)." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 308, + 506, + 497 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 308, + 506, + 497 + ], + "spans": [ + { + "bbox": [ + 104, + 308, + 506, + 497 + ], + "type": "text", + "content": "Model Merging. Achieving multi-task capabilities typically involves training on a mixture of diverse task datasets (Caruana, 1997; Sener & Koltun, 2018), which is often prohibitively expensive in time and compute. As an alternative, model merging has gained attention for combining multiple task-specific models into a single model (Matena & Raffel, 2022; Ilharco et al., 2022; Yadav et al., 2023; Yu et al., 2024). Fisher Merging (Matena & Raffel, 2022) uses weights from the Fisher information matrix to combine parameters, while Task Arithmetic (Ilharco et al., 2022) employs predefined scaling factors. TIES-Merging (Yadav et al., 2023) prunes low-magnitude parameters and merges those with consistent signs, and DARE (Yu et al., 2024) applies random pruning with rescaling. However, identifying the optimal merging method often requires trial and error. More recently, there has been growing interest in merging task-specific LoRA adapters (Chronopoulou et al., 2023; Huang et al., 2023; Wu et al., 2024; Wang et al., 2024a; Prabhakar et al., 2024; Stoica et al., 2024), often utilizing Mixture-of-Experts (MoE) architectures. Nonetheless, these methods typically require additional training to coordinate the adapters effectively. In contrast, LoRI eliminates the need for manual selection of merging methods or additional training. By ensuring approximate orthogonality between adapters, LoRI minimizes interference and preserves task-specific performance." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 510, + 506, + 676 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 510, + 506, + 676 + ], + "spans": [ + { + "bbox": [ + 104, + 510, + 506, + 676 + ], + "type": "text", + "content": "Catastrophic Forgetting. Catastrophic forgetting is a fundamental challenge in continual learning (McCloskey & Cohen, 1989; Ramasesh et al., 2021; Liang et al., 2023; Wang et al., 2024b), where neural networks struggle to retain previously learned knowledge when adapting to new tasks. Wu et al. (2022) analyzed this phenomenon using layer-wise and task-wise probing to assess knowledge retention across tasks. Several studies (Dong et al., 2023; Luo et al., 2023) have empirically examined catastrophic forgetting in the continual fine-tuning of LLMs. To mitigate catastrophic forgetting, various approaches have been proposed. Rehearsal-based methods (Rolnick et al., 2019; Shin et al., 2017) store or generate past data to reinforce prior knowledge during training. Parameter isolation methods (Rusu et al., 2016; Mallya & Lazebnik, 2018; Konishi et al., 2023; Panda et al., 2024) allocate separate subnetworks or sparsely mask parameters for different tasks to prevent interference. Additionally, O-LoRA (Wang et al., 2023) learns tasks in distinct low-rank subspaces while ensuring orthogonality between them. LoRI falls under the category of parameter isolation methods, leveraging sparse task-specific masks to mitigate catastrophic forgetting during continual learning." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 693, + 230, + 708 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 693, + 230, + 708 + ], + "spans": [ + { + "bbox": [ + 105, + 693, + 230, + 708 + ], + "type": "text", + "content": "B Algorithm of LoRI" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 719, + 362, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 719, + 362, + 733 + ], + "spans": [ + { + "bbox": [ + 105, + 719, + 362, + 733 + ], + "type": "text", + "content": "The full procedure of LoRI is summarized in Algorithm 1." + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "type": "code", + "bbox": [ + 106, + 111, + 505, + 404 + ], + "blocks": [ + { + "bbox": [ + 106, + 94, + 345, + 107 + ], + "lines": [ + { + "bbox": [ + 106, + 94, + 345, + 107 + ], + "spans": [ + { + "bbox": [ + 106, + 94, + 345, + 107 + ], + "type": "text", + "content": "Algorithm 1: LoRA with Reduced Interference (LoRI)" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "code_caption" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "lines": [ + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "spans": [ + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": "Require: Task " + }, + { + "bbox": 
[ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " , mask calibration dataset " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_t^C" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " , adaptation dataset " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_t" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " , sparsity ratio " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " , model " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " loss function " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_t" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " , learning rate " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "\\eta_t" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " \n1: for each layer " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "l = 1,\\ldots ,L" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " do \n2: for each projection " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "m = 1,\\dots ,M" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " do \n3: Initialize: " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + 
"content": "A_{t}^{(l,m)}\\in \\mathbb{R}^{d_{\\mathrm{in}}\\times r}\\leftarrow \\mathcal{U}(-\\sqrt{\\frac{3}{d_{\\mathrm{in}}}},\\sqrt{\\frac{3}{d_{\\mathrm{in}}}}),B_{t}^{(l,m)}\\in \\mathbb{R}^{r\\times d_{\\mathrm{out}}}\\leftarrow 0" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " \n4: end for \n5: end for \n6: for each batch " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "(x,y)" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " sampled from " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_t^C" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " do ▷ Calibration steps \n7: for each " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "(l,m)" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " do \n8: " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "B_{t}^{(l,m)}\\gets B_{t}^{(l,m)} - \\eta_{t}\\cdot \\nabla_{B_{t}^{(l,m)}}\\mathcal{L}_{t}(f(x,y;B_{t}^{(l,m)}))" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " \n9: end for \n10: end for \n11: " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "\\tau_t\\gets \\mathrm{Quantile}_s\\left(\\bigcup_{l,m}|B_t^{(l,m)}|\\right)" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " ▷ Compute global threshold " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "\\tau_t" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " \n12: for each " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "(l,m)" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + 
], + "type": "text", + "content": " do \n13: " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "M_t^{(l,m)}\\gets \\mathbb{I}\\left(|B_t^{(l,m)}|\\geq \\tau_t\\right)" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " ▷ Generate mask for top- " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "(1 - s)\\%" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " entries \n14: " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "B_{t}^{(l,m)}\\gets 0" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " ▷ Reset to zero before adaptation \n15: end for \n16: for each batch " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "(x,y)" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " sampled from " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_t" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " do ▷ Adaptation steps \n17: for each " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "(l,m)" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " do \n18: " + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "inline_equation", + "content": "B_{t}^{(l,m)}\\gets B_{t}^{(l,m)} - \\eta_{t}\\cdot \\left(\\nabla_{B_{t}^{(l,m)}}\\mathcal{L}_{t}(f(x,y;B_{t}^{(l,m)}))\\odot M_{t}^{(l,m)}\\right)" + }, + { + "bbox": [ + 106, + 111, + 505, + 404 + ], + "type": "text", + "content": " \n19: end for \n20: end for" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "code_body" + } + ], + "index": 2, + "sub_type": "algorithm" + }, + { + "bbox": [ + 105, + 425, + 231, + 440 + ], + "type": 
"title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 425, + 231, + 440 + ], + "spans": [ + { + "bbox": [ + 105, + 425, + 231, + 440 + ], + "type": "text", + "content": "C Proof of Property 1" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 451, + 504, + 477 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 451, + 504, + 477 + ], + "spans": [ + { + "bbox": [ + 104, + 451, + 504, + 477 + ], + "type": "text", + "content": "Proof. Our goal is to show that the Frobenius inner product " + }, + { + "bbox": [ + 104, + 451, + 504, + 477 + ], + "type": "inline_equation", + "content": "\\langle \\Delta_s, \\Delta_t \\rangle_F" + }, + { + "bbox": [ + 104, + 451, + 504, + 477 + ], + "type": "text", + "content": " converges to zero in probability. Let " + }, + { + "bbox": [ + 104, + 451, + 504, + 477 + ], + "type": "inline_equation", + "content": "\\tilde{B}_s = B_s \\odot M_s" + }, + { + "bbox": [ + 104, + 451, + 504, + 477 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 451, + 504, + 477 + ], + "type": "inline_equation", + "content": "\\tilde{B}_t = B_t \\odot M_t" + }, + { + "bbox": [ + 104, + 451, + 504, + 477 + ], + "type": "text", + "content": ". The inner product is given by:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 214, + 479, + 504, + 496 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 214, + 479, + 504, + 496 + ], + "spans": [ + { + "bbox": [ + 214, + 479, + 504, + 496 + ], + "type": "interline_equation", + "content": "\\left\\langle \\Delta_ {s}, \\Delta_ {t} \\right\\rangle_ {F} = \\operatorname {T r} \\left(\\Delta_ {s} ^ {\\top} \\Delta_ {t}\\right) = \\operatorname {T r} \\left(\\tilde {B} _ {s} ^ {\\top} A _ {s} ^ {\\top} A _ {t} \\tilde {B} _ {t}\\right). 
\\tag {9}", + "image_path": "6a2f5092ef54892e155c66571e928c628d5ab79d36be9f1b478d7394f8a4f45a.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 500, + 504, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 500, + 504, + 525 + ], + "spans": [ + { + "bbox": [ + 104, + 500, + 504, + 525 + ], + "type": "text", + "content": "We will prove this by showing that the random matrix " + }, + { + "bbox": [ + 104, + 500, + 504, + 525 + ], + "type": "inline_equation", + "content": "X = A_{s}^{\\top}A_{t}" + }, + { + "bbox": [ + 104, + 500, + 504, + 525 + ], + "type": "text", + "content": " converges to the zero matrix in probability as " + }, + { + "bbox": [ + 104, + 500, + 504, + 525 + ], + "type": "inline_equation", + "content": "d_{\\mathrm{in}} \\to \\infty" + }, + { + "bbox": [ + 104, + 500, + 504, + 525 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "spans": [ + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "text", + "content": "Let " + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "inline_equation", + "content": "a_{s}^{k}, a_{t}^{l} \\in \\mathbb{R}^{d_{\\mathrm{in}}}" + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "text", + "content": " be the " + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "text", + "content": "-th and " + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "inline_equation", + "content": "l" + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "text", + "content": "-th columns of " + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "inline_equation", + "content": "A_{s}" + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "text", + "content": 
" and " + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "inline_equation", + "content": "A_{t}" + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "text", + "content": ", respectively. The entries of these vectors are i.i.d. from a Kaiming Uniform distribution " + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "inline_equation", + "content": "U[-a, a]" + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "inline_equation", + "content": "a = \\sqrt{3 / d_{\\mathrm{in}}}" + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "text", + "content": ". This implies a mean of 0 and variance of " + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "inline_equation", + "content": "\\sigma^2 = a^2 / 3 = 1 / d_{\\mathrm{in}}" + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "text", + "content": ". An entry of " + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "text", + "content": " is the inner product " + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "inline_equation", + "content": "X_{kl} = (a_{s}^{k})^{\\top} a_{t}^{l} = \\sum_{i=1}^{d_{\\mathrm{in}}} (A_{s})_{ik} (A_{t})_{il}" + }, + { + "bbox": [ + 104, + 529, + 505, + 584 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 588, + 505, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 588, + 505, + 628 + ], + "spans": [ + { + "bbox": [ + 104, + 588, + 505, + 628 + ], + "type": "text", + "content": "Let " + }, + { + "bbox": [ + 104, + 588, + 505, + 628 + ], + "type": "inline_equation", + "content": "Z_{i} = (A_{s})_{ik}(A_{t})_{il}" + }, + { + "bbox": [ + 104, + 588, + 505, + 628 + ], + "type": "text", + "content": ". 
The terms " + }, + { + "bbox": [ + 104, + 588, + 505, + 628 + ], + "type": "inline_equation", + "content": "Z_{i}" + }, + { + "bbox": [ + 104, + 588, + 505, + 628 + ], + "type": "text", + "content": " are i.i.d. with " + }, + { + "bbox": [ + 104, + 588, + 505, + 628 + ], + "type": "inline_equation", + "content": "\\mathbb{E}[Z_i] = \\mathbb{E}[(A_s)_{ik}]\\mathbb{E}[(A_t)_{il}] = 0" + }, + { + "bbox": [ + 104, + 588, + 505, + 628 + ], + "type": "text", + "content": ". Each term is bounded: " + }, + { + "bbox": [ + 104, + 588, + 505, + 628 + ], + "type": "inline_equation", + "content": "|Z_{i}| \\leq a^{2} = 3 / d_{\\mathrm{in}}" + }, + { + "bbox": [ + 104, + 588, + 505, + 628 + ], + "type": "text", + "content": ". We apply Hoeffding's inequality to the sum " + }, + { + "bbox": [ + 104, + 588, + 505, + 628 + ], + "type": "inline_equation", + "content": "\\sum_{i=1}^{d_{\\mathrm{in}}} Z_{i}" + }, + { + "bbox": [ + 104, + 588, + 505, + 628 + ], + "type": "text", + "content": ", where each term lies in " + }, + { + "bbox": [ + 104, + 588, + 505, + 628 + ], + "type": "inline_equation", + "content": "[-3 / d_{\\mathrm{in}}, 3 / d_{\\mathrm{in}}]" + }, + { + "bbox": [ + 104, + 588, + 505, + 628 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 122, + 632, + 505, + 666 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 122, + 632, + 505, + 666 + ], + "spans": [ + { + "bbox": [ + 122, + 632, + 505, + 666 + ], + "type": "interline_equation", + "content": "\\mathbb {P} \\left(\\left| X _ {k l} \\right| \\geq t\\right) = \\mathbb {P} \\left(\\left| \\sum_ {i = 1} ^ {d _ {\\mathrm {i n}}} Z _ {i} \\right| \\geq t\\right) \\leq 2 \\exp \\left(\\frac {- 2 t ^ {2}}{\\sum_ {i = 1} ^ {d _ {\\mathrm {i n}}} (6 / d _ {\\mathrm {i n}}) ^ {2}}\\right) = 2 \\exp \\left(\\frac {- t ^ {2} d _ {\\mathrm {i n}}}{1 8}\\right). 
\\tag {10}", + "image_path": "bdd09b57284eea17153f4ed2273ef3fe864860dc04f9683f92fb0b576fccca3a.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 675, + 504, + 699 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 675, + 504, + 699 + ], + "spans": [ + { + "bbox": [ + 104, + 675, + 504, + 699 + ], + "type": "text", + "content": "We now bound the probability that any of the " + }, + { + "bbox": [ + 104, + 675, + 504, + 699 + ], + "type": "inline_equation", + "content": "r^2" + }, + { + "bbox": [ + 104, + 675, + 504, + 699 + ], + "type": "text", + "content": " entries of " + }, + { + "bbox": [ + 104, + 675, + 504, + 699 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 104, + 675, + 504, + 699 + ], + "type": "text", + "content": " exceeds a threshold " + }, + { + "bbox": [ + 104, + 675, + 504, + 699 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 104, + 675, + 504, + 699 + ], + "type": "text", + "content": " using the union bound:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 111, + 701, + 505, + 735 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 701, + 505, + 735 + ], + "spans": [ + { + "bbox": [ + 111, + 701, + 505, + 735 + ], + "type": "interline_equation", + "content": "\\mathbb {P} \\left(\\max _ {k, l} | X _ {k l} | \\geq t\\right) = \\mathbb {P} \\left(\\bigcup_ {k, l = 1} ^ {r} \\{| X _ {k l} | \\geq t \\}\\right) \\leq \\sum_ {k, l = 1} ^ {r} \\mathbb {P} \\left(| X _ {k l} | \\geq t\\right) \\leq 2 r ^ {2} \\exp \\left(\\frac {- t ^ {2} d _ {\\mathrm {i n}}}{1 8}\\right). 
\\tag {11}", + "image_path": "c7781c2ddb543e490b58ea53ffcfbe9d05d098b5eb71dd9d376b6fa459d7b8bf.jpg" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 108, + 100, + 504, + 230 + ], + "blocks": [ + { + "bbox": [ + 170, + 79, + 440, + 92 + ], + "lines": [ + { + "bbox": [ + 170, + 79, + 440, + 92 + ], + "spans": [ + { + "bbox": [ + 170, + 79, + 440, + 92 + ], + "type": "text", + "content": "Table 4: Hyperparameter settings for LoRI on NLU datasets." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 108, + 100, + 504, + 230 + ], + "lines": [ + { + "bbox": [ + 108, + 100, + 504, + 230 + ], + "spans": [ + { + "bbox": [ + 108, + 100, + 504, + 230 + ], + "type": "table", + "html": "
MethodLoRI-DLoRI-SLoRI-DLoRI-SLoRI-DLoRI-SLoRI-DLoRI-S
Base ModelLlama-3Llama-3Llama-3Llama-3MistralMistralMistralMistral
Rank r3232646432326464
α64641281286464128128
Sparsity Ratio00.900.900.900.9
Learning Rate5e-55e-45e-51e-41e-51e-41e-51e-4
Dropout0.05
OptimizerAdamW
Batch size32
Warmup Steps0
Epochs1
Whereq, k, v, o, gate, up, down
", + "image_path": "64c8ddd644dd9eebd26a8802b40d9d415be03562dcaf162028b63887cd978290.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 252, + 504, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 252, + 504, + 277 + ], + "spans": [ + { + "bbox": [ + 104, + 252, + 504, + 277 + ], + "type": "text", + "content": "We can now show that " + }, + { + "bbox": [ + 104, + 252, + 504, + 277 + ], + "type": "inline_equation", + "content": "\\| X \\|_F" + }, + { + "bbox": [ + 104, + 252, + 504, + 277 + ], + "type": "text", + "content": " is small with high probability. Let the failure probability be " + }, + { + "bbox": [ + 104, + 252, + 504, + 277 + ], + "type": "inline_equation", + "content": "\\delta" + }, + { + "bbox": [ + 104, + 252, + 504, + 277 + ], + "type": "text", + "content": ". By setting the bound from the previous step to " + }, + { + "bbox": [ + 104, + 252, + 504, + 277 + ], + "type": "inline_equation", + "content": "\\delta" + }, + { + "bbox": [ + 104, + 252, + 504, + 277 + ], + "type": "text", + "content": ", we can solve for " + }, + { + "bbox": [ + 104, + 252, + 504, + 277 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 104, + 252, + 504, + 277 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 194, + 287, + 505, + 319 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 287, + 505, + 319 + ], + "spans": [ + { + "bbox": [ + 194, + 287, + 505, + 319 + ], + "type": "interline_equation", + "content": "\\delta = 2 r ^ {2} \\exp \\left(\\frac {- t ^ {2} d _ {\\mathrm {i n}}}{1 8}\\right) \\Longrightarrow t = \\sqrt {\\frac {1 8 \\log \\left(2 r ^ {2} / \\delta\\right)}{d _ {\\mathrm {i n}}}}. 
\\tag {12}", + "image_path": "f263117ba513fd9176a815e32e7332251515c8cccf9b4d1dd203f5e174f6ace9.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 328, + 506, + 352 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 328, + 506, + 352 + ], + "spans": [ + { + "bbox": [ + 104, + 328, + 506, + 352 + ], + "type": "text", + "content": "With probability at least " + }, + { + "bbox": [ + 104, + 328, + 506, + 352 + ], + "type": "inline_equation", + "content": "1 - \\delta" + }, + { + "bbox": [ + 104, + 328, + 506, + 352 + ], + "type": "text", + "content": ", we have " + }, + { + "bbox": [ + 104, + 328, + 506, + 352 + ], + "type": "inline_equation", + "content": "\\max_{k,l} |X_{kl}| \\leq t" + }, + { + "bbox": [ + 104, + 328, + 506, + 352 + ], + "type": "text", + "content": ". This allows us to bound the Frobenius norm of " + }, + { + "bbox": [ + 104, + 328, + 506, + 352 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 104, + 328, + 506, + 352 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 208, + 361, + 505, + 389 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 208, + 361, + 505, + 389 + ], + "spans": [ + { + "bbox": [ + 208, + 361, + 505, + 389 + ], + "type": "interline_equation", + "content": "\\left\\| X \\right\\| _ {F} ^ {2} = \\sum_ {k, l = 1} ^ {r} \\left| X _ {k l} \\right| ^ {2} \\leq r ^ {2} \\left(\\max _ {k, l} \\left| X _ {k l} \\right|\\right) ^ {2} \\leq r ^ {2} t ^ {2}. 
\\tag {13}", + "image_path": "887e55d4ad89163bde79c57cc159c33e8bdf357b9e42a7aaa489f09393a10f65.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 399, + 268, + 412 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 399, + 268, + 412 + ], + "spans": [ + { + "bbox": [ + 105, + 399, + 268, + 412 + ], + "type": "text", + "content": "Thus, with probability at least " + }, + { + "bbox": [ + 105, + 399, + 268, + 412 + ], + "type": "inline_equation", + "content": "1 - \\delta" + }, + { + "bbox": [ + 105, + 399, + 268, + 412 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 192, + 422, + 505, + 456 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 192, + 422, + 505, + 456 + ], + "spans": [ + { + "bbox": [ + 192, + 422, + 505, + 456 + ], + "type": "interline_equation", + "content": "\\| X \\| _ {F} \\leq r \\cdot t = r \\sqrt {\\frac {1 8 \\log (2 r ^ {2} / \\delta)}{d _ {\\mathrm {i n}}}} = O \\left(r \\sqrt {\\frac {\\log r}{d _ {\\mathrm {i n}}}}\\right). 
\\tag {14}", + "image_path": "4d9dad02b32518b5385892b3fb0072e07e405fa5096930d1c3a9e394ad57b403.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 466, + 504, + 490 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 466, + 504, + 490 + ], + "spans": [ + { + "bbox": [ + 104, + 466, + 504, + 490 + ], + "type": "text", + "content": "Since " + }, + { + "bbox": [ + 104, + 466, + 504, + 490 + ], + "type": "inline_equation", + "content": "r \\ll d_{\\mathrm{in}}" + }, + { + "bbox": [ + 104, + 466, + 504, + 490 + ], + "type": "text", + "content": ", the term " + }, + { + "bbox": [ + 104, + 466, + 504, + 490 + ], + "type": "inline_equation", + "content": "\\| X \\|_F \\to 0" + }, + { + "bbox": [ + 104, + 466, + 504, + 490 + ], + "type": "text", + "content": " as " + }, + { + "bbox": [ + 104, + 466, + 504, + 490 + ], + "type": "inline_equation", + "content": "d_{\\mathrm{in}} \\to \\infty" + }, + { + "bbox": [ + 104, + 466, + 504, + 490 + ], + "type": "text", + "content": ". This shows that " + }, + { + "bbox": [ + 104, + 466, + 504, + 490 + ], + "type": "inline_equation", + "content": "X" + }, + { + "bbox": [ + 104, + 466, + 504, + 490 + ], + "type": "text", + "content": " converges to the zero matrix in probability." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 494, + 505, + 527 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 494, + 505, + 527 + ], + "spans": [ + { + "bbox": [ + 104, + 494, + 505, + 527 + ], + "type": "text", + "content": "Finally, we bound the magnitude of the original inner product using the Cauchy-Schwarz inequality for the Frobenius inner product and the sub-multiplicative property of the Frobenius norm:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 212, + 531, + 504, + 578 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 212, + 531, + 504, + 578 + ], + "spans": [ + { + "bbox": [ + 212, + 531, + 504, + 578 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\left| \\left\\langle \\Delta_ {s}, \\Delta_ {t} \\right\\rangle_ {F} \\right| = \\left| \\operatorname {T r} \\left(\\tilde {B} _ {s} ^ {\\top} X \\tilde {B} _ {t}\\right) \\right| = \\left| \\left\\langle \\tilde {B} _ {s}, X \\tilde {B} _ {t} \\right\\rangle_ {F} \\right| \\\\ \\leq \\left\\| \\tilde {B} _ {s} \\right\\| _ {F} \\| X \\tilde {B} _ {t} \\| _ {F} \\tag {15} \\\\ \\leq \\| \\tilde {B} _ {s} \\| _ {F} \\| X \\| _ {F} \\| \\tilde {B} _ {t} \\| _ {F}. 
\\\\ \\end{array}", + "image_path": "7047bc959650d5b78d0f91b4475498ec6d1bd0f2863f445c3db2b4aff1f6e6ad.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 587, + 506, + 622 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 587, + 506, + 622 + ], + "spans": [ + { + "bbox": [ + 104, + 587, + 506, + 622 + ], + "type": "text", + "content": "The norms " + }, + { + "bbox": [ + 104, + 587, + 506, + 622 + ], + "type": "inline_equation", + "content": "\\| \\tilde{B}_s\\| _F" + }, + { + "bbox": [ + 104, + 587, + 506, + 622 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 587, + 506, + 622 + ], + "type": "inline_equation", + "content": "\\| \\tilde{B}_t\\| _F" + }, + { + "bbox": [ + 104, + 587, + 506, + 622 + ], + "type": "text", + "content": " are finite, as determined by the trained adapters. Since we have shown that " + }, + { + "bbox": [ + 104, + 587, + 506, + 622 + ], + "type": "inline_equation", + "content": "\\| X\\| _F\\to 0" + }, + { + "bbox": [ + 104, + 587, + 506, + 622 + ], + "type": "text", + "content": " in probability, the entire expression must also converge to 0 in probability." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 642, + 268, + 657 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 642, + 268, + 657 + ], + "spans": [ + { + "bbox": [ + 105, + 642, + 268, + 657 + ], + "type": "text", + "content": "D Hyperparameter Settings" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 670, + 504, + 704 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 670, + 504, + 704 + ], + "spans": [ + { + "bbox": [ + 104, + 670, + 504, + 704 + ], + "type": "text", + "content": "We summarize the hyperparameter settings used for LoRI in Tables 4, 5, 6, and 7. These include settings for different tasks (NLU, math, code, safety), adapter variants (LoRI-D, LoRI-S), base models (Llama-3-8B and Mistral-7B), and ranks (32 and 64)." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "text", + "content": "For the merging experiments, the hyperparameter settings for merging four adapters are provided in Tables 8 and 9, while those for merging three adapters are provided in Table 10." + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "18" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 108, + 102, + 504, + 232 + ], + "blocks": [ + { + "bbox": [ + 146, + 82, + 464, + 95 + ], + "lines": [ + { + "bbox": [ + 146, + 82, + 464, + 95 + ], + "spans": [ + { + "bbox": [ + 146, + 82, + 464, + 95 + ], + "type": "text", + "content": "Table 5: Hyperparameter settings for LoRI on the math dataset GSM8K." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 108, + 102, + 504, + 232 + ], + "lines": [ + { + "bbox": [ + 108, + 102, + 504, + 232 + ], + "spans": [ + { + "bbox": [ + 108, + 102, + 504, + 232 + ], + "type": "table", + "html": "
MethodLoRI-DLoRI-SLoRI-DLoRI-SLoRI-DLoRI-SLoRI-DLoRI-S
Base ModelLlama-3Llama-3Llama-3Llama-3MistralMistralMistralMistral
Rank r3232646432326464
α646412812864643264
Sparsity Ratio00.900.900.900.9
Learning Rate5e-55e-45e-51e-35e-55e-41e-45e-4
Dropout0.05
OptimizerAdamW
Batch size32
Warmup Steps0
Epochs3
Whereq, k, v, o, gate, up, down
", + "image_path": "019a56ebb137460d7b3baa0a71dcef549140a94813ddb15f6ec50420b41375a0.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 108, + 263, + 504, + 393 + ], + "blocks": [ + { + "bbox": [ + 136, + 243, + 473, + 256 + ], + "lines": [ + { + "bbox": [ + 136, + 243, + 473, + 256 + ], + "spans": [ + { + "bbox": [ + 136, + 243, + 473, + 256 + ], + "type": "text", + "content": "Table 6: Hyperparameter settings for LoRI on the code dataset CodeAlpaca." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 108, + 263, + 504, + 393 + ], + "lines": [ + { + "bbox": [ + 108, + 263, + 504, + 393 + ], + "spans": [ + { + "bbox": [ + 108, + 263, + 504, + 393 + ], + "type": "table", + "html": "
MethodLoRI-DLoRI-SLoRI-DLoRI-SLoRI-DLoRI-SLoRI-DLoRI-S
Base ModelLlama-3Llama-3Llama-3Llama-3MistralMistralMistralMistral
Rank r3232646432326464
α64641281286464128128
Sparsity Ratio00.900.900.900.9
Learning Rate5e-55e-41e-51e-45e-55e-41e-51e-4
Dropout0.05
OptimizerAdamW
Batch size32
Warmup Steps0
Epochs2
Whereq, k, v, o, gate, up, down
", + "image_path": "83373ae876f24c07be676bc904237ed38a4c1ac6f91be338af486c4a228dd6ab.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 108, + 424, + 504, + 553 + ], + "blocks": [ + { + "bbox": [ + 140, + 403, + 469, + 417 + ], + "lines": [ + { + "bbox": [ + 140, + 403, + 469, + 417 + ], + "spans": [ + { + "bbox": [ + 140, + 403, + 469, + 417 + ], + "type": "text", + "content": "Table 7: Hyperparameter settings for LoRI on the safety dataset Saferpaca." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 108, + 424, + 504, + 553 + ], + "lines": [ + { + "bbox": [ + 108, + 424, + 504, + 553 + ], + "spans": [ + { + "bbox": [ + 108, + 424, + 504, + 553 + ], + "type": "table", + "html": "
MethodLoRI-DLoRI-SLoRI-DLoRI-SLoRI-DLoRI-SLoRI-DLoRI-S
Base ModelLlama-3Llama-3Llama-3Llama-3MistralMistralMistralMistral
Rank r3232646432326464
α64641281286464128128
Sparsity Ratio00.900.900.900.9
Learning Rate5e-55e-41e-51e-45e-55e-41e-51e-4
Dropout0.05
OptimizerAdamW
Batch size32
Warmup Steps0
Epochs1
Whereq, k, v, o, gate, up, down
", + "image_path": "f880602a047217b3862f3cabe79e6da7bcf3dc974df10a60d32fcc512581142f.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 108, + 584, + 504, + 639 + ], + "blocks": [ + { + "bbox": [ + 129, + 564, + 479, + 578 + ], + "lines": [ + { + "bbox": [ + 129, + 564, + 479, + 578 + ], + "spans": [ + { + "bbox": [ + 129, + 564, + 479, + 578 + ], + "type": "text", + "content": "Table 8: Hyperparameter settings for merging four adapters using Llama-3-8B." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 108, + 584, + 504, + 639 + ], + "lines": [ + { + "bbox": [ + 108, + 584, + 504, + 639 + ], + "spans": [ + { + "bbox": [ + 108, + 584, + 504, + 639 + ], + "type": "table", + "html": "
Adaptation MergingLoRA ConcatLoRA LinearLoRA MagnitudeLoRA TIESLoRA DARELoRI-D ConcatLoRI-D LinearLoRI-S ConcatLoRI-S Linear
Base ModelLlama-3Llama-3Llama-3Llama-3Llama-3Llama-3Llama-3Llama-3Llama-3
Weights0.40.40.40.40.40.40.40.30.3
Density--0.30.70.7----
", + "image_path": "c7ed5a53c1b7e2f2aed88b13b5470bca2d55f38fd8dc214c3eb9192c77c5cf11.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_body" + } + ], + "index": 8 + }, + { + "type": "table", + "bbox": [ + 108, + 670, + 504, + 727 + ], + "blocks": [ + { + "bbox": [ + 132, + 649, + 477, + 662 + ], + "lines": [ + { + "bbox": [ + 132, + 649, + 477, + 662 + ], + "spans": [ + { + "bbox": [ + 132, + 649, + 477, + 662 + ], + "type": "text", + "content": "Table 9: Hyperparameter settings for merging four adapters using Mistral-7B." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 108, + 670, + 504, + 727 + ], + "lines": [ + { + "bbox": [ + 108, + 670, + 504, + 727 + ], + "spans": [ + { + "bbox": [ + 108, + 670, + 504, + 727 + ], + "type": "table", + "html": "
Adaptation MergingLoRA ConcatLoRA LinearLoRA MagnitudeLoRA TIESLoRA DARELoRI-D ConcatLoRI-D LinearLoRI-S ConcatLoRI-S Linear
Base ModelMistralMistralMistralMistralMistralMistralMistralMistralMistral
Weights0.40.40.40.40.40.40.40.30.3
Density--0.30.70.7----
", + "image_path": "bcecb925b480f17b7a5d22c03c23ec8dfce0886aed9b4aa7b0d70110ed4695d0.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "table_body" + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "type": "text", + "content": "19" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 18 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 108, + 100, + 504, + 155 + ], + "blocks": [ + { + "bbox": [ + 126, + 79, + 484, + 92 + ], + "lines": [ + { + "bbox": [ + 126, + 79, + 484, + 92 + ], + "spans": [ + { + "bbox": [ + 126, + 79, + 484, + 92 + ], + "type": "text", + "content": "Table 10: Hyperparameter settings for merging three adapters using Llama-3-8B." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 108, + 100, + 504, + 155 + ], + "lines": [ + { + "bbox": [ + 108, + 100, + 504, + 155 + ], + "spans": [ + { + "bbox": [ + 108, + 100, + 504, + 155 + ], + "type": "table", + "html": "
Adaptation MergingLoRA ConcatLoRA LinearLoRA MagnitudeLoRA TIESLoRA DARELoRI-D ConcatLoRI-D LinearLoRI-S ConcatLoRI-S Linear
Base ModelLlama-3Llama-3Llama-3Llama-3Llama-3Llama-3Llama-3Llama-3Llama-3
Weights0.50.50.50.50.50.50.50.40.4
Density--0.30.70.7----
", + "image_path": "8e018c516640803a315887d386a51f0ed1a9aa1e20c0fafea96beb17d736aeb0.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 108, + 216, + 504, + 339 + ], + "blocks": [ + { + "bbox": [ + 104, + 172, + 504, + 206 + ], + "lines": [ + { + "bbox": [ + 104, + 172, + 504, + 206 + ], + "spans": [ + { + "bbox": [ + 104, + 172, + 504, + 206 + ], + "type": "text", + "content": "Table 11: Performance comparison of different adaptation methods on eight NLU benchmarks using Llama-3 with " + }, + { + "bbox": [ + 104, + 172, + 504, + 206 + ], + "type": "inline_equation", + "content": "r = 32" + }, + { + "bbox": [ + 104, + 172, + 504, + 206 + ], + "type": "text", + "content": ". **Bold** indicates the best-performing method, and **underline** indicates the second-best." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 108, + 216, + 504, + 339 + ], + "lines": [ + { + "bbox": [ + 108, + 216, + 504, + 339 + ], + "spans": [ + { + "bbox": [ + 108, + 216, + 504, + 339 + ], + "type": "table", + "html": "
Method# Params (%)BoolQPIQASIQAARC-cARC-eOBQAHellaSWinoGAvg.
FFT8.03G (100%)73.886.877.676.787.684.193.285.183.1
LoRA84M (1.03%)76.389.882.783.491.788.495.888.787.1
VeRA1.38M (0.02%)64.481.862.667.385.760.978.556.969.8
IA31.70M (0.02%)68.684.874.577.689.475.790.675.079.5
LoRA-FA44M (0.54%)74.089.683.383.893.488.696.187.487.0
AdaLoRA84M (1.03%)75.689.282.483.191.087.894.487.686.4
rsLoRA84M (1.03%)72.884.878.876.087.085.091.082.882.3
PiSSA84M (1.03%)68.184.478.275.185.182.889.382.880.7
LoRA+84M (1.03%)67.080.378.570.182.381.588.979.778.5
DoRA85M (1.05%)75.989.882.783.593.287.995.388.287.1
LoRI-D44M (0.54%)76.489.082.784.293.688.595.987.987.3
LoRI-S4.4M (0.05%)75.289.282.883.892.688.495.287.586.8
", + "image_path": "9c9dd3534fb8ab88ff1d79ab0f5a7a4b19e18e497f8aaf38ff907498b88bc0be.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 367, + 307, + 380 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 367, + 307, + 380 + ], + "spans": [ + { + "bbox": [ + 105, + 367, + 307, + 380 + ], + "type": "text", + "content": "E Additional Experimental Results" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 398, + 332, + 410 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 398, + 332, + 410 + ], + "spans": [ + { + "bbox": [ + 105, + 398, + 332, + 410 + ], + "type": "text", + "content": "E.1 Comparison with Additional PEFT Methods" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 423, + 504, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 423, + 504, + 491 + ], + "spans": [ + { + "bbox": [ + 104, + 423, + 504, + 491 + ], + "type": "text", + "content": "To provide a comprehensive benchmark, we evaluate LoRI against several widely adopted parameter-efficient fine-tuning (PEFT) methods, including VeRA (Kopiczko et al., 2023), IA3 (Liu et al., 2022), LoRA-FA (Zhang et al., 2023b), AdaLoRA (Zhang et al., 2023d), rsLoRA (Kalajdzievski, 2023), PiSSA (Meng et al., 2024), LoRA+ (Hayou et al., 2024), and DoRA (Liu et al., 2024). The results, presented in Tables 11 and 12, demonstrate that our proposed methods are highly effective." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 495, + 506, + 606 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 495, + 506, + 606 + ], + "spans": [ + { + "bbox": [ + 104, + 495, + 506, + 606 + ], + "type": "text", + "content": "LoRI-D, which uses 44M trainable parameters (0.54% of the full model and half of LoRA's), consistently achieves state-of-the-art performance, particularly on NLU and code generation benchmarks. 
LoRI-S, despite its aggressive sparsity (0.05% of the full model and 5% of LoRA's), remains highly competitive and often surpasses other PEFT methods. While VeRA and IA3 are more parameter-efficient, their performance is substantially lower than LoRI-S. Despite their parameter efficiency, LoRI-D and LoRI-S deliver comparable – and often superior – performance across NLU, math, code, and safety domains. These results underscore two key insights: (1) effective adaptation does not require updating the projection matrices " + }, + { + "bbox": [ + 104, + 495, + 506, + 606 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 104, + 495, + 506, + 606 + ], + "type": "text", + "content": ", as demonstrated by LoRI-D; and (2) the matrices " + }, + { + "bbox": [ + 104, + 495, + 506, + 606 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 495, + 506, + 606 + ], + "type": "text", + "content": " contain significant redundancy that can be effectively pruned, as shown by LoRI-S." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 629, + 247, + 640 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 629, + 247, + 640 + ], + "spans": [ + { + "bbox": [ + 105, + 629, + 247, + 640 + ], + "type": "text", + "content": "E.2 Results with Rank " + }, + { + "bbox": [ + 105, + 629, + 247, + 640 + ], + "type": "inline_equation", + "content": "r = 64" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 654, + 504, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 654, + 504, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 654, + 504, + 733 + ], + "type": "text", + "content": "We evaluate several adaptation methods using a higher adapter rank of " + }, + { + "bbox": [ + 104, + 654, + 504, + 733 + ], + "type": "inline_equation", + "content": "r = 64" + }, + { + "bbox": [ + 104, + 654, + 504, + 733 + ], + "type": "text", + "content": " across a diverse set of tasks. 
This allows for more expressive adapter representations while still maintaining efficiency compared to full fine-tuning. Table 13 presents performance on eight natural language understanding (NLU) benchmarks, while Table 14 includes results on GSM8K (math), HumanEval (code), and HEx-PHI (safety). Across Llama-3 and Mistral models, LoRI-D and LoRI-S consistently perform competitively, often outperforming larger adapter methods like LoRA and DoRA, while using fewer parameters." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "20" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 19 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 137, + 133, + 476, + 284 + ], + "blocks": [ + { + "bbox": [ + 104, + 90, + 504, + 125 + ], + "lines": [ + { + "bbox": [ + 104, + 90, + 504, + 125 + ], + "spans": [ + { + "bbox": [ + 104, + 90, + 504, + 125 + ], + "type": "text", + "content": "Table 12: Performance comparison of different adaptation methods on GSM8K (math), HumanEval (code), and HEx-PHI (safety) benchmarks using Llama-3 with " + }, + { + "bbox": [ + 104, + 90, + 504, + 125 + ], + "type": "inline_equation", + "content": "r = 32" + }, + { + "bbox": [ + 104, + 90, + 504, + 125 + ], + "type": "text", + "content": ". **Bold** indicates the best-performing method, and **underline** indicates the second-best."
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 137, + 133, + 476, + 284 + ], + "lines": [ + { + "bbox": [ + 137, + 133, + 476, + 284 + ], + "spans": [ + { + "bbox": [ + 137, + 133, + 476, + 284 + ], + "type": "table", + "html": "
Method# Params (%)GSM8KHumanEvalHEx-PHI
Pass@1Pass@5Pass@10
FFT8.03G (100%)58.830.539.341.794.8
LoRA84M (1.03%)64.434.746.450.891.6
VeRA1.38M (0.02%)30.632.445.150.974.7
IA31.70M (0.02%)48.032.745.651.585.4
LoRA-FA44M (0.54%)64.842.957.564.294.1
AdaLoRA84M (1.03%)63.333.545.049.491.9
rsLoRA84M (1.03%)61.328.435.538.398.1
PiSSA84M (1.03%)61.332.040.343.397.8
LoRA+84M (1.03%)61.733.042.746.098.8
DoRA85M (1.05%)65.433.144.048.693.6
LoRI-D44M (0.54%)63.243.257.663.292.8
LoRI-S4.4M (0.05%)62.741.354.459.693.8
", + "image_path": "374ef3c8e56f616defa3b1ca41b03317a863716e9f592e6052743ef31155dda5.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 106, + 353, + 504, + 485 + ], + "blocks": [ + { + "bbox": [ + 104, + 310, + 504, + 345 + ], + "lines": [ + { + "bbox": [ + 104, + 310, + 504, + 345 + ], + "spans": [ + { + "bbox": [ + 104, + 310, + 504, + 345 + ], + "type": "text", + "content": "Table 13: Performance comparison of different adaptation methods on eight natural language understanding (NLU) benchmarks using Llama-3 and Mistral with " + }, + { + "bbox": [ + 104, + 310, + 504, + 345 + ], + "type": "inline_equation", + "content": "r = 64" + }, + { + "bbox": [ + 104, + 310, + 504, + 345 + ], + "type": "text", + "content": ". **Bold indicates the best-performing method, and underline indicates the second-best.**" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 106, + 353, + 504, + 485 + ], + "lines": [ + { + "bbox": [ + 106, + 353, + 504, + 485 + ], + "spans": [ + { + "bbox": [ + 106, + 353, + 504, + 485 + ], + "type": "table", + "html": "
Method# Params (%)BoolQPIQASIQAARC-cARC-eOBQAHellaSWinoGAvg.
Llama-3-8B
FFT8.03G (100%)73.886.877.676.787.684.193.285.183.1
LoRA168M (2.05%)75.289.081.282.392.489.195.388.286.6
DoRA169M (2.06%)76.489.082.082.692.387.595.187.386.5
LoRI-D88M (1.07%)75.890.482.783.392.688.695.987.487.1
LoRI-S8.8M (0.11%)76.590.281.983.593.887.596.287.287.1
Mistral-7B
FFT7.24G (100%)74.184.678.079.390.588.494.483.584.1
LoRA168M (2.26%)77.490.283.584.093.089.395.689.487.8
DoRA169M (2.28%)76.090.683.583.392.889.695.787.687.4
LoRI-D88M (1.18%)75.990.783.782.092.190.096.487.887.3
LoRI-S8.8M (0.12%)74.290.783.583.092.689.595.889.587.3
", + "image_path": "51115363e44e71788c130af3a59a40679a0f7fc02dfb30aac2bca32a1d13f5b2.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 143, + 564, + 471, + 720 + ], + "blocks": [ + { + "bbox": [ + 104, + 510, + 504, + 555 + ], + "lines": [ + { + "bbox": [ + 104, + 510, + 504, + 555 + ], + "spans": [ + { + "bbox": [ + 104, + 510, + 504, + 555 + ], + "type": "text", + "content": "Table 14: Performance comparison of different adaptation methods on GSM8K (math), HumanEval (code), and HEx-PHI (safety) benchmarks using Llama-3 and Mistral with " + }, + { + "bbox": [ + 104, + 510, + 504, + 555 + ], + "type": "inline_equation", + "content": "r = 64" + }, + { + "bbox": [ + 104, + 510, + 504, + 555 + ], + "type": "text", + "content": ". **Bold indicates the best-performing method, and **underline indicates the second-best.**" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 143, + 564, + 471, + 720 + ], + "lines": [ + { + "bbox": [ + 143, + 564, + 471, + 720 + ], + "spans": [ + { + "bbox": [ + 143, + 564, + 471, + 720 + ], + "type": "table", + "html": "
Method# Params (%)GSM8KHumanEvalHEx-PHI
Pass@1Pass@5Pass@10
Llama-3-8B
FFT8.03G (100%)58.830.539.341.794.8
LoRA168M (2.05%)63.938.652.959.294.1
DoRA169M (2.06%)63.839.453.659.793.4
LoRI-D88M (1.07%)63.841.955.460.396.6
LoRI-S8.8M (0.11%)61.844.157.462.496.3
Mistral-7B
FFT7.24G (100%)55.530.539.341.794.1
LoRA168M (2.26%)56.733.943.146.995.9
DoRA169M (2.28%)57.832.943.347.296.6
LoRI-D88M (1.18%)58.233.343.647.390.9
LoRI-S8.8M (0.12%)58.432.142.246.393.4
", + "image_path": "04ba35cb4761e76d3a6c939ca6c0974b80c68b7418d58fb1f2388d3f92ce31bd.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "21" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 20 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 124, + 144, + 488, + 280 + ], + "blocks": [ + { + "bbox": [ + 104, + 79, + 506, + 135 + ], + "lines": [ + { + "bbox": [ + 104, + 79, + 506, + 135 + ], + "spans": [ + { + "bbox": [ + 104, + 79, + 506, + 135 + ], + "type": "text", + "content": "Table 15: Comparison of merging methods for combining four adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Mistral-7B, rank " + }, + { + "bbox": [ + 104, + 79, + 506, + 135 + ], + "type": "inline_equation", + "content": "r = 32" + }, + { + "bbox": [ + 104, + 79, + 506, + 135 + ], + "type": "text", + "content": ". Bold indicates the best-performing method, and underline indicates the second-best." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 124, + 144, + 488, + 280 + ], + "lines": [ + { + "bbox": [ + 124, + 144, + 488, + 280 + ], + "spans": [ + { + "bbox": [ + 124, + 144, + 488, + 280 + ], + "type": "table", + "html": "
MergingAdaptationNLUGSM8KHumanEvalHEx-PHI
Pass@1Pass@5Pass@10
Single-TaskLoRI-D87.158.033.842.045.194.7
ConcatLoRA82.552.432.340.844.175.6
LinearLoRA81.448.033.141.643.976.6
MagnitudeLoRA77.542.732.741.845.680.9
TIESLoRA31.323.532.040.243.581.9
DARELoRA76.143.032.041.044.683.4
ConcatLoRI-D79.352.434.442.845.583.8
LinearLoRI-D78.150.535.242.745.579.7
ConcatLoRI-S79.246.133.341.645.979.4
LinearLoRI-S75.540.328.836.039.683.1
", + "image_path": "5b275f33c278c822894b05d2926f30adb0610e3978ae0b990b1e4ca4dbdb6824.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 108, + 343, + 504, + 456 + ], + "blocks": [ + { + "bbox": [ + 104, + 289, + 504, + 335 + ], + "lines": [ + { + "bbox": [ + 104, + 289, + 504, + 335 + ], + "spans": [ + { + "bbox": [ + 104, + 289, + 504, + 335 + ], + "type": "text", + "content": "Table 16: Comparison of merging methods for combining four adapters on eight NLU benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Base model: Llama-3-8B, rank " + }, + { + "bbox": [ + 104, + 289, + 504, + 335 + ], + "type": "inline_equation", + "content": "r = 32" + }, + { + "bbox": [ + 104, + 289, + 504, + 335 + ], + "type": "text", + "content": ". **Bold** indicates the best-performing method, and **underline** indicates the second-best." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 108, + 343, + 504, + 456 + ], + "lines": [ + { + "bbox": [ + 108, + 343, + 504, + 456 + ], + "spans": [ + { + "bbox": [ + 108, + 343, + 504, + 456 + ], + "type": "table", + "html": "
MergingAdaptationBoolQPIQASIQAARC-cARC-eOBQAHellaSWinoGAvg.
Single-TaskLoRI-D76.489.082.784.293.688.595.987.987.3
ConcatLoRA73.989.181.181.492.483.094.484.585.0
LinearLoRA73.788.881.180.791.684.493.984.184.8
MagnitudeLoRA72.087.176.879.491.781.590.476.481.9
TIESLoRA68.283.867.369.587.869.273.361.472.6
DARELoRA70.785.074.177.590.776.686.871.079.1
ConcatLoRI-D74.087.777.881.092.481.092.778.983.2
LinearLoRI-D73.787.776.780.392.180.192.077.782.5
ConcatLoRI-S71.886.276.179.291.578.689.876.381.2
LinearLoRI-S70.785.375.178.090.875.086.571.379.1
", + "image_path": "5a39e390791af67f5fee2c41cc9b9bf7cf985c272f014076e5e6ca15c1a4a159.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 475, + 239, + 488 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 475, + 239, + 488 + ], + "spans": [ + { + "bbox": [ + 105, + 475, + 239, + 488 + ], + "type": "text", + "content": "E.3 Merging Four Adapters" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 495, + 504, + 574 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 495, + 504, + 574 + ], + "spans": [ + { + "bbox": [ + 104, + 495, + 504, + 574 + ], + "type": "text", + "content": "To support multi-task learning within a unified model, we study the merging of four task-specific adapters using various strategies. Table 15 reports results using Mistral-7B across a range of tasks. Additionally, Tables 16 and 17 break down the performance of NLU on individual benchmarks using Llama-3 and Mistral, respectively. We compare merging methods such as concatenated merging, linear merging, magnitude pruning, TIES, and DARE. LoRI-based approaches demonstrate strong performance and stability when merging multiple adapters." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 587, + 244, + 601 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 587, + 244, + 601 + ], + "spans": [ + { + "bbox": [ + 105, + 587, + 244, + 601 + ], + "type": "text", + "content": "E.4 Merging Three Adapters" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 608, + 504, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 608, + 504, + 665 + ], + "spans": [ + { + "bbox": [ + 104, + 608, + 504, + 665 + ], + "type": "text", + "content": "We further evaluate the merging of three adapters to understand performance when adapting to a smaller set of tasks. 
Tables 18 and 19 summarize the results for Llama-3 across different benchmarks. Similar to the four-task setting, LoRI-D remains a strong performer, often exceeding the performance of LoRA. These results highlight that LoRI-based methods are effective with varying levels of task diversity." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 677, + 285, + 690 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 677, + 285, + 690 + ], + "spans": [ + { + "bbox": [ + 105, + 677, + 285, + 690 + ], + "type": "text", + "content": "E.5 Pruning-Based Merging Methods" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 698, + 504, + 734 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 698, + 504, + 734 + ], + "spans": [ + { + "bbox": [ + 104, + 698, + 504, + 734 + ], + "type": "text", + "content": "Finally, we explore pruning-based merging methods, which aim to compress and combine multiple adapters by selectively retaining important weights. We focus on three methods: magnitude pruning, TIES, and DARE. 
Results are reported for merging both four-adapter" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "22" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 21 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 108, + 152, + 504, + 264 + ], + "blocks": [ + { + "bbox": [ + 104, + 97, + 504, + 142 + ], + "lines": [ + { + "bbox": [ + 104, + 97, + 504, + 142 + ], + "spans": [ + { + "bbox": [ + 104, + 97, + 504, + 142 + ], + "type": "text", + "content": "Table 17: Comparison of merging methods for combining four adapters on eight NLU benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Base model: Mistral-7B, rank " + }, + { + "bbox": [ + 104, + 97, + 504, + 142 + ], + "type": "inline_equation", + "content": "r = 32" + }, + { + "bbox": [ + 104, + 97, + 504, + 142 + ], + "type": "text", + "content": ". Bold indicates the best-performing method, and underline indicates the second-best." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 108, + 152, + 504, + 264 + ], + "lines": [ + { + "bbox": [ + 108, + 152, + 504, + 264 + ], + "spans": [ + { + "bbox": [ + 108, + 152, + 504, + 264 + ], + "type": "table", + "html": "
MergingAdaptationBoolQPIQASIQAARC-cARC-eOBQAHellaSWinoGAvg.
Single-TaskLoRI-D75.990.683.083.691.988.495.987.487.1
ConcatLoRA69.088.078.179.990.984.292.477.882.5
LinearLoRA69.286.977.978.590.282.191.575.181.4
MagnitudeLoRA68.784.974.475.989.177.585.664.177.5
TIESLoRA18.469.840.714.021.920.114.650.931.3
DARELoRA69.484.373.174.288.974.382.661.876.1
ConcatLoRI-D68.485.975.676.689.481.385.971.179.3
LinearLoRI-D66.386.074.975.388.980.885.068.078.1
ConcatLoRI-S72.685.474.676.589.780.186.068.979.2
LinearLoRI-S67.683.872.073.088.374.680.964.375.5
", + "image_path": "2782a629cbad07f42a8fe9ab46b398148c7fe5252ab8de9463f9161a6a55fdc6.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 148, + 369, + 465, + 506 + ], + "blocks": [ + { + "bbox": [ + 104, + 304, + 504, + 361 + ], + "lines": [ + { + "bbox": [ + 104, + 304, + 504, + 361 + ], + "spans": [ + { + "bbox": [ + 104, + 304, + 504, + 361 + ], + "type": "text", + "content": "Table 18: Comparison of merging methods for combining three adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Llama-3-8B, rank " + }, + { + "bbox": [ + 104, + 304, + 504, + 361 + ], + "type": "inline_equation", + "content": "r = 32" + }, + { + "bbox": [ + 104, + 304, + 504, + 361 + ], + "type": "text", + "content": ". Bold indicates the best-performing method, and underline indicates the second-best." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 148, + 369, + 465, + 506 + ], + "lines": [ + { + "bbox": [ + 148, + 369, + 465, + 506 + ], + "spans": [ + { + "bbox": [ + 148, + 369, + 465, + 506 + ], + "type": "table", + "html": "
MergingAdaptationNLUGSM8KHumanEval
Pass@1Pass@5Pass@10
Single-TaskLoRI-D87.363.243.257.663.2
ConcatLoRA86.454.513.019.821.8
LinearLoRA86.151.98.814.516.7
MagnitudeLoRA83.852.023.337.443.0
TIESLoRA79.426.936.348.753.7
DARELoRA81.153.336.049.553.9
ConcatLoRI-D84.859.641.556.461.6
LinearLoRI-D84.657.638.351.656.8
ConcatLoRI-S83.351.831.244.649.8
LinearLoRI-S81.041.726.640.044.6
", + "image_path": "df2db27ced015225db70179c581e419f0d47043e07a3ed6e710165c4c3fddaa2.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 108, + 600, + 504, + 712 + ], + "blocks": [ + { + "bbox": [ + 104, + 546, + 504, + 592 + ], + "lines": [ + { + "bbox": [ + 104, + 546, + 504, + 592 + ], + "spans": [ + { + "bbox": [ + 104, + 546, + 504, + 592 + ], + "type": "text", + "content": "Table 19: Comparison of merging methods for combining three adapters on eight NLU benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Base model: Llama-3-8B, rank " + }, + { + "bbox": [ + 104, + 546, + 504, + 592 + ], + "type": "inline_equation", + "content": "r = 32" + }, + { + "bbox": [ + 104, + 546, + 504, + 592 + ], + "type": "text", + "content": ". **Bold** indicates the best-performing method, and **underline** indicates the second-best." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 108, + 600, + 504, + 712 + ], + "lines": [ + { + "bbox": [ + 108, + 600, + 504, + 712 + ], + "spans": [ + { + "bbox": [ + 108, + 600, + 504, + 712 + ], + "type": "table", + "html": "
MergingAdaptationBoolQPIQASIQAARC-cARC-eOBQAHellaSWinoGAvg.
Single-TaskLoRI-D76.489.082.784.293.688.595.987.987.3
ConcatLoRA74.789.681.882.993.786.295.886.886.4
LinearLoRA73.989.681.481.993.585.595.687.186.1
MagnitudeLoRA72.287.278.981.292.283.293.082.483.8
TIESLoRA69.584.874.078.491.277.488.871.479.4
DARELoRA71.085.675.879.591.078.890.776.281.1
ConcatLoRI-D73.889.079.881.093.083.094.684.084.8
LinearLoRI-D74.188.480.281.392.982.194.183.684.6
ConcatLoRI-S70.387.279.180.892.482.193.281.383.3
LinearLoRI-S61.586.478.079.591.780.891.378.581.0
", + "image_path": "5d742e4240f4550b50f8a045f6eade200f56a5c2028c9f9964e737210a4a0f04.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "23" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 22 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 124, + 144, + 489, + 289 + ], + "blocks": [ + { + "bbox": [ + 104, + 79, + 506, + 137 + ], + "lines": [ + { + "bbox": [ + 104, + 79, + 506, + 137 + ], + "spans": [ + { + "bbox": [ + 104, + 79, + 506, + 137 + ], + "type": "text", + "content": "Table 20: Comparison of magnitude pruning, TIES, and DARE for combining four adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Llama-3-8B, rank " + }, + { + "bbox": [ + 104, + 79, + 506, + 137 + ], + "type": "inline_equation", + "content": "r = 32" + }, + { + "bbox": [ + 104, + 79, + 506, + 137 + ], + "type": "text", + "content": ". Bold indicates the best-performing method within each group." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 124, + 144, + 489, + 289 + ], + "lines": [ + { + "bbox": [ + 124, + 144, + 489, + 289 + ], + "spans": [ + { + "bbox": [ + 124, + 144, + 489, + 289 + ], + "type": "table", + "html": "
MergingAdaptationNLUGSM8KHumanEvalHEx-PHI
Pass@1Pass@5Pass@10
Single-TaskLoRI-D87.363.243.257.663.292.8
MagnitudeLoRA81.950.324.136.742.474.4
MagnitudeLoRI-D84.350.533.345.251.485.9
MagnitudeLoRI-S76.435.225.236.541.068.4
TIESLoRA72.624.032.546.351.777.8
TIESLoRI-D79.138.040.354.659.885.3
TIESLoRI-S70.425.934.648.453.277.8
DARELoRA79.148.934.148.753.574.1
DARELoRI-D83.452.035.451.357.881.9
DARELoRI-S73.427.234.848.153.575.3
", + "image_path": "3e3cf304781f00eeed6139ed70a45546dc84631bdaf0303343be9afe0bde0460.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 124, + 364, + 489, + 508 + ], + "blocks": [ + { + "bbox": [ + 104, + 298, + 506, + 355 + ], + "lines": [ + { + "bbox": [ + 104, + 298, + 506, + 355 + ], + "spans": [ + { + "bbox": [ + 104, + 298, + 506, + 355 + ], + "type": "text", + "content": "Table 21: Comparison of magnitude pruning, TIES, and DARE for combining four adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. Results for NLU are averaged over eight tasks. Base model: Mistral-7B, rank " + }, + { + "bbox": [ + 104, + 298, + 506, + 355 + ], + "type": "inline_equation", + "content": "r = 32" + }, + { + "bbox": [ + 104, + 298, + 506, + 355 + ], + "type": "text", + "content": ". Bold indicates the best-performing method within each group." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 124, + 364, + 489, + 508 + ], + "lines": [ + { + "bbox": [ + 124, + 364, + 489, + 508 + ], + "spans": [ + { + "bbox": [ + 124, + 364, + 489, + 508 + ], + "type": "table", + "html": "
MergingAdaptationNLUGSM8KHumanEvalHEx-PHI
Pass@1Pass@5Pass@10
Single-TaskLoRI-D87.158.033.842.045.194.7
MagnitudeLoRA77.542.732.741.845.680.9
MagnitudeLoRI-D76.041.529.036.038.779.4
MagnitudeLoRI-S70.532.428.136.139.377.5
TIESLoRA31.323.532.040.243.581.9
TIESLoRI-D65.045.435.344.547.868.4
TIESLoRI-S67.832.928.637.240.878.4
DARELoRA76.143.032.041.044.683.4
DARELoRI-D76.242.329.237.140.789.1
DARELoRI-S71.934.329.240.544.985.0
", + "image_path": "d070a3c1b9f7ec4b03797f93dae14ef188a5af61d1ac4e5037057f00332a5fe2.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 527, + 506, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 527, + 506, + 594 + ], + "spans": [ + { + "bbox": [ + 104, + 527, + 506, + 594 + ], + "type": "text", + "content": "(Tables 20 and 21) and three-adapter (Table 22) settings, using Llama-3 and Mistral as base models. LoRI-D consistently achieves strong performance across all pruning-based merging methods. However, the performance of LoRI-S is somewhat lower in these settings. This is because pruning-based methods operate on the dense " + }, + { + "bbox": [ + 104, + 527, + 506, + 594 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 104, + 527, + 506, + 594 + ], + "type": "text", + "content": " matrices but not on the sparse " + }, + { + "bbox": [ + 104, + 527, + 506, + 594 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 527, + 506, + 594 + ], + "type": "text", + "content": " matrices. This mismatch leads to an inconsistent pruning scheme, which can result in a loss of effectiveness." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 610, + 284, + 624 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 610, + 284, + 624 + ], + "spans": [ + { + "bbox": [ + 105, + 610, + 284, + 624 + ], + "type": "text", + "content": "F Additional Ablation Studies" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 635, + 506, + 694 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 635, + 506, + 694 + ], + "spans": [ + { + "bbox": [ + 104, + 635, + 506, + 694 + ], + "type": "text", + "content": "Figure 5 presents GSM8K accuracy across a grid of sparsity ratios and learning rates using Mistral-7B with rank " + }, + { + "bbox": [ + 104, + 635, + 506, + 694 + ], + "type": "inline_equation", + "content": "r = 64" + }, + { + "bbox": [ + 104, + 635, + 506, + 694 + ], + "type": "text", + "content": ". We observe that sparse adapters require larger learning rates to train effectively. In particular, models with high sparsity (e.g., above " + }, + { + "bbox": [ + 104, + 635, + 506, + 694 + ], + "type": "inline_equation", + "content": "70\\%" + }, + { + "bbox": [ + 104, + 635, + 506, + 694 + ], + "type": "text", + "content": ") perform best with a learning rate of " + }, + { + "bbox": [ + 104, + 635, + 506, + 694 + ], + "type": "inline_equation", + "content": "10^{-4}" + }, + { + "bbox": [ + 104, + 635, + 506, + 694 + ], + "type": "text", + "content": " or higher. This suggests that stronger optimization is necessary to compensate for limited capacity in sparse adapters." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 698, + 505, + 734 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 698, + 505, + 734 + ], + "spans": [ + { + "bbox": [ + 104, + 698, + 505, + 734 + ], + "type": "text", + "content": "In Figure 6, we analyze how sparsity is distributed across layers and projections when enforcing " + }, + { + "bbox": [ + 104, + 698, + 505, + 734 + ], + "type": "inline_equation", + "content": "90\\%" + }, + { + "bbox": [ + 104, + 698, + 505, + 734 + ], + "type": "text", + "content": " global sparsity on GSM8K. We find that feedforward (FFN) projections tend to retain more parameters – i.e., they exhibit lower sparsity – than self-attention projections." + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 312, + 761 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 312, + 761 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 312, + 761 + ], + "type": "text", + "content": "24" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 23 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 148, + 144, + 465, + 289 + ], + "blocks": [ + { + "bbox": [ + 104, + 79, + 506, + 137 + ], + "lines": [ + { + "bbox": [ + 104, + 79, + 506, + 137 + ], + "spans": [ + { + "bbox": [ + 104, + 79, + 506, + 137 + ], + "type": "text", + "content": "Table 22: Comparison of magnitude pruning, TIES, and DARE for combining three adapters, evaluated on their respective benchmarks. The best-performing single-task adapter, LoRI-D, is used as the single-task baseline. 
Results for NLU are averaged over eight tasks. Base model: Llama-3-8B, rank " + }, + { + "bbox": [ + 104, + 79, + 506, + 137 + ], + "type": "inline_equation", + "content": "r = 32" + }, + { + "bbox": [ + 104, + 79, + 506, + 137 + ], + "type": "text", + "content": ". Bold indicates the best-performing method within each group." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 148, + 144, + 465, + 289 + ], + "lines": [ + { + "bbox": [ + 148, + 144, + 465, + 289 + ], + "spans": [ + { + "bbox": [ + 148, + 144, + 465, + 289 + ], + "type": "table", + "html": "
MergingAdaptationNLUGSM8KHumanEval
Pass@1Pass@5Pass@10
Single-TaskLoRI-D87.363.243.257.663.2
MagnitudeLoRA83.852.023.337.443.0
MagnitudeLoRI-D84.653.734.848.954.7
MagnitudeLoRI-S77.836.625.538.843.8
TIESLoRA79.426.936.348.753.7
TIESLoRI-D82.142.239.252.757.7
TIESLoRI-S73.835.234.847.952.5
DARELoRA81.153.336.049.553.9
DARELoRI-D84.055.233.845.851.8
DARELoRI-S75.336.636.248.953.4
", + "image_path": "76404041c0f0201eb41da8a2571b926d7fc6c696f21dd849b8ed8f5ef3dab48a.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 159, + 302, + 440, + 475 + ], + "blocks": [ + { + "bbox": [ + 159, + 302, + 440, + 475 + ], + "lines": [ + { + "bbox": [ + 159, + 302, + 440, + 475 + ], + "spans": [ + { + "bbox": [ + 159, + 302, + 440, + 475 + ], + "type": "image", + "image_path": "decce04358f9b5a391f9c16d358807e18ddb72362e0e9eeae42a1176ee7a28b3.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 486, + 504, + 510 + ], + "lines": [ + { + "bbox": [ + 104, + 486, + 504, + 510 + ], + "spans": [ + { + "bbox": [ + 104, + 486, + 504, + 510 + ], + "type": "text", + "content": "Figure 5: GSM8K accuracy under different sparsity ratios and learning rates. Base model: Mistral-7B, rank " + }, + { + "bbox": [ + 104, + 486, + 504, + 510 + ], + "type": "inline_equation", + "content": "r = 64" + }, + { + "bbox": [ + 104, + 486, + 504, + 510 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 530, + 504, + 566 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 530, + 504, + 566 + ], + "spans": [ + { + "bbox": [ + 104, + 530, + 504, + 566 + ], + "type": "text", + "content": "This indicates that FFN components are more critical for effective adaptation. Additionally, sparsity decreases toward the top of the network, suggesting that higher layers are more important for task-specific specialization." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 569, + 506, + 627 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 569, + 506, + 627 + ], + "spans": [ + { + "bbox": [ + 104, + 569, + 506, + 627 + ], + "type": "text", + "content": "Lastly, Figure 7 explores the effect of merging weights when combining three LoRI-S adapters using concatenated and linear merging. We find a noticeable trade-off between performance on code tasks and other domains (e.g., NLU and math). Higher merging weights can improve NLU performance but tend to degrade performance on code, highlighting the challenge of balancing generalization and specialization in multi-task settings." + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "25" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 24 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 207, + 144, + 405, + 340 + ], + "blocks": [ + { + "bbox": [ + 207, + 144, + 405, + 340 + ], + "lines": [ + { + "bbox": [ + 207, + 144, + 405, + 340 + ], + "spans": [ + { + "bbox": [ + 207, + 144, + 405, + 340 + ], + "type": "image", + "image_path": "9ff585ba374aad4863f066455f922d0c50d62e831177f2209d9ca1607ac1bf5f.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 350, + 504, + 373 + ], + "lines": [ + { + "bbox": [ + 104, + 350, + 504, + 373 + ], + "spans": [ + { + 
"bbox": [ + 104, + 350, + 504, + 373 + ], + "type": "text", + "content": "Figure 6: Sparsity ratios across layers and projections under a " + }, + { + "bbox": [ + 104, + 350, + 504, + 373 + ], + "type": "inline_equation", + "content": "90\\%" + }, + { + "bbox": [ + 104, + 350, + 504, + 373 + ], + "type": "text", + "content": " sparsity on GSM8K. Base model: Llama-3-8B, rank " + }, + { + "bbox": [ + 104, + 350, + 504, + 373 + ], + "type": "inline_equation", + "content": "r = 32" + }, + { + "bbox": [ + 104, + 350, + 504, + 373 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 115, + 506, + 302, + 617 + ], + "blocks": [ + { + "bbox": [ + 115, + 506, + 302, + 617 + ], + "lines": [ + { + "bbox": [ + 115, + 506, + 302, + 617 + ], + "spans": [ + { + "bbox": [ + 115, + 506, + 302, + 617 + ], + "type": "image", + "image_path": "20bdc69ee88bcf55951fd1160fe65f91d273cf3619437a7678ec93a6c498a9d1.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 133, + 624, + 285, + 635 + ], + "lines": [ + { + "bbox": [ + 133, + 624, + 285, + 635 + ], + "spans": [ + { + "bbox": [ + 133, + 624, + 285, + 635 + ], + "type": "text", + "content": "(a) Concatnated merging with LoRI-S." 
+ } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 308, + 506, + 496, + 617 + ], + "blocks": [ + { + "bbox": [ + 308, + 506, + 496, + 617 + ], + "lines": [ + { + "bbox": [ + 308, + 506, + 496, + 617 + ], + "spans": [ + { + "bbox": [ + 308, + 506, + 496, + 617 + ], + "type": "image", + "image_path": "22dfc59e1466fbdfc57293573fbf453e7df828c4bf384de19cc45ad356595b14.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 337, + 624, + 465, + 635 + ], + "lines": [ + { + "bbox": [ + 337, + 624, + 465, + 635 + ], + "spans": [ + { + "bbox": [ + 337, + 624, + 465, + 635 + ], + "type": "text", + "content": "(b) Linear merging with LoRI-S." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 104, + 643, + 504, + 667 + ], + "lines": [ + { + "bbox": [ + 104, + 643, + 504, + 667 + ], + "spans": [ + { + "bbox": [ + 104, + 643, + 504, + 667 + ], + "type": "text", + "content": "Figure 7: Ablation study on the effect of merging weights when combining three adapters. Base model: Llama-3-8B, rank " + }, + { + "bbox": [ + 104, + 643, + 504, + 667 + ], + "type": "inline_equation", + "content": "r = 32" + }, + { + "bbox": [ + 104, + 643, + 504, + 667 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "26" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 25 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07532/a56e2d1f-04ce-46c0-9b86-0a610ecd5033_content_list.json b/data/2025/2504_07xxx/2504.07532/a56e2d1f-04ce-46c0-9b86-0a610ecd5033_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..dc7b777dd5ba470edd3b7e0c06c278d75adf1f43 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/a56e2d1f-04ce-46c0-9b86-0a610ecd5033_content_list.json @@ -0,0 +1,2646 @@ +[ + { + "type": "text", + "text": "AI-Slop to AI-Polish? 
Aligning Language Models through Edit-Based Writing Rewards and Test-time Computation", + "text_level": 1, + "bbox": [ + 171, + 98, + 823, + 142 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Tuhin Chakrabarty $^{1*}$ , Philippe Laban $^{2*}$ , Chien-Sheng Wu $^{1}$", + "bbox": [ + 179, + 166, + 620, + 183 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{1}$ Salesforce AI Research $^{2}$ Microsoft Research", + "bbox": [ + 183, + 183, + 517, + 196 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{tuhin.chakr,wu.jason}@salesforce.com,plaban@microsoft.com", + "bbox": [ + 183, + 198, + 666, + 212 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 457, + 247, + 540, + 263 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "AI-generated text is proliferating across domains, from creative writing and journalism to marketing content and scientific articles. Models can follow user-provided instructions to generate coherent and grammatically correct outputs but in this work, we study a more fundamental question: how do we evaluate and improve the writing quality of AI-generated text? Writing quality assessment has received less attention from the community, in part because it is fundamentally subjective and requires expertise. We first introduce the Writing Quality Benchmark (WQ) by consolidating five writing-preference datasets into 4,729 writing quality judgments. Our experiments show that most of the competitive baselines, including state-of-the-art LLMs that excel at reasoning tasks, barely outperform random baselines on WQ. We then train specialized Writing Quality Reward Models (WQRM) of various sizes for writing quality assessment that demonstrate strong generalization on four out-of-distribution test sets and $74\\%$ accuracy on the WQ benchmark. 
To further show WQRM's practical benefits during inference, we leverage additional test-time compute to generate and rank multiple candidate revisions, allowing us to select higher-quality outputs from an initial draft. Human evaluation with 9 experienced writers confirms that WQRM-based selection produces writing samples preferred by experts $66\\%$ overall, and $72.2\\%$ when the reward gap is larger than 1 point. We release our datasets and models to encourage community engagement with writing quality assessment and development of AI writing systems better aligned with human preferences.", + "bbox": [ + 228, + 279, + 769, + 602 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 171, + 625, + 320, + 642 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Writing is one of the most important pillars of education, enabling learners to critically engage with the topics they study. In *The Rise of Writing*, Brandt (2014) argues that the \"information economy's insatiable demand for symbol manipulation—'knowledge work'—has forced many workers to reorient their labor around the production of prose\" (Laquintano & Vee, 2024). Generative AI tools have further blurred these boundaries, especially around how labor and writing practices are evolving across both academic (Kobak et al., 2024; Lee et al., 2025) and professional contexts (Liang et al., 2025). Often awkward and jarring to read, low-effort text generated by AI is now flooding web browsers and social-media platforms much like spam in old inboxes (Herrman, 2024a; Knibbs, 2024c;d;b;a). This neologistic term of revulsion is often referred to as \"A.I. slop\" (Herrman, 2024b). Extensive social experimentation with ChatGPT has invited criticism on social media and in the popular news platforms that its writing has a disembodied \"robovoice\".
This has led to humanization methods (Wang et al., 2024) and even start-ups such as StealthGPT or HumanizeAI, which explicitly attempt to make AI-generated text more humanlike.", + "bbox": [ + 169, + 656, + 826, + 852 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Despite LLMs showing impressive performance in math and coding, their ability to write high-quality text has been rather pedestrian. Recent work from Chakrabarty et al. (2024b) shows how text generated from widely used LLMs are often rife with clichés, purple prose,", + "bbox": [ + 169, + 858, + 826, + 902 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 31, + 517, + 47 + ], + "page_idx": 0 + }, + { + "type": "aside_text", + "text": "arXiv:2504.07532v3 [cs.CL] 12 Aug 2025", + "bbox": [ + 22, + 275, + 60, + 724 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "**Equal contribution.", + "bbox": [ + 191, + 909, + 334, + 924 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "1", + "bbox": [ + 493, + 948, + 503, + 959 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/688e253e09a81a8610e16e1c055d305155309d9766016f4cc5afccb96ae6bc63.jpg", + "image_caption": [ + "Figure 1: Our three key contributions: (1) A new writing quality benchmark for creative writing evaluation, (2) Writing Quality Reward Models (WQRM) that perform strongly on this benchmark, and (3) Expert validation confirming WQRM aligns with professionals." + ], + "image_footnote": [], + "bbox": [ + 210, + 104, + 787, + 320 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "poor sentence structure, and unnecessary exposition. This stems from several challenges. Unlike math or coding, writing lacks verifiable rewards. While it would be possible to train a model to write better text by having humans label examples of \"good\" and \"bad\" writing, it is challenging due to the required expertise. 
Self-evaluation using LLMs has proven useful in reward modeling and constitutional AI (Bai et al., 2022), but relying on uncalibrated humans or LLMs for feedback (Lee et al., 2023; Gao et al., 2024) on subjective tasks like writing can lead to reward hacking (Pan et al., 2024) and alignment issues. Recent work from Panickssery et al. (2024) shows the self-aggrandizing nature of LLMs, as evidenced in Table 3 where they prefer their own writing over Nobel Prize winners' work. For the purpose of this paper we define good writing quality as writing that doesn't contain a disproportionate amount of peculiar words or phrases, has fewer clichés or hackneyed expressions, is not unnecessarily ornamental, and doesn't have an overly saccharine and polished tone or voice.", + "bbox": [ + 169, + 383, + 826, + 565 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The surge in AI writing assistance demands urgent alignment of AI-generated text with human preferences. Recent work from Gooding et al. (2025) shows how LLMs struggle to select high-quality writing actions as judged by human experts, often treating suboptimal and optimal interventions as equally acceptable. They highlight the need for models to better assess the quality and impact of suggested actions, both during generation and across multi-step refinement. Binary preference feedback between paired examples is the most common alignment method for LLMs (Christiano et al., 2017), but it has a significant drawback. The paired outputs may differ in several ways and could be equally poor in terms of quality (Casper et al., 2023; Lambert & Calandra, 2023).1 Recent work from Chakrabarty et al. (2024b) shows how identifying and editing problematic response segments effectively improves AI alignment. This also reflects the Reviewing phase in the cognitive process model of writing (Hayes et al., 1987), where humans evaluate and revise text.
They release LAMP (Language model Authored, Manually Polished), a corpus of 1282 <AI-generated, Expert-Edited> pairs with implicit preference (edited > original_draft) to improve AI writing (see Table 4 in Appendix A.1). Additionally, each paragraph pair includes normalized scores (1-10) reflecting writing quality before and after editing.", + "bbox": [ + 169, + 571, + 826, + 797 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Our work builds on LAMP data to train Writing Quality Reward Models (WQRM) across multiple model families using pairwise and scalar rewards. To evaluate WQRM, we introduce the Writing Quality Benchmark (WQ), consolidating five datasets that contrast Human-Human, Human-AI, and AI-AI writing pairs reflecting real world applications. In addition to standard reward models we also implement a teacher-student knowledge distillation approach, fine-tuning open-weight models (students) on LAMP with silver rationales generated from", + "bbox": [ + 169, + 801, + 826, + 888 + ], + "page_idx": 1 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 1 + }, + { + "type": "page_footnote", + "text": "1Forcing annotators to choose between two undesirable outputs doesn't improve alignment. In the current design of RLHF, annotators are not allowed to pick neither", + "bbox": [ + 169, + 896, + 823, + 925 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 493, + 946, + 504, + 959 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "
Empirical results show our LAMP-trained reward models outperform proprietary LLMs like GPT-4o, o1 (OpenAI, 2024), open-weight models like DeepSeek-R1 (Guo et al., 2025), and competitive Reward-Bench models like Skywork-Reward (Liu et al., 2024).", + "bbox": [ + 169, + 103, + 826, + 174 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Next, we use expert edit interaction traces from LAMP data (Figure 6) to train a Chain-of-Thought editing model that identifies problematic spans, suggests edits, and combines them into a paragraph with improved writing (Section 5). Following recent work that leverages additional inference-time computation to improve LLM performance (Hosseini et al., 2024; Lightman et al., 2023; Wu et al., 2024; Ji et al., 2025; Snell et al., 2024), we employ best-of-N-sampling (Chow et al., 2024; Cobbe et al., 2021; Lightman et al., 2023) to select the best candidate from multiple edited paragraphs based on our reward model. Expert evaluation on LLM-generated responses based on writing instructions across fiction, nonfiction, and marketing confirms the correlation between expert judgment and our reward models. Experts and our best WQRM align in terms of preferences $66\\%$ overall, and $72.2\\%$ when the reward gap is larger than 1 point. Our results represent progress toward aligning LLMs with expert humans on subjective writing tasks, one of the most common use cases of AI (Handa et al.). As summarized in Figure 1:", + "bbox": [ + 169, + 180, + 826, + 362 + ], + "page_idx": 2 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We introduce the Writing Quality Benchmark (WQ) by consolidating five writing preference datasets and show how state-of-the-art LLMs and reward models perform close to random chance on writing quality assessment,", + "- We leverage implicit preference from edits to train competitive open weight reward models (WQRM) of different sizes for judging writing quality. 
Our reward models achieve top performance on the WQ benchmark,", + "- We use interaction traces from fine-grained expert edits to train an editing pipeline that improves writing quality. We further leverage additional test-time compute to generate and rank multiple edited paragraphs, allowing us to select higher-quality outputs from an initial draft based on our reward model. Evaluation with professionals confirms that the reward aligns with expert judgments and opens up possible avenues for improving alignment in AI-assisted writing.[2]" + ], + "bbox": [ + 171, + 364, + 826, + 540 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2 Related Work", + "text_level": 1, + "bbox": [ + 171, + 559, + 328, + 574 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Widespread adoption and Limitations of AI assistance in writing Large language models have rapidly transformed written communications across multiple sectors, with approximately $10 - 24\\%$ of text in consumer complaints, corporate communications, job postings, and UN press releases being LLM-assisted by late 2024 (Liang et al., 2025). These adoption rates have stabilized after an initial surge following ChatGPT's release. Outside of technical writing LLMs are also being used for scientific (Liang et al., 2024; Gero et al., 2022) as well as creative writing (Chakrabarty et al., 2024c; Ippolito et al., 2022; Yuan et al., 2022; Mirowski et al., 2023; 2024). Aligning language models with human preferences (Ouyang et al., 2022) has enabled their integration into writing tools such as Google's WorkSpace Labs, Grammarly, and Sudowrite. Despite productivity gains in using AI for writing, several limitations remain with AI-generated text. Prior work (Chakrabarty et al., 2024a;c; Ippolito et al., 2022; Mirowski et al., 2023; Marco et al., 2024) has shown how AI-generated text is often rife with clichés, lacks nuance, subtext, and rhetorical complexity. Through use of syntactic templates Shaib et al. 
(2024) show the repetitiveness of AI-generated text in comparison to human-written references. More recently Russell et al. (2025) show that AI-generated text is most easily detectable by its characteristic vocabulary, followed by formulaic writing structures and lack of originality. Neither paraphrasing nor humanization effectively removes all of these signatures.", + "bbox": [ + 169, + 590, + 826, + 840 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Human-AI Alignment in Writing Recent work from Lee et al. (2024) highlights how LLMs have transformed the processes behind writing, establishing new criteria for future AI writing assistants. Anderson et al. (2024) and Laban et al. (2023) discovered that Large Language Models assisted users in generating more detailed ideas. However, these studies also", + "bbox": [ + 169, + 843, + 828, + 902 + ], + "page_idx": 2 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 2 + }, + { + "type": "page_footnote", + "text": "2Our code, data and models are available at https://github.com/salesforce/creativity_eval/", + "bbox": [ + 189, + 909, + 818, + 922 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 493, + 948, + 504, + 959 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "found that the outputs were less semantically distinct across different users (Padmakumar & He, 2023), and participants reported feeling diminished responsibility for the ideas they produced. In a similar vein Li et al. (2024) explores people's attitudes toward AI writing assistants, finding that while many value and prefer AI assistance for creative tasks and productivity gains, this comes with potential drawbacks in reduced accountability and diversity in writing outcomes. Liu et al. 
(2025) introduce eRevise+RF, an automated writing evaluation system designed to assess student essay revisions and offer formative feedback. The system was deployed with 406 students across three schools, demonstrating effectiveness in evaluating evidence usage, identifying revisions, and determining revision success. Prior work from Pan et al. (2024) shows language models can enhance outputs through feedback. However, iterative self-refinement using another language model as evaluator may lead to reward hacking, where models exploit evaluator weaknesses. Chakrabarty et al. (2024b) shows how LLMs across different model families share common writing idiosyncrasies and how automatically editing these idiosyncrasies improves alignment, based on a behavioral study with 12 writers.", + "bbox": [ + 169, + 103, + 826, + 311 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Unlike prior work that has focused either on detecting/addressing issues in AI writing our work introduces Writing Quality Reward Models (WQRMs) trained on expert edits that outperform state-of-the-art LLMs on a Writing Quality benchmark.", + "bbox": [ + 169, + 318, + 823, + 362 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3 Writing Quality Reward Models", + "text_level": 1, + "bbox": [ + 171, + 380, + 493, + 398 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/3ec93e4d1e723a8af4314b35f784d413ae1a9eefe64cfe4cb04e3f8df32e3b73.jpg", + "image_caption": [ + "Figure 2: Transforming LAMP annotations into classification and regression data points used during fine-tuning of WQRM models." + ], + "image_footnote": [], + "bbox": [ + 173, + 400, + 496, + 554 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We rely on the LAMP (Language model Authored, Manually Polished) corpus from Chakrabarty et al. (2024b) to train reward models. As illustrated in Figure 2, each sample in LAMP consists of a writing instruction and two paragraphs that match this instruction. 
The paragraphs in LAMP range from 150 to 400 words, and span across fiction and non-fiction. Table 4 in Appendix A.1 shows a sample from LAMP, highlighting the edits implemented by an expert to improve writing quality. We use three methods to transform LAMP samples into training and validation data points for our models: pairwise (P), scalar (R), and combined (PR). With the P method, each data point presents two paragraphs as input (1 and 2) and requires a binary classification output", + "bbox": [ + 509, + 381, + 826, + 632 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "indicating which paragraph has higher writing quality (i.e., the output is 1 or 2). Each LAMP sample is duplicated into two P data points by considering both paragraph orders (AI-generated, Expert-Edited $\\rightarrow$ 2) and (Expert-Edited, AI-generated $\\rightarrow$ 1). With the R method, each data point takes a single paragraph as input and outputs a regression value predicting the quality score of that paragraph. Since each LAMP sample contains two paragraphs (before and after edit), it generates two R data points. The PR method combines both approaches, yielding four data points per LAMP sample (two from P and two from R). There are a total of 1,282 samples in LAMP, and we follow the author's split divisions of 1,000 training, 67 validation, and 215 test samples. Applying the data transformation described above, the P, R, and PR variants of the training data we obtain consist of 2,000, 2,000, and 4,000 training data points, respectively. For our experiments, we trained both generative LLMs (Llama3.1 (Dubey et al., 2024)) and encoder-only models (ModernBert (Warner et al., 2024)).", + "bbox": [ + 169, + 631, + 826, + 813 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Encoder-Only WQRM We follow the standard approach introduced in the original BERT paper (Devlin et al., 2019) to add and finetune two task-specific heads to a ModernBERT-Large model (Warner et al., 2024). 
The input data points contain either one paragraph (for R data points) or two paragraphs (for P data points), which are encoded jointly with a pre-defined separator token when needed. For each paragraph, we compute a \"paragraph vector\" by pooling the last layer's activations across all tokens in that paragraph. These paragraph vectors serve as input to either a regression (R) or classification (P) head. The", + "bbox": [ + 169, + 825, + 826, + 925 + ], + "page_idx": 3 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 493, + 948, + 504, + 959 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "regression head transforms the vector through a learned linear projection from the model's inner dimension to a scalar, followed by a scaled sigmoid to align with the 1-10 score range. The classification head is aparametric, using a cosine similarity operation between the two paragraph vectors. We use mean-squared error loss for R data points and cross entropy for P data points. Following convention for encoder-only models, we finetune the entire model's weights (Devlin et al., 2019). We selected ModernBERT-Large, the largest available model, for our experiments. We fine-tuned three variants: MBERT-WQRM-P, MBERT-WQRM-R, and MBERT-WQRM-PR, each on their corresponding data variants. Hyperparameters, including learning rate and number of epochs, were optimized by minimizing validation loss. PR models can be used in either P- or R-mode at test-time. 
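To make the encoder heads concrete, here is a minimal numeric sketch of the paragraph-vector pooling, the regression head (scaled sigmoid onto the 1-10 range), and the parameter-free cosine-similarity classification head. The mean-pooling choice, dimensions, and function names are our assumptions for illustration, not the authors' implementation.

```python
import math

def mean_pool(token_vectors):
    """Paragraph vector: average the last layer's activations over all tokens."""
    dim = len(token_vectors[0])
    return [sum(v[i] for v in token_vectors) / len(token_vectors) for i in range(dim)]

def regression_head(vec, weights, bias):
    """R-mode: learned linear projection to a scalar, then a sigmoid scaled to 1-10."""
    logit = sum(w * x for w, x in zip(weights, vec)) + bias
    return 1.0 + 9.0 / (1.0 + math.exp(-logit))  # maps any real logit into (1, 10)

def classification_head(vec_a, vec_b):
    """P-mode: parameter-free cosine similarity between the two paragraph vectors."""
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm_a = math.sqrt(sum(a * a for a in vec_a))
    norm_b = math.sqrt(sum(b * b for b in vec_b))
    return dot / (norm_a * norm_b)
```

With a zero logit, the regression head sits at the midpoint of the scaled range (5.5), which is one easy way to check the sigmoid scaling.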
Initial evaluation indicated that PR models achieve higher performance in R-mode, and as such we used all PR models in R-mode by default during evaluation.", + "bbox": [ + 169, + 103, + 826, + 271 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Generative WQRM We finetune generative transformer architectures by converting classification and regression tasks to sequence-to-sequence problems using JSON output format (Table 5). We employ QLora (Dettmers et al., 2023) parameter-efficient tuning with FSDP (Zhao et al., 2023) and cross-entropy loss. Generative methods can produce natural-language rationales alongside predictions for interpretability. Wiegrefe et al. (2020) demonstrated label-rationale association as essential for response faithfulness, while (Ludan et al., 2023; Hase & Bansal, 2021) argued for incorporating explanations in model input/output to improve robustness against spurious cues. Since LAMP lacks expert rationales, we augment it with LLM-generated silver rationales. We collected five examples from professional writers showing either paragraph strength contrasts (P-style) or holistic critiques/praise (R-style), instructing them to cite specific excerpts. These expert rationales serve as demonstrations for Claude3.5 Sonnet3 to generate rationales (examples in Table 6, Appendix A.3).", + "bbox": [ + 169, + 273, + 826, + 444 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The rationale augmentation is then used in two variants, either providing the rationales on the input $(\\mathrm{IR}\\rightarrow \\mathrm{O})$ , or requiring the generative model to produce the rationale as part of its output $(\\mathrm{I}\\rightarrow \\mathrm{RO})$ . We note that rationales are not available at test-time, and are only included during training as an augmentation technique. 
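The P / R / PR data construction from Figure 2 can be sketched as follows; the field names and the per-paragraph quality scores are illustrative assumptions, but the counts match the text (two P points, two R points, four PR points per LAMP sample).

```python
def to_data_points(sample, method="PR"):
    """Turn one LAMP sample (AI draft + expert edit) into training data points."""
    p_points = [
        # Both paragraph orders are kept so the model cannot learn a position bias.
        {"input": (sample["instruction"], sample["ai_draft"], sample["expert_edit"]), "label": 2},
        {"input": (sample["instruction"], sample["expert_edit"], sample["ai_draft"]), "label": 1},
    ]
    r_points = [
        # One regression point per paragraph: predict its quality score.
        {"input": (sample["instruction"], sample["ai_draft"]), "score": sample["ai_score"]},
        {"input": (sample["instruction"], sample["expert_edit"]), "score": sample["expert_score"]},
    ]
    if method == "P":
        return p_points
    if method == "R":
        return r_points
    return p_points + r_points  # PR: four data points per LAMP sample
```

Applied to the 1,000 LAMP training samples, this yields the 2,000 / 2,000 / 4,000 training points reported for the P, R, and PR variants.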
We finetune a total of seven variants, all based on the Llama 3.1 70B model: Llama-WQRM-P, Llama-WQRM-R, Llama-WQRM-PR, Llama-WQRM-P-IR $\rightarrow \mathrm{O}$ and Llama-WQRM-P-I $\rightarrow \mathrm{RO}$ , Llama-WQRM-PR-IR $\rightarrow \mathrm{O}$ and Llama-WQRM-PR-I $\rightarrow \mathrm{RO}$ , based on different versions of the training data, and tune hyperparameters by minimizing validation loss.", + "bbox": [ + 169, + 448, + 826, + 561 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4 The Writing Quality Benchmark", + "text_level": 1, + "bbox": [ + 169, + 571, + 493, + 589 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/829672b917337c9a73eb40af91f6ec69e742a115a2d9e839b5749586c9021915.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Dataset | Pair Origin | Annotator | Len | N
Art or Artifice | AI-AI / AI-Human | Expert | 1.5-3k | 144
LAMP-test | AI-AI / AI-Human | Expert | 200-400 | 1,206
Style Mimic | Human-Human | Expert | 200-400 | 300
Synth. Mirror | AI-Human | Expert | 200-400 | 1,120
LM Arena | AI-AI | Crowd | 200-2.5k | 1,959
", + "bbox": [ + 173, + 604, + 498, + 686 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Table 1: Writing Quality benchmark composition. Pair Origin: evaluated pairs are AI-generated (♂) or human-written (♀); Len: #words in evaluated responses; N: total evaluation pairs contributed to the benchmark.", + "bbox": [ + 169, + 696, + 503, + 772 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We create the first benchmark centered on the task of writing quality assessment by collecting five relevant datasets and standardizing their data formats into a pairwise preference task. The task in the benchmark consists of a writing instruction and two writing responses, with a binary label indicating which of the two responses has higher writing quality. Table 1 lists the five datasets we selected for the benchmark, along with key properties of each dataset that lead to a comprehensive benchmark for writing quality. We include three datasets that involve AI-AI comparisons (Art or Artifice (Chakrabarty et al., 2024a), LAMP-test", + "bbox": [ + 509, + 587, + 826, + 796 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "(Chakrabarty et al., 2024b), and LM Arena (Zheng et al., 2023)), three that involve AI-Human comparisons (Art or Artifice, LAMP-test, and Synthetic Mirror), and one that involves Human-Human comparisons (Style Mimic) (Anonymous, 2025). This diversity ensures that models that perform well on the benchmark can judge writing quality regardless of whether the response was LLM generated or human-written.", + "bbox": [ + 169, + 795, + 823, + 867 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To assess writing quality prior work has argued for evaluation by professionals (ones with writing experience). 
Nevertheless, some writing quality preference datasets are based on", + "bbox": [ + 169, + 872, + 823, + 902 + ], + "page_idx": 4 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 4 + }, + { + "type": "page_footnote", + "text": "3Considered a top-performing model for writing tasks at the time of experiments.", + "bbox": [ + 189, + 909, + 723, + 925 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 493, + 946, + 504, + 959 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "crowd-sourced judgments. We include four datasets based on expert judgments and one dataset based on crowd-sourced annotation (LM Arena) to represent both perspectives in the benchmark. Finally, we selected two datasets with long responses (Art or Artifice, LM Arena) and three with shorter responses ranging from 200-400 words, ensuring that models that perform well on the benchmark are capable of judging writing quality irrespective of length. Appendix A.4 details the procedure we followed to extract and standardize each dataset. Appendix A.5 provides an analysis we conducted on the relative difficulty of each dataset in the benchmark, finding that the five selected datasets provide a breadth of coverage in terms of difficulty.", + "bbox": [ + 169, + 103, + 826, + 229 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/3d68dc97a876f3af5134e0b6f154411a088aeaaa45f20a7956ed0e3b8a5c7524.jpg", + "table_caption": [ + "Writing Quality Benchmark" + ], + "table_footnote": [], + "table_body": "
Model | Synthetic Mirror | Art or Artifice | LAMP | Style Mimic | LM Arena | Overall (↑)
(pair origin) | AI-Human | AI-AI / AI-Human | AI-AI / AI-Human | Human-Human | AI-AI | All
MBERT-WQRM-PR | 99.8 | 80.6 | 72.6 | 67.3 | 51.0 | 74.3
MBERT-WQRM-R | 100.0 | 80.6 | 76.1 | 59.3 | 51.0 | 73.4
MBERT-WQRM-P | 99.5 | 54.2 | 71.2 | 67.0 | 46.8 | 67.7
Llama3.1 - P - IR → O | 100.0 | 80.5 | 74.9 | 43.0 | 52.8 | 70.2
Llama3.1 - PR - IR → O | 99.6 | 69.4 | 73.7 | 54.3 | 50.1 | 69.4
Llama3.1 - PR - I → RO | 99.1 | 76.3 | 71.7 | 42.6 | 55.2 | 68.9
Llama3.1 - P - I → RO | 99.9 | 75.1 | 74.1 | 38.6 | 49.1 | 67.3
Llama3.1 (70b) - PR | 94.8 | 52.0 | 71.3 | 40.6 | 44.3 | 60.6
Llama3.1 (70b) - P | 88.1 | 45.1 | 71.7 | 35.6 | 47.7 | 57.6
Llama3.1 (70b) - R | 44.8 | 50.0 | 40.3 | 50.0 | 54.3 | 47.9
Pangram | 100.0 | 72.6 | 56.5 | 47.3 | 48.4 | 65.0
O3 | 67.7 | 85.4 | 41.4 | 67.5 | 59.6 | 64.3
Skywork-8B-v0.2 | 90.3 | 68.1 | 54.2 | 34.0 | 55.8 | 60.5
GPT-4o (5FS) | 39.5 | 68.8 | 40.3 | 67.3 | 55.5 | 54.3
O1 | 25.8 | 67.4 | 39.8 | 68.7 | 56.7 | 51.7
DeepSeek-r1 | 31.5 | 54.9 | 39.2 | 47.3 | 57.0 | 46.0
GPT-4o | 7.5 | 56.2 | 37.8 | 47.7 | 55.4 | 40.9
", + "bbox": [ + 236, + 244, + 756, + 503 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 2: Writing Quality Benchmark results. We evaluate zero-shot and few-shot LLMs, generic reward models, AI-detection models, and our fine-tuned models.", + "bbox": [ + 169, + 513, + 826, + 546 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.1 Experimental Results on WQ", + "text_level": 1, + "bbox": [ + 171, + 556, + 429, + 573 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Our experiments on the WQ benchmark include four classes of models. First, Zero-Shot (ZS) and Few-Shot (FS) methods with top-performing instruction-tuned LLMs. We included both non-reasoning (GPT-4o) and reasoning models (Deepseek-R1, O1). Second, a top-performing generic reward model - SkyWork-8b-v0.2 - based on results on the RewardBench leaderboard (Lambert et al., 2024). Third, we include the Pangram AI-detector $^4$ , accessed through API. Finally, the trained WQRM models in generative and encoder-only settings as described in Section 3. Models that can produce pairwise judgments (such as SkyWork or WQRM-P models) were used as is, but for models that produce scalar rewards (WQRM-R, Pangram), a scalar reward was computed for each response, and inequality was applied to emit a pairwise preference. Scalar rewards can theoretically lead to a tie (a score difference of less than an epsilon like 0.001), but we observe few of these in practice (less than $0.1\\%$ of pairs), and resolve those randomly.", + "bbox": [ + 169, + 583, + 826, + 753 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Experimental results are summarized in Table 2. First, we find that all the LLMs used in zero-shot settings perform below or a few percentage points above a random baseline of $50\\%$ . The performance is particularly low on portions of WQ that involve AI-human preference pairs. 
This confirms prior findings that LLMs used in LLM-as-a-judge settings tend to prefer AI-generation over human-writing (Panickssery et al., 2024). The O1 and R1 reasoning models do not significantly outperform their non-reasoning counterparts, indicating that out-of-the-box COT-style reasoning, useful for math or coding tasks, does not improve writing quality assessment. O3 shows improvement on Synthetic Mirror and Art or Artifice, showing some promise. Finally, adding five few-shot examples to GPT-4o does help improve performance from 40.9 to 54.3; however, further experiments with additional", + "bbox": [ + 169, + 758, + 826, + 902 + ], + "page_idx": 5 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 5 + }, + { + "type": "page_footnote", + "text": "4https://www.pangram.com/dashboard?type=text", + "bbox": [ + 189, + 909, + 517, + 922 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 493, + 948, + 504, + 959 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "in-context examples did not lead to further gains, confirming that few-shot examples in the instruction are not sufficient to achieve strong performance on WQ.", + "bbox": [ + 169, + 103, + 823, + 133 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "The generic reward model – Skywork-8b-v0.2 – achieves an overall accuracy of 60.5, with strong performance on Synthetic Mirror and Art or Artifice. 
Though better than random, the overall performance is much lower than the $93\%$ performance the model achieves on RewardBench, indicating that reward models geared for instruction-following evaluation are not effective at writing quality assessment out-of-the-box.", + "bbox": [ + 169, + 138, + 826, + 209 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "The Pangram AI detection system achieves a total performance of $65.0\%$ , the top performance for untrained models. Pangram achieves near-perfect performance on Synthetic Mirror and the AI-Human pairs of Art or Artifice. On samples that do not involve distinguishing between AI and human text, Pangram achieves near-random performance. In other words, AI-detection tools only correlate with writing quality assessment when an AI-generated text is judged to be worse than human-written text.", + "bbox": [ + 169, + 215, + 826, + 301 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Finally, the trained WQRM models achieve top performance on the benchmark. The Llama-based models achieve their strongest performance in the $\mathrm{IR} \rightarrow \mathrm{O}$ settings, confirming that augmenting the training data with rationales is beneficial, with models that can generate rationales alongside their prediction. The ModernBERT-based models achieve the highest overall accuracy of $74.3\%$ , with the PR variant outperforming the P and R models, indicating that pairwise and reward-based training can be complementary. While it is surprising to see a smaller model outperform Llama3.1-70B, this could be due to PEFT or the way the loss function is optimized. Future work can focus on bridging this gap.", + "bbox": [ + 169, + 306, + 826, + 419 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We observe that generative WQRM models perform best in P-mode, whereas encoder models perform best in R-mode. We offer a hypothesis for this reversal, related to the choice of loss. 
The generative models (Llama) are trained with a sequence-to-sequence loss, whereas the encoder-only models (MBERT) are trained with custom losses (pairwise classification for P, mean-squared error for R). In other words, Llama training on the reward-based data is closer to 10-way classification than to actual score regression, whereas the MBERT training makes better use of the reward-based data. This leads the MBERT-R models to outperform MBERT-P models, whereas the reverse is true for the Llama models, as they are not able to properly take advantage of the R-based data.", + "bbox": [ + 169, + 424, + 828, + 551 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Looking at performance on individual datasets, Synthetic Mirror is the easiest dataset, with eight models achieving near-perfect performance. Some models achieve $80\%+$ performance on Art or Artifice, indicating that long-context evaluation is challenging but achievable. Style Mimic and LM Arena are the most challenging in terms of accuracy. Style Mimic is likely challenging as it is the only dataset whose comparisons do not involve AI-generated text, but rather two relatively high-quality human-written candidates. LM Arena is challenging for all systems, with top performance at $57\%$ by Deepseek-R1. This low performance could be due to the crowd-sourced nature of LM Arena, with the dataset representing much broader and potentially noisier judgments. Though our trained WQRM models outperform baselines by almost $10\%$ overall, there remains wide room for improvement: writing quality assessment remains an open challenge to the community. 
Additional analysis in upcoming Sections refers to the top-performing model - MBERT-WQRM-PR - simply as WQRM.", + "bbox": [ + 169, + 556, + 828, + 739 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5 Editing Pipeline with Test-Time Compute", + "text_level": 1, + "bbox": [ + 171, + 748, + 578, + 767 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "To better understand the practical value of the WQRM model, we integrate it into a text-editing pipeline to produce LLM-generated candidates of higher-quality according to WQRM scores. We first introduce the editing pipeline and candidate generation procedure, and then describe the large-scale preference annotation we conducted with professional writers to validate WQRM as part of an editing pipeline.", + "bbox": [ + 169, + 780, + 826, + 851 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.1 Generating edits via Supervised Finetuning", + "text_level": 1, + "bbox": [ + 171, + 854, + 540, + 871 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Prior work from Chakrabarty et al. (2024b) shows experimentally that LLMs' text idiosyncrasies (cliches, redundancy, lack of subtext, etc.) can be mitigated through self-editing in an in-context setup. Borrowing motivation from them we teach LLMs how to improve", + "bbox": [ + 169, + 881, + 828, + 926 + ], + "page_idx": 6 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 491, + 946, + 504, + 959 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "their response via edits. Figure 6 illustrates the three components of the editing pipeline. Given a first draft response to an instruction from any given LLM, the first step consists of identifying and listing idiosyncrasies: spans in the first draft that can be rephrased to improve overall writing quality. 
For each identified idiosyncrasy, a second stage consists of rewriting the problematic span. This is framed as an executable edit (Laban et al., 2023), where each edit consists of replacing an original string in a draft with an improved version. The third step simply executes all edits (by applying a series of string replace operations) to obtain the final edited draft. While Chakrabarty et al. (2024b) implemented this through prompt-chaining (Wu et al., 2022) with few-shot examples, we improved efficiency by supervised fine-tuning of GPT-4o and Llama3.1 70B based on the entire LAMP training set. The training input consists of the first draft alongside the entire edit interaction trace (detect, rewrite, execute) in a step-by-step chain of thought prompt, and the output is the edited paragraph. See Appendix A.7 for an example COT prompt.", + "bbox": [ + 169, + 103, + 826, + 287 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5.2 Selecting edited response by leveraging Test-Time Compute", + "text_level": 1, + "bbox": [ + 169, + 290, + 661, + 306 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Recent work from Snell et al. (2024) shows that test-time compute can be scaled optimally by using a reward model to search over the space of solutions. This approach typically involves generating multiple candidate responses and using a verifier to select an optimal response (Cobbe et al., 2021). The most popular technique to increase test-time compute is Best-of-N sampling, also known as Rejection Sampling, in which N candidates are generated independently. The reward model is then used to score each candidate, and the top-scoring candidate is selected. While test-time scaling is effective for reasoning tasks, our work aims to measure whether it is a practical strategy to improve human-AI alignment in subjective tasks such as writing. 
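The Best-of-N selection just described reduces to a short loop; `generate_edit` and `wqrm_score` are placeholders standing in for the SFT editing model and the reward model, not real APIs.

```python
def best_of_n(draft, generate_edit, wqrm_score, n=20):
    """Draw n edited candidates independently; keep the one the reward model scores highest."""
    candidates = [generate_edit(draft) for _ in range(n)]
    return max(candidates, key=wqrm_score)
```

Increasing `n` spends more inference-time compute in exchange for a (diminishing) chance of finding a higher-reward candidate.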
Next, we describe the validation study with experts to measure how well calibrated our WQRMs are to human judgment and whether additional test-time computation leads to meaningful improvements in AI writing quality.", + "bbox": [ + 169, + 306, + 826, + 464 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6 How well calibrated are our reward models?", + "text_level": 1, + "bbox": [ + 169, + 470, + 606, + 486 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We generated 100 draft responses (50 GPT-4o, 50 Llama3.1 70B) based on 90 writing instructions spanning 3 domains: literary fiction, non-fiction, and product marketing. For literary fiction and non-fiction, we create the instructions through instruction back-translation (Li et al., 2023) conditioned on expert-written paragraphs in Anonymous (2025) and news articles in the data from Russell et al. (2025). Marketing writing instructions were based on products recommended in WireCutter articles across the Home, Kitchen and Tech sections. The right portion of Figure 1 summarizes the process we follow to leverage test-time compute. Specifically, we obtain a first draft from an LLM (GPT4o or Llama3.1 70B), followed by drawing $N = 20$ candidate edited responses from the respective SFT model (Section 5.1)6, and score each candidate with the WQRM model. We filter out any candidate that scores lower than the first draft, and then form response triplets by selecting the first draft, a randomly-selected edited response (random edit), and the Best-of-N candidate response according to WQRM (Best Edit) (see example triplet in Table 9). We recruited 9 professional writers through mailing lists from top MFA programs in the US. They were asked to rank three responses based on their overall quality (see Figure 8 for interface). Each response triplet was annotated by three experts, which we aggregated into a majority rank. 
Participants completed annotation in batches of 10 triplets at a time, and were paid $100 per batch.", + "bbox": [ + 169, + 491, + 826, + 734 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6.1 Study Findings", + "text_level": 1, + "bbox": [ + 171, + 738, + 328, + 753 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Figure 3 summarizes findings from the expert annotation. In Figure 3a, we plot the distribution of rankings across all triplets. Best Edit candidates were most preferred overall with an average rank of 1.58, followed by random edit (2.09) and first draft (2.26). The breakdown of rankings across domains (fiction, non-fiction, marketing) or LLM (GPT-4o vs. Llama 3.1) is presented in Appendix A.8. In short, Best Edit achieves the top rank in all conditions, confirming the generalization of WQRM scores across conditions.", + "bbox": [ + 169, + 763, + 826, + 849 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "If the reward model is well-calibrated, the WQRM score gap between responses should indicate their qualitative difference. 
For example, responses scoring 4 and 6 should have a larger", + "bbox": [ + 169, + 854, + 826, + 886 + ], + "page_idx": 7 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 7 + }, + { + "type": "page_footnote", + "text": "5https://www.nytimes.com/wirecutter/", + "bbox": [ + 189, + 895, + 464, + 910 + ], + "page_idx": 7 + }, + { + "type": "page_footnote", + "text": "6If first draft is from GPT4o we use GPT4o SFT model", + "bbox": [ + 192, + 910, + 544, + 922 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 493, + 948, + 504, + 959 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/1db88ddb7c3d6e64e6370de4840630adc086036f36b58562c5aefb632bacc535.jpg", + "image_caption": [ + "(a) Expert Ranking Distribution" + ], + "image_footnote": [], + "bbox": [ + 181, + 104, + 442, + 239 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/70c90c2fb2fa11ab7926ba0d130319324863f723979571d891269e6977f87c7f.jpg", + "image_caption": [ + "(b) Gap vs. 
Agreement" + ], + "image_footnote": [], + "bbox": [ + 454, + 103, + 632, + 239 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/1bfddae7f9254bd920a3d04ad7dfbe9b3a91c4130fc9a7f98fbab4c9ad9d0f20.jpg", + "image_caption": [ + "(c) Sensitivity Analysis" + ], + "image_footnote": [], + "bbox": [ + 643, + 104, + 820, + 239 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/16c263323e5750f59850badc6d20ea1146d6c779a721a6434008265c6a6a5153.jpg", + "image_caption": [ + "Figure 3: Results and analysis of WQRM based: (a) distribution of preference based on 300 expert triplet rankings, (b) calibration between gap in WQRM scores and matching expert preference, and (c) applying experts edits gradually to a draft leads to gradual reward gains.", + "(a) Less content detail in writing prompt", + "Figure 4: Writing quality analysis of human-written and LLM-generated texts according to WQRM on (a) less and (b) more content detail in the writing prompt. Prompts with less content detail average 30 words, whereas prompts with more content detail average 180." + ], + "image_footnote": [], + "bbox": [ + 178, + 345, + 495, + 556 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/9479f61330a0b135d11098f13649f4ebd8c3ea141f85b130c54586eaff7c65f7.jpg", + "image_caption": [ + "(b) More content detail in writing prompt" + ], + "image_footnote": [], + "bbox": [ + 526, + 347, + 821, + 556 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "quality gap than those scoring 4 and 4.5. To inspect WQRM calibration, we computed the WQRM gap between all annotated response pairs and plotted it against expert annotation agreement. As shown in Figure 3b, WQRM gap positively correlates with expert agreement: when responses differ by $\\leq 0.5$ points, individual experts prefer the higher-scoring response only $55\\%$ of the time. When the gap exceeds 3.0, this increases to $80\\%$ . 
Agreement with majority rank based on three expert annotations (green line) shows even stronger positive correlation. In short, we find evidence that WQRM is well-calibrated: a wider gap in scores between two responses is evidence that an expert (or group of experts) would be more likely to prefer the higher-scoring response over the lower-scoring response.", + "bbox": [ + 169, + 652, + 826, + 780 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Besides calibration, we analyze the sensitivity of the WQRM model to minor edits and their impact on writing quality. The LAMP dataset consists of drafts that are edited by expert writers to improve writing, with samples comprising eight edits per passage on average. We implement a gradual version of the LAMP-test set, where each expert edit is reversed, and we execute them one at a time, computing the WQRM score at each intermediate step. Results from the gradual LAMP-test are summarized in Figure 3c: each time an additional edit is implemented, the median WQRM score increases by 0.2, even though WQRM was not trained on intermediate responses and only saw samples where no edit or all edits have been applied. In summary, we find evidence that minor edits to a response will lead to small but significant changes in WQRM scores, indicative of a fine sensitivity of the reward model.", + "bbox": [ + 169, + 785, + 826, + 936 + ], + "page_idx": 8 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 493, + 946, + 503, + 958 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "7 How does content affect writing quality?", + "text_level": 1, + "bbox": [ + 169, + 101, + 571, + 119 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Effectively judging writing quality impacts both understanding and improving LLM writing. 
Writing quality, however, is closely tied to content. It is known that LLMs struggle with novel ideas (content planning), making their writing appear trite. Even with detailed original content, they struggle to maintain good writing standards (avoiding clichés and purple prose, revealing subtext). To understand how content affects writing quality, we analyzed writing from several LLMs with and without detailed content. We used 50 writing instructions from Style Mimic data, creating two variants: a 30-word prompt with less detail (e.g., \"A family Christmas unfolds through emotional reflections on a father's new family, a daughter's excuse to stay behind, and the complex dynamics of grief and blended identities.\") and a 150-200 word detailed prompt (Table 10 in Appendix). Style Mimic provides an original excerpt from an award-winning author and an MFA student's attempt to mimic that style for each prompt. Each sample includes the detailed content used for Figure 4b.", + "bbox": [ + 169, + 132, + 826, + 313 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Since WQRM was only trained on samples from LAMP, which consists of AI-generated paragraphs edited by MFA students, we retrained a better-calibrated reward model with a few fully human-written, high-quality texts (see Appendix A.11 for more details). Figure 4a shows writing quality scores from the WQRM model when prompts lack detailed content. Award-winning authors achieve a median score of 8.9, while LLMs score 4.8-6.6 with much higher variance. Despite WQRM being trained only on AI-generated paragraphs edited by MFA students and relatively few human-written samples, it scored 50 author-written texts higher than all LLMs, demonstrating model generalization. GPT-4.5, though considered the best writing LLM, showed no quality advantage.
The significant gap between award-winning authors and LLMs shows that in the absence of original good-quality content, all LLMs are poor writers.", + "bbox": [ + 169, + 320, + 826, + 474 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Figure 4b shows the writing quality of several LLMs leveraging the new WQRM model when detailed content is provided in the writing prompt. In fact, the content detail is often $0.5\\mathrm{x}$ to $0.75\\mathrm{x}$ the word count of the paragraph to be written. Results with the detailed prompts provide additional insights. Though the variance remains high for all models, the more recent models (GPT-4.5, Claude 3.7-Sonnet, Gemini-2.5-pro) achieve improved writing quality given the more detailed prompts, achieving median scores of around 7.0. This should not be surprising, as the amount of detail provided in the writing prompt reduces the burden of originality and novelty on the LLM. What is particularly impressive is that paragraphs written by MFA students based on the same detailed content were rated significantly higher than all LLMs, with a median of 8.6. The gap between award-winning authors and MFA students is narrow here, although the distribution from MFA students shows higher variance. Our results highlight that even when provided with very detailed original content, LLMs are far behind trained writers.", + "bbox": [ + 169, + 479, + 826, + 662 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "In summary, the analysis reveals that current LLMs are not yet capable of reliably generating high-quality creative writing at the level of an MFA student or award-winning author, especially when not spoon-fed original content.
When provided with enough content detail in the prompt, the latest models show promise but remain unreliable.", + "bbox": [ + 169, + 667, + 828, + 726 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "8 Conclusion", + "text_level": 1, + "bbox": [ + 169, + 744, + 308, + 758 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "In this work, we introduced the Writing Quality benchmark (WQ) and Writing Quality Reward Models (WQRM) to address the critical challenge of evaluating and improving the quality of AI-generated text. Our models, trained on implicit preferences via edits, significantly outperform existing approaches, achieving $74\\%$ accuracy on the WQ benchmark and demonstrating strong generalization across diverse writing contexts, as confirmed by a validation study involving 9 professional writers. Future work can address alternative test-time computation, such as long chains-of-thought (CoTs), enabling strategies like backtracking and correction of idiosyncrasies for improving writing. While our approach improves AI-generated text by reducing idiosyncrasies, it is nowhere near expert-quality writing. However, we hope that our contributions can serve as a catalyst for further research in writing quality assessment and the development of AI writing systems that are more aligned with human preferences.", + "bbox": [ + 169, + 757, + 828, + 925 + ], + "page_idx": 9 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 173, + 102, + 274, + 117 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Barrett R Anderson, Josh Hemant Shah, and Max Kreminski.
Homogenization effects of large language models on human creative ideation. In Proceedings of the 16th Conference on Creativity & Cognition, pp. 413-425, 2024.", + "Anonymous. Literary voice reproduction study mfa writers vs. llms in authorial style. In Under Submission, 2025.", + "Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022.", + "Deborah Brandt. The rise of writing: Redefining mass literacy. Cambridge University Press, 2014.", + "Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, et al. Open problems and fundamental limitations of reinforcement learning from human feedback. arXiv preprint arXiv:2307.15217, 2023.", + "Tuhin Chakrabarty, Philippe Laban, Divyansh Agarwal, Smaranda Muresan, and Chien-Sheng Wu. Art or artifice? large language models and the false promise of creativity. In Proceedings of the CHI Conference on Human Factors in Computing Systems, CHI '24, New York, NY, USA, 2024a. Association for Computing Machinery. ISBN 9798400703300. doi: 10.1145/3613904.3642731. URL https://doi.org/10.1145/3613904.3642731.", + "Tuhin Chakrabarty, Philippe Laban, and Chien-Sheng Wu. Can ai writing be salvaged? mitigating idiosyncrasies and improving human-ai alignment in the writing process through edits. arXiv preprint arXiv:2409.14509, 2024b.", + "Tuhin Chakrabarty, Vishakh Padmakumar, Faeze Brahman, and Smaranda Muresan. Creativity support in the age of large language models: An empirical study involving professional writers. In Proceedings of the 16th Conference on Creativity & Cognition, C & C '24, pp. 132-155, New York, NY, USA, 2024c. Association for Computing Machinery. ISBN 9798400704857. doi: 10.1145/3635636.3656201. 
URL https://doi.org/10.1145/3635636.3656201.", + "Yinlam Chow, Guy Tennenholtz, Izzeddin Gur, Vincent Zhuang, Bo Dai, Sridhar Thiagarajan, Craig Boutilier, Rishabh Agarwal, Aviral Kumar, and Aleksandra Faust. Inference-aware fine-tuning for best-of-n sampling in large language models. arXiv preprint arXiv:2412.15287, 2024.", + "Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017.", + "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems, 2021. URL https://arxiv.org/abs/2110.14168, 9, 2021.", + "Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. Advances in neural information processing systems, 36:10088-10115, 2023.", + "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423/." 
+ ], + "bbox": [ + 174, + 127, + 825, + 922 + ], + "page_idx": 10 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 488, + 946, + 506, + 959 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.", + "Bradley Emi and Max Spero. Technical report on the pangram ai-generated text classifier. arXiv preprint arXiv:2402.14873, 2024.", + "Yang Gao, Dana Alon, and Donald Metzler. Impact of preference noise on the alignment performance of generative language models. arXiv preprint arXiv:2404.09824, 2024.", + "Katy Ilonka Gero, Vivian Liu, and Lydia Chilton. Sparks: Inspiration for science writing using language models. In Proceedings of the 2022 ACM Designing Interactive Systems Conference, pp. 1002-1019, 2022.", + "Sian Gooding, Lucia Lopez-Rivilla, and Edward Grefenstette. Writing as a testbed for open ended agents, 2025. URL https://arxiv.org/abs/2503.19711.", + "Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.", + "Kunal Handa, Alex Tamkin, Miles McCain, Saffron Huang, Esin Durmus, Sarah Heck, Jared Mueller, Jerry Hong, Stuart Ritchie, Tim Belonax, et al. Which economic tasks are performed with ai? evidence from millions of claude conversations.", + "Peter Hase and Mohit Bansal. When can models learn from explanations? a formal framework for understanding the roles of explanation data. 
arXiv preprint arXiv:2102.02201, 2021.", + "John R Hayes, Linda Flower, Karen A Schriver, James Stratman, Linda Carey, et al. Cognitive processes in revision. Advances in applied psycholinguistics, 2:176-240, 1987.", + "John Herrman. Is that ai? or does it just suck? New York Magazine, 2024a. URL https://nymag.com/intelligencer/article/is-that-ai-or-does-it-just-suck.html.", + "John Herrman. The internet's ai slop problem is only going to get worse. New York Magazine - Intelligencer, 2024b. URL https://nymag.com/intelligencer/article/ai-generated-content-online-slop-spam.html. Accessed: 2025-03-06.", + "Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron Courville, Alessandro Sordoni, and Rishabh Agarwal. V-star: Training verifiers for self-taught reasoners. arXiv preprint arXiv:2402.06457, 2024.", + "Daphne Ippolito, Ann Yuan, Andy Coenen, and Sehmon Burnam. Creative writing with an ai-powered writing assistant: Perspectives from professional writers. arXiv preprint arXiv:2211.05030, 2022.", + "Yixin Ji, Juntao Li, Hai Ye, Kaixin Wu, Jia Xu, Linjian Mo, and Min Zhang. Test-time computing: from system-1 thinking to system-2 thinking. arXiv preprint arXiv:2501.02497, 2025.", + "Kate Knibbs. Confessions of an ai clickbait kingpin. Wired, 2024a. URL https://www.wired.com/story/confessions-of-an-ai-clickbait-kingpin/. Accessed: 2025-03-07.", + "Kate Knibbs. Scammy ai-generated books are flooding amazon. Wired, 2024b. URL https:// www.wired.com/story/scammy-ai-generated-books-flooding-amazon/. Accessed: 2025- 03-07.", + "Kate Knibbs. Ai slop is flooding medium. Wired, 2024c. URL https://www.wired.com/story/ai-generated-medium-posts-content-moderation/. Accessed: 2025-03-06.", + "Kate Knibbs. Some of substack's biggest newsletters rely on ai writing tools. Wired, 2024d. URL https://www.wired.com/story/substacks-writers-use-ai-chatgpt/. Accessed: 2025-03-07." 
+ ], + "bbox": [ + 171, + 102, + 826, + 922 + ], + "page_idx": 11 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Dmitry Kobak, Rita González-Márquez, Emőke-Ágnes Horvát, and Jan Lause. Delving into chatgpt usage in academic writing through excess vocabulary. arXiv preprint arXiv:2406.07016, 2024.", + "Philippe Laban, Jesse Vig, Marti A Hearst, Caiming Xiong, and Chien-Sheng Wu. Beyond the chat: Executable and verifiable text-editing with llms. arXiv preprint arXiv:2309.15337, 2023.", + "Nathan Lambert and Roberto Calandra. The alignment ceiling: Objective mismatch in reinforcement learning from human feedback. arXiv preprint arXiv:2311.00168, 2023.", + "Nathan Lambert, Valentina Pyatkin, Jacob Morrison, LJ Miranda, Bill Yuchen Lin, Khyathi Chandu, Nouha Dziri, Sachin Kumar, Tom Zick, Yejin Choi, et al. Rewardbench: Evaluating reward models for language modeling. arXiv preprint arXiv:2403.13787, 2024.", + "Timothy Laquintano and Annette Vee. Ai and the everyday writer. PMLA, 139(3):527-532, 2024.", + "Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, et al. Rlaif vs. rlhf: Scaling reinforcement learning from human feedback with ai feedback. arXiv preprint arXiv:2309.00267, 2023.", + "Jinsook Lee, A. J. Alvero, Thorsten Joachims, and René F. Kizilcec. Poor alignment and steerability of large language models: Evidence from college admission essays. 2025. URL https://api.semanticscholar.org/CorpusID:277321621.", + "Mina Lee, Katy Ilonka Gero, John Joon Young Chung, Simon Buckingham Shum, Vipul Raheja, Hua Shen, Subhashini Venugopalan, Thiemo Wambsganss, David Zhou, Emad A Alghamdi, et al.
A design space for intelligent and interactive writing assistants. In Proceedings of the CHI Conference on Human Factors in Computing Systems, pp. 1-35, 2024.", + "Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Omer Levy, Luke Zettlemoyer, Jason Weston, and Mike Lewis. Self-alignment with instruction backtranslation. arXiv preprint arXiv:2308.06259, 2023.", + "Zhuoyan Li, Chen Liang, Jing Peng, and Ming Yin. The value, benefits, and concerns of generative ai-powered assistance in writing. In Proceedings of the CHI Conference on Human Factors in Computing Systems, CHI '24, New York, NY, USA, 2024. Association for Computing Machinery. ISBN 9798400703300. doi: 10.1145/3613904.3642625. URL https://doi.org/10.1145/3613904.3642625.", + "Weixin Liang, Yaohui Zhang, Zhengxuan Wu, Haley Lepp, Wenlong Ji, Xuandong Zhao, Hancheng Cao, Sheng Liu, Siyu He, Zhi Huang, et al. Mapping the increasing use of llms in scientific papers. arXiv preprint arXiv:2404.01268, 2024.", + "Weixin Liang, Yaohui Zhang, Mihai Codreanu, Jiayu Wang, Hancheng Cao, and James Zou. The widespread adoption of large language model-assisted writing across society. arXiv preprint arXiv:2502.09747, 2025.", + "Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. In The Twelfth International Conference on Learning Representations, 2023.", + "Chris Yuhao Liu, Liang Zeng, Jiacai Liu, Rui Yan, Jujie He, Chaojie Wang, Shuicheng Yan, Yang Liu, and Yahui Zhou. Skywork-reward: Bag of tricks for reward modeling in llms. arXiv preprint arXiv:2410.18451, 2024.", + "Zhexiong Liu, Diane Litman, Elaine Wang, Tianwen Li, Mason Gobat, Lindsay Clare Matsumura, and Richard Correnti. erevise+ rf: A writing evaluation system for assessing student essay revisions and providing formative feedback. arXiv preprint arXiv:2501.00715, 2025." 
+ ], + "bbox": [ + 171, + 102, + 828, + 922 + ], + "page_idx": 12 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 488, + 946, + 506, + 959 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Josh Magnus Ludan, Yixuan Meng, Tai Nguyen, Saurabh Shah, Qing Lyu, Marianna Apidianaki, and Chris Callison-Burch. Explanation-based finetuning makes models more robust to spurious cues. arXiv preprint arXiv:2305.04990, 2023.", + "Guillermo Marco, Julio Gonzalo, Ramón del Castillo, and María Teresa Mateo Girona. Pron vs prompt: Can large language models already challenge a world-class fiction author at creative text writing? arXiv preprint arXiv:2407.01119, 2024.", + "Piotr Mirowski, Kory W. Mathewson, Jaylen Pittman, and Richard Evans. Co-writing screenplays and theatre scripts with language models: Evaluation by industry professionals. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI '23, New York, NY, USA, 2023. Association for Computing Machinery. ISBN 9781450394215. doi: 10.1145/3544548.3581225. URL https://doi.org/10.1145/3544548.3581225.", + "Piotr Mirowski, Juliette Love, Kory Mathewson, and Shakir Mohamed. A robot walks into a bar: Can language models serve as creativity support tools for comedy? an evaluation of llms' humour alignment with comedians. In The 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 1622-1636, 2024.", + "OpenAI. Introducing openai o1 preview. https://openai.com/index/introducing-openai-o1-preview/, 2024. Accessed: 2025-03-20.", + "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback.
Advances in Neural Information Processing Systems, 35:27730-27744, 2022.", + "Vishakh Padmakumar and He He. Does writing with language models reduce content diversity? arXiv preprint arXiv:2309.05196, 2023.", + "Jane Pan, He He, Samuel R Bowman, and Shi Feng. Spontaneous reward hacking in iterative self-refinement. arXiv preprint arXiv:2407.04549, 2024.", + "Arjun Panickssery, Samuel Bowman, and Shi Feng. Llm evaluators recognize and favor their own generations. Advances in Neural Information Processing Systems, 37:68772-68802, 2024.", + "Jenna Russell, Marzena Karpinska, and Mohit Iyyer. People who frequently use chatgpt for writing tasks are accurate and robust detectors of ai-generated text. arXiv preprint arXiv:2501.15654, 2025.", + "Chantal Shaib, Yanai Elazar, Junyi Jessy Li, and Byron C Wallace. Detection and measurement of syntactic templates in generated text. arXiv preprint arXiv:2407.00211, 2024.", + "Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024.", + "Tianchun Wang, Yanzhou Chen, Zichuan Liu, Zhanwen Chen, Haifeng Chen, Xiang Zhang, and Wei Cheng. Humanizing the machine: Proxy attacks to mislead llm detectors. arXiv preprint arXiv:2410.19230, 2024.", + "Benjamin Warner, Antoine Chaffin, Benjamin Clavié, Orion Weller, Oskar Hallström, Said Taghadouini, Alexis Gallagher, Raja Biswas, Faisal Ladhak, Tom Aarsen, et al. Smarter, better, faster, longer: A modern bidirectional encoder for fast, memory efficient, and long context finetuning and inference. arXiv preprint arXiv:2412.13663, 2024.", + "Sarah Wiegrefe, Ana Marasovic, and Noah A Smith. Measuring association between labels and free-text rationales. arXiv preprint arXiv:2010.12762, 2020.", + "Tongshuang Wu, Michael Terry, and Carrie Jun Cai. Ai chains: Transparent and controllable human-ai interaction by chaining large language model prompts. 
In Proceedings of the 2022 CHI conference on human factors in computing systems, pp. 1-22, 2022." + ], + "bbox": [ + 171, + 102, + 828, + 925 + ], + "page_idx": 13 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 13 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Yangzhen Wu, Zhiqing Sun, Shanda Li, Sean Welleck, and Yiming Yang. Inference scaling laws: An empirical analysis of compute-optimal inference for problem-solving with language models. arXiv preprint arXiv:2408.00724, 2024.", + "Ann Yuan, Andy Coenen, Emily Reif, and Daphne Ippolito. Wordcraft: story writing with large language models. In Proceedings of the 27th International Conference on Intelligent User Interfaces, pp. 841-852, 2022.", + "Yanli Zhao, Andrew Gu, Rohan Varma, Liang Luo, Chien-Chin Huang, Min Xu, Less Wright, Hamid Shojanazeri, Myle Ott, Sam Shleifer, et al. Pytorch fsdp: experiences on scaling fully sharded data parallel. arXiv preprint arXiv:2304.11277, 2023.", + "Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Tianle Li, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zhuohan Li, Zi Lin, Eric P Xing, et al. Lmsys-chat-1m: A large-scale real-world llm conversation dataset. arXiv preprint arXiv:2309.11998, 2023."
+ ], + "bbox": [ + 171, + 102, + 825, + 301 + ], + "page_idx": 14 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "A Appendix", + "text_level": 1, + "bbox": [ + 171, + 101, + 302, + 119 + ], + "page_idx": 15 + }, + { + "type": "table", + "img_path": "images/53481bb60dc76404125107849db74655f4babc96c76f0e025a231c030dd3d169.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Which is better quality writing?
A: One evening – it was our last day – in Tours, we had dinner in a brightly-lit restaurant where the walls were lined with mirrors, frequented by a sophisticated clientele. [...] made with “pig slop,” white and tasteful. [...] expressing resentment at having been treated with contempt because we were not chic customers who ate “à la carte.”Paragraph B is of higher quality\nConciseness and Clarity: Paragraph B is more streamlined[...] tighter and more focused.\nEmotional Depth: Paragraph B captures the speaker's evolving realization [...] making the writing more impactful.\nImagery and Language: The use of phrases [...] adds nuance and emotional weight. In contrast, Paragraph A's descriptions are literal and lack [...]
B: We sat at the group table, my father and I, our reflections dim in the restaurant's mirror. The waitress barely glanced our way. I adjusted my glasses, [...] I realize now that what I witnessed that day was not just a meal but the precise moment when social class became visible to me, manifested in a pot of yogurt, in the waitress's averted eyes, [...]
Overall, Paragraph B delivers a more poignant [...]
", + "bbox": [ + 184, + 150, + 810, + 414 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "A.1 Example LAMP Sample", + "text_level": 1, + "bbox": [ + 171, + 489, + 395, + 507 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Table 4 shows an example from LAMP data with writing prompt and response before and after edit.", + "bbox": [ + 169, + 518, + 823, + 547 + ], + "page_idx": 15 + }, + { + "type": "table", + "img_path": "images/e1efa10af565f7bba9f7532bac21e8a2f89791fb4dcb049a6d4a833edbd6aa2f.jpg", + "table_caption": [ + "Table 3: LLM preferring its own writing (Paragraph B) over Paragraph A written by Annie Ernaux, 2022 Nobel Prize in Literature." + ], + "table_footnote": [], + "table_body": "
PromptWhat happened while you were sitting in your car, waiting for a call, amidst the torrential rain and chaotic scene in La Conchita?
Edited Resp.As I sat in my car, the rain drummed against the roof, a relentless beat that mirrored the anxiety thrumming through my veins. I was waiting for a call, any call, that would give me a reason to move, to act, to escape the suffocating stillness. La Conchita's streets were a mess. Cars hydroplaning on the flooded roads, People scurrying for cover like ants from a disturbed nest. I watched as a woman, her hair plastered to her face, struggled to free her stuck umbrella, Her eyes darting darted towards me with a fleeting plea for help. I looked away, feeling the weight of my inaction. The rain-soaked world outside seemed to be shrinking, the droplets on my windshield coalescing into tiny mirrors that reflected my own uncertainty. My phone, silent and unyielding, lay on the passenger seat, a-constant reminder of my powerlessness. I thought of all the what ifs, the maybe, the possibilities that hung in the balance. The rain intensified, drumming out a rhythm that seemed to match the beat of my heart. In the chaos, I felt a strange sense of calm, as if the storm was washing away my doubts, leaving only the stark reality of the present. And yet, I remained frozen, waiting for that call, that spark, that would set me in motion.
", + "bbox": [ + 173, + 564, + 838, + 800 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Table 4: An example from LAMP. Text in bold with strikethrough represents an edited span. The normalized Initial and Final Writing Quality scores are 3.3 and 7.0, respectively.", + "bbox": [ + 169, + 809, + 826, + 839 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "A.2 Generative WQRM Prompt Formats", + "text_level": 1, + "bbox": [ + 171, + 878, + 485, + 895 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Table 5 shows the P- and R-style training prompts that are used to train WQRMs.", + "bbox": [ + 169, + 907, + 707, + 925 + ], + "page_idx": 15 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 15 + }, + { + "type": "table", + "img_path": "images/be9fc66a9c8361b04fe2c324c844e669e41cd99490850abdb1afe6b0c50ace73.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
P{"content": "You are an AI assistant who has knowledge about creative writing.", "role": "system"}
{"content": "You are given two paragraphs of writing for a given instruction.\\nYour task is to determine which paragraph is overall better in terms of writing quality.\\nParagraph 1:\\nAfter her father's passing, Marina and her family [......]\\nParagraph 2:\\n[......] had cherished so deeply.\\n\\nYou must produce your answer in the following JSON format:\\n{"preference":"1-2"}\\nwhere 'preference' should be "1" if you think Paragraph 1 is better, "2" if you think Paragraph 2 is better.\\n", "role": "user"}
{"content": {"preference": "2"}, "role": "assistant"}
R{"content": "You are an AI assistant who has knowledge about creative writing.", "role": "system"}
{"content": "You are given a paragraph of creative writing. You must score it on a scale from 1 to 10, where 1 is the lowest quality and 10 is the highest quality.\\nParagraph:\\nThe rich history, and recent revitalisation of Santa Barbara are most clear [......]\\nYou must produce your answer in the following JSON format:\\n{"score": 1}\\nwhere 'score' is an integer between 1 and 10.", "role": "user"}
{"content": {"score": 7}, "role": "assistant"}
", + "bbox": [ + 173, + 99, + 848, + 388 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A.3 Example Rationales For Generative WQRM Training", + "text_level": 1, + "bbox": [ + 171, + 440, + 609, + 455 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Table 6 shows the silver natural language rationales for the P and R modes. In P, the rationale is more contrastive in nature, while in R it is an absolute critique. Both rationales cite excerpts from the original paragraph.", + "bbox": [ + 169, + 467, + 823, + 510 + ], + "page_idx": 16 + }, + { + "type": "table", + "img_path": "images/3459785c0ac01b03e2827f0c34f42c6897261eb4c0508044302218241bab4639.jpg", + "table_caption": [ + "Table 5: Training prompts in P and R setting for instruction tuning Llama3.1 70B" + ], + "table_footnote": [], + "table_body": "
Paragraph 2 is more concise and direct in its storytelling, avoiding the overwrought metaphors and clichéd language found in Paragraph 1. For example, while Paragraph 1 uses flowery phrases like “a delicate bloom unfurling in the wake of a harsh winter” to describe the mother’s recovery, Paragraph 2 simply states “Marina’s mother slowly emerged from her all-consuming grief.” The second paragraph also maintains emotional authenticity without becoming melodramatic. The first paragraph’s phrases like “brick by brick, memory by memory” and “the resilience of the human spirit” feel forced and sentimental, whereas Paragraph 2 conveys the same emotional journey with more restraint and impact. The shorter length of Paragraph 2 also helps maintain the story’s momentum without getting bogged down in unnecessary elaboration.
The paragraph attempts to capture a poignant moment of parent-child separation but relies too heavily on telling rather than showing, with lines like “I felt a pang of guilt only a parent could know” and “I realized I was facing my own reluctance.” The emotional weight of the situation is spelled out rather than revealed through action or specific detail. While the core idea is relatable, the writing lacks distinctive imagery or memorable turns of phrase that would elevate it beyond the obvious. The final metaphor about “running up the charges to fill the space on my lighter bill” feels forced and doesn’t quite land effectively. The narrative maintains a consistent tone but remains in safe, conventional territory without taking any stylistic risks that might make it more compelling.
", + "bbox": [ + 173, + 525, + 859, + 773 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Table 6: Natural language rationales for the P and R modes, respectively", + "bbox": [ + 251, + 784, + 743, + 801 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A.4 Datasets", + "text_level": 1, + "bbox": [ + 171, + 827, + 282, + 842 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Art or Artifice In prior work, Chakrabarty et al. (2024a) evaluated writing quality in flash fiction (1,500-2,500 words). The dataset includes 12 writing prompts based on New Yorker stories, each with four responses: the original story plus three LLM-generated versions from GPT-3.5, GPT-4, and Claude v1.3. Three expert annotators ranked all four stories for each prompt, with results aggregated into majority preferences for each story pair. From the 12", + "bbox": [ + 169, + 853, + 826, + 926 + ], + "page_idx": 16 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "prompts and all possible response pairs (4C2), the dataset contains 144 preference samples (including both AB and BA orderings). $25\\%$ are Human-AI comparisons, while $75\\%$ are AI-AI comparisons.", + "bbox": [ + 169, + 103, + 823, + 147 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "LAMP-test The LAMP corpus (Chakrabarty et al., 2024b) test set focuses on short-form creative writing (200-400 words), including fiction and non-fiction. It contains 201 triplets, each with a writing instruction and three responses: (1) AI-written, (2) AI-written+AI-edited, and (3) AI-written+Human-edited. Three professional writers ranked responses based on subjective preference, with results combined into a majority vote.
For each instruction, all 3 possible response pairs were evaluated, creating 1206 total samples (by duplicating each pair in AB and BA order). Of these, $33\\%$ are AI-HumanAI comparisons, and $66\\%$ are AI-AI comparisons.", + "bbox": [ + 169, + 152, + 826, + 265 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Style Mimic In recent work, Anonymous (2025) examined whether MFA students could mimic award-winning authors' styles. Specifically, 28 MFA students were first given 20 samples written by an award-winning author (such as Haruki Murakami, Yoko Ogawa, Percival Everett, Zadie Smith, Joan Didion), along with their style verbalized in text. They were then provided with a writing instruction to recreate an original paragraph from the author (typically 200-400 words) while imitating the style of the author to the best of their ability. This data includes 150 sample pairs (student imitation vs. original author response), with the original author's work implicitly preferred. All Style Mimic samples are Human-Human comparisons. Table 7 shows an example.", + "bbox": [ + 169, + 270, + 826, + 398 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Synthetic Mirror Prior work on AI-detection (Emi & Spero, 2024) introduced \"synthetic mirrors,\" a two-step approach to generate writing pairs with implicit preferences. First, an LLM creates a mirror prompt from a human-written sample, extracting a plot summary and structured features (tone, style, length). Second, this prompt produces a synthetic mirror: an AI-generated response resembling the original's content and features. We selected 280 paragraphs from New Yorker flash fiction by award-winning authors (such as Alice Munro, Jhumpa Lahiri, Annie Ernaux, etc.). After extracting the content and structured features, we devised our mirror prompts: Write a n word paragraph in the style of author in v voice given the content below.\\n plot. 
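The mirror-prompt template quoted above can be instantiated programmatically; a minimal sketch, where the function and parameter names (n_words, author, voice, plot) are illustrative assumptions rather than the paper's actual implementation:

```python
def make_mirror_prompt(n_words, author, voice, plot):
    """Fill the mirror-prompt template quoted above; parameter names
    are illustrative, not taken from the paper's code."""
    return (f"Write a {n_words} word paragraph in the style of {author} "
            f"in {voice} voice given the content below.\n{plot}")

prompt = make_mirror_prompt(
    250, "Alice Munro", "third-person",
    "A plot summary extracted from the original paragraph.")
print(prompt)
```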
We generated mirror responses using GPT-4o and Claude-3.5 Sonnet, creating 560 Human-AI pairs with implicit preference for author-written responses. The benchmark consists of 1120 total preference pairs (each duplicated in AB and BA order).", + "bbox": [ + 169, + 402, + 828, + 556 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "LMArena LM Arena (Zheng et al., 2023) is an open platform for crowdsourced AI benchmarking. A recently released dataset of anonymized instructions with responses and preference judgments indicated that creative writing comprises $30\\%$ of instructions, making it one of the three most common interaction types. From 100,000 creative writing samples, we filtered for (1) English content, (2) non-tied preferences, and (3) responses between 100-2,000 words. An initial inspection of the resulting 7,981 samples revealed that many didn't match strict creative writing definitions. We further filtered noisy samples using GPT-4o, resulting in 1,959 pairs. Due to LM Arena being larger in scale than other datasets in the benchmark, we do not include both order variants (AB/BA) in the dataset but ensure that the order of the preferred response is balanced within the dataset.", + "bbox": [ + 169, + 561, + 828, + 702 + ], + "page_idx": 17 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "18", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/609ed3b0176d87ba81dae21c3aa67d0d1c9ebf78b2a1dffa7bcc99380bacb78b.jpg", + "image_caption": [ + "Figure 6: Three-Step Editing Pipeline to improve the writing quality of a first draft by: identifying idiosyncrasies, generating rewrites, and implementing the edits." 
+ ], + "image_footnote": [], + "bbox": [ + 277, + 102, + 709, + 191 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "A.5 Writing Quality Benchmark Difficulty Analysis", + "text_level": 1, + "bbox": [ + 171, + 239, + 571, + 257 + ], + "page_idx": 18 + }, + { + "type": "image", + "img_path": "images/2f61bc948d1b45dfae9dac29478ebd3b171164fde30bf53bb931146b3c8c35bb.jpg", + "image_caption": [ + "Figure 5: Gap Analysis of WQ datasets leveraging the WQRM-PR model." + ], + "image_footnote": [ + "Worse Writing Sample", + "Better Writing Sample" + ], + "bbox": [ + 183, + 289, + 444, + 419 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "In order to understand the relative difficulty of the datasets within the WQ benchmark, we performed an analysis leveraging our trained WQRM model. For each sample (consisting of two writing samples with a known human preference), we computed the WQRM score for each sample, and compiled the result for each of the five datasets in WQRM. Figure 5 plots the average of the preferred vs. less-preferred scores on each dataset.", + "bbox": [ + 464, + 266, + 826, + 393 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "This analysis allows to make several observations. First, the average WQRM gap is directly proportional with model performance on the benchmark. The Synthetic Mirror dataset has the largest average gap according to WQRM-PR (2.4 on average), and we find that many models achieve very close to perfect performance $(98\\% +)$ on this dataset. On the other hand, the gap (according to WQRM-PR) is very small on Style Mimic (0.12) and LMArena (0.02), which aligns with many models perform", + "bbox": [ + 464, + 398, + 828, + 537 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "ing at or very slightly above chance on these datasets. Second, the absolute scores for the low and high samples are indicative of the origin of the samples. 
Style Mimic is the only dataset to include Human-Human comparisons (both written by professionals), and the scores of both the worse and better writing samples are high (7.57 and 7.69). LMArena has a similarly small gap, but achieved with lower pair scores (5.99 and 6.02). Third, we find that the WQ benchmark includes a mix of high-gap (easy) and low-gap (hard) datasets. Low-gap pairs can consist of two lower-scoring samples (two AI-generated samples) or two higher-scoring samples (two human-written samples). This confirms the breadth of evaluation included in the WQ benchmark, which is one of its primary design objectives.", + "bbox": [ + 169, + 537, + 826, + 664 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "We note that this analysis should be taken with a grain of salt: the WQRM-PR model is not a perfect score predictor, and is only a proxy for analysis, since true scores would require large-scale professional annotation (which is cost-prohibitive). But this analysis matches some expectations, and provides additional evidence of the proper calibration of the WQRM-PR model, and of the breadth of evaluation in the WQ benchmark.", + "bbox": [ + 169, + 667, + 826, + 739 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "A.6 Example Human Mimic Samples", + "text_level": 1, + "bbox": [ + 171, + 756, + 464, + 772 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Table 7 shows an Expert-MFA contrast where both paragraphs are centered around the same semantic content and writing style.", + "bbox": [ + 169, + 782, + 823, + 811 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "A.7 Example COT Editing Prompt", + "text_level": 1, + "bbox": [ + 171, + 828, + 442, + 845 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "The prompt in Table 8 is generated automatically based on a sample from the LAMP dataset. 
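Such a three-part chain-of-thought editing target can be assembled automatically from annotated spans; the sketch below is a plausible reconstruction, where the (original, category, rewrite) tuple format and the exact serialization are assumptions rather than the paper's format:

```python
def build_cot_editing_target(paragraph, spans):
    """Assemble a three-part editing example in the spirit of Table 8.
    `spans` is a list of (original, category, rewrite) tuples; this
    serialization is an illustrative guess, not the paper's exact format."""
    parts = ["Part 1: Identifying Problematic Spans"]
    for i, (orig, cat, _) in enumerate(spans, 1):
        parts.append(f"Span {i}: '{orig}' (Category: '{cat}')")
    parts.append("Part 2: Proposing Rewriting for Problematic Spans")
    for i, (orig, _, new) in enumerate(spans, 1):
        parts.append(f"Span {i}: '{orig}' -> '{new}'")
    parts.append("Part 3: Implementing Proposed Edits")
    edited = paragraph
    for orig, _, new in spans:
        edited = edited.replace(orig, new)  # execute each edit in place
    parts.append(edited)
    return "\n\n".join(parts)

example = build_cot_editing_target(
    "The room was dimly lit and quiet.",
    [("dimly lit and quiet", "Awkward Word Choice and Phrasing",
      "dim, and almost silent")])
print(example)
```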
An LLM is then finetuned on this prompt, effectively training it to function as a three-step editing pipeline that identifies problematic spans, rewrites the spans, and executes the edits into a final edited response.", + "bbox": [ + 169, + 854, + 826, + 912 + ], + "page_idx": 18 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "19", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "I watched my mother. It was March, and outside, the sun glinted off the sidewalks and the icy edges of the snow. It was Saint Patrick's Day and the nurses brought my mother a square block of green Jell-O that sat quivering on the table beside her. It was the last full day of her life, and my mother did not sleep, she did not wake. She held her eyes still and open. They were the bluest thing in the room, perhaps in all of Duluth. Bluer than the lake. They were the color of the sky on the best day of your life. My mother died fast but not all of a sudden. A slow-burning fire when flames disappear to smoke and then smoke to air. She never once closed her eyes. First they were bitter and then they were bewildered and then they changed again to something else, to a state that I have had, finally, to see as heroic. Blue, blue eyes. Daggers of blue wanting and wanting. To stay, to stay.", + "bbox": [ + 173, + 101, + 857, + 233 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "My mother died on St. Patrick's Day. There was snow outside, and sunlight glanced off the ice and back into her hospital room. A cup of green Jell-O lingered untouched on its tray. My mother was unresponsive, but not quite asleep. Her eyes were open and vivid. There was an alertness in them still, but what she was looking at, I could not say. 
The bright blue of her irises was like a frozen lake, contrasting the dull wash of beige and gray shrouding the room. The sky lived inside my mother. It lived inside her then, too, even as we bided time together at the very end, knowing there was only one thing left for her to do. Her fading life was a fire flickering down to smoke—it would only last as long as there was something to burn. There was bitterness too in her eyes, then bewilderment, then a stoic, quiet heroism. A commitment to her own dignity. A promise to endure this final test, no matter how unfairly it had been imposed on her. Her eyes were so blue, my mother's eyes, a fierce blue, a frozen lake, a sheen of ice that refused to melt, even as the sun broke it apart.", + "bbox": [ + 173, + 234, + 854, + 397 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Table 7: Imitation of Original Paragraph (Top Row) from Cheryl Strayed written by an MFA student", + "bbox": [ + 169, + 406, + 826, + 436 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "A.8 Expert Annotation Result Breakdown", + "text_level": 1, + "bbox": [ + 171, + 462, + 496, + 478 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "In Figure 7, we present the results of the annotations from experts for each model (GPT-4o, Llama 3.1 70b) and writing domain (fiction, nonfiction, marketing).", + "bbox": [ + 169, + 488, + 826, + 518 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "At a high level, the responses selected by the WQRM model (Best Edit) achieve the best average rank in all six conditions. However, the selection aligns more with expert preference (in other words, the preference is more pronounced) for the fiction domain (rather than nonfiction) and for GPT-4o responses (rather than Llama 3.1 70b). We posit that this is due to the distribution of training data for the WQRM model, which included a majority of fiction samples and did not include Llama-generated responses. 
However, the fact that preference is still observed on the other domains (including marketing, which differs widely from fiction writing) is encouraging. Improving the generalization of the WQRM further can be accomplished by collecting annotations in additional writing domains, which can be used to train an improved WQRM model.", + "bbox": [ + 169, + 523, + 826, + 662 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "A.9 Comparison", + "text_level": 1, + "bbox": [ + 171, + 680, + 308, + 696 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Table 9 shows 3 different versions of the same paragraph: a First Draft along with edited versions (Random Edit and Best Edit), with their respective rewards from WQRM. Experts rank this triplet as Best Edit > Random Edit > First Draft.", + "bbox": [ + 169, + 705, + 825, + 750 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "A.10 Expert Annotation Interface", + "text_level": 1, + "bbox": [ + 171, + 766, + 433, + 781 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Figure 8 shows the annotation interface that is provided to experts. They read 3 responses and rank them based on overall quality.", + "bbox": [ + 169, + 792, + 823, + 821 + ], + "page_idx": 19 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 19 + }, + { + "type": "page_number", + "text": "20", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "You are given a paragraph of creative writing. Your task is to improve the quality of the writing. 
You must identify specific spans that can be improved, then propose rewriting for each identified span, and finally return the entire paragraph with the proposed changes implemented.", + "bbox": [ + 186, + 166, + 810, + 224 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Here is the paragraph you are editing:", + "bbox": [ + 187, + 250, + 468, + 267 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "The room was dimly lit, with the soft hum of machinery filling the silence. I sat beside Lila, squeezing her hand, as the technician swirled the wand over her belly. The screen flickered to life, a grainy black and white, like an ancient TV trying to find it's signal. Slowly, an image began to form; the unmistakable curve of a tiny head, the flutter of something I later learned was a heartbeat. I leaned forward, almost holding my breath, mesmerized by the play of light and shadows that composed this new life. Lila's grip tightened, and I glanced at her, finding her eyes glossy, a smile tugging at the corners of her mouth. I turned back to the screen, emotions swirling—awe, fear, hope. In that moment, the abstract idea of our baby transformed into something real, tangible. The room's dimness seemed to fold around us, making space for this silent, wordless connection. 
There, in the ghostly shapes on the monitor, I saw not just a child, but a future—a future that felt both incredibly close and impossibly distant.", + "bbox": [ + 186, + 270, + 813, + 444 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Part 1: Identifying Problematic Spans", + "bbox": [ + 187, + 468, + 464, + 486 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Span 1: 'The room was dimly lit, with the soft hum of machinery filling the silence' (Category: 'Awkward Word Choice and Phrasing')", + "bbox": [ + 186, + 508, + 810, + 541 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Span 2: 'I sat beside Lila, squeezing her hand, as the technician swirled the wand over her belly' (Category: 'Lack of Specificity and Detail')", + "bbox": [ + 187, + 542, + 808, + 575 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Part 2: Proposing Rewriting for Problematic Spans", + "bbox": [ + 187, + 599, + 558, + 616 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Span 1: 'The room was dimly lit, with the soft hum of machinery filling the silence' $\\rightarrow$ 'The soft hum of machinery filled the room with an unfamiliar yet gentle melody.'", + "bbox": [ + 186, + 641, + 807, + 672 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Span 2: 'I sat beside Lila, squeezing her hand, as the technician swirled the wand over her belly' $\\rightarrow$ 'I sat beside Lila, squeezing her hand as she lay on the table while the technician swirled the wand over her rounded belly.'", + "bbox": [ + 186, + 675, + 808, + 719 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Part 3: Implementing Proposed Edits", + "bbox": [ + 187, + 744, + 460, + 761 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "((Written Response with Edits Executed))", + "bbox": [ + 187, + 786, + 467, + 803 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Table 8: Example COT editing instruction based on a sample from the LAMP 
dataset. The model is instructed to: (1) identify problematic spans, (2) rewrite the spans to improve writing, (3) execute all edits into the edited response.", + "bbox": [ + 169, + 815, + 826, + 859 + ], + "page_idx": 20 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 20 + }, + { + "type": "page_number", + "text": "21", + "bbox": [ + 488, + 946, + 506, + 959 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/554d5dc700143458d1e47ba4855fde3721268e76f6053a7a818726a354ed637f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 187, + 102, + 390, + 205 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/431850aee3dd501e63f57f3e2ba338586b06f6ef5fc21563d10e350381277019.jpg", + "image_caption": [ + "Figure 8: Annotation interface" + ], + "image_footnote": [], + "bbox": [ + 398, + 103, + 602, + 207 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/67e9bb817d878c15e6dda80357e8baa4e7b2fac21cb321a90be633ef7affed57.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 607, + 103, + 812, + 205 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/83564d9a51dbd08e07b8686abce714c359b3933b4ec07dbf8b7cc8e803428fe3.jpg", + "image_caption": [ + "Figure 7: Breakdown of results of the expert annotation we conducted for each of the three domains (fiction, nonfiction, marketing) and the two models (GPT-4o, LLama 3.1 70b). Overall, WQRM selection was most aligned with expert preference in the Fiction domain, and for GPT-4o generations." 
+ ], + "image_footnote": [], + "bbox": [ + 186, + 224, + 390, + 329 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/49e6ff59e1e7152046f36c227c36b4c8b74aeb4483e8bba762d084bc5492a284.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 398, + 224, + 601, + 329 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/0b9d340951461e10644d5f6940b5ec13fbd9689a7a99f4f2349125dd2c20d233.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 609, + 224, + 812, + 329 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Writing Sample Judgment", + "text_level": 1, + "bbox": [ + 186, + 439, + 377, + 455 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Writing Instruction:", + "text_level": 1, + "bbox": [ + 186, + 462, + 295, + 474 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "A daughter watches their mother during her final day of life that coincides with St Patricks Day in a Duluth hospital, observing her persistently open blue eyes as she transitions from life to death with a fierce desire to remain alive.", + "bbox": [ + 186, + 477, + 725, + 486 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Please rank these writing samples from most to least preferred:", + "text_level": 1, + "bbox": [ + 192, + 513, + 460, + 525 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Sample 1", + "text_level": 1, + "bbox": [ + 191, + 527, + 218, + 534 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "In the hushed corridors of the Duluth hospital, the bustle of St. Patrick's Day celebrations outside felt worlds away. I sat by my mother's bedside, the synthetic hum of medical machines filling the space between us. Her blue eyes, usually so full of life and vibrancy, were steady and intense, flicking to me with a sharpness that defied her frail body. She wanted more—more of what? More moments? More breaths? 
Her desire was palpable, and knowing what to do with it felt like holding fragile glass shards. How to speak? What to say? Inside me, everything felt cramped, too many things living at once without space. From the window behind us, I saw the darkness become green with celebration in reflection, and yet I continued to look at her. Her eyes. Her eyes were everything to me now. In those final hours, her gaze cemented in me her love, resilience, and bravery. The world outside no longer mattered.", + "bbox": [ + 192, + 541, + 390, + 627 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Rank Sample 1", + "text_level": 1, + "bbox": [ + 192, + 648, + 227, + 654 + ], + "page_idx": 21 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "Most Preferred Writing Option", + "Second Favorite Writing Option", + "Least Preferred Writing Option" + ], + "bbox": [ + 192, + 656, + 276, + 676 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Submit Rankings", + "bbox": [ + 197, + 684, + 241, + 690 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Sample 2", + "text_level": 1, + "bbox": [ + 398, + 527, + 423, + 534 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "In the hushed corridors of the Duluth hospital, the bustle of St. Patrick's Day celebrations outside felt worlds away. I sat by my mother's bedside, the synthetic hum of medical machines filling the space between us. Her blue eyes, usually so full of life and vibrancy, were steady and intense, flicking to me with a sharpness that defied her frail body. It was as if she was silently insisting on one more moment, one more breath. Her desire to stay with me was palpable, wrapping us both in a fragile embrace. I wanted to speak, to reassure her, but the words felt caught in the back of my throat, tangled with emotions I wasn't ready to unpack. The world outside turned shades of green in celebration, yet inside, my focus was drawn entirely to the fierce resolve in her gaze. 
In those final hours, her eyes told stories of love, resilience, and an unwavering fight to anchor herself in this world just a little longer.", + "bbox": [ + 398, + 541, + 596, + 627 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Rank Sample 2", + "text_level": 1, + "bbox": [ + 398, + 648, + 431, + 654 + ], + "page_idx": 21 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "Most Preferred Writing Option", + "Second Favorite Writing Option", + "Least Preferred Writing Option" + ], + "bbox": [ + 398, + 656, + 482, + 676 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Sample 3", + "text_level": 1, + "bbox": [ + 604, + 527, + 629, + 534 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "In the corridors of the Duluth hospital, it was St. Patrick's Day, but all the bustle and noise outside felt worlds away. I sat by my mother's bedside. The hum of the machines filled the silence between us. Her blue eyes flicked to me with an intensity that defied her frail body. She was silently insisting on one more moment, one more breath. Her desire to stay with me was almost tangible. I wanted to speak, to reassure her, but the words felt caught in the back of my throat, tangled. The world outside turned in festive shades of green in celebration, yet inside, my focus was drawn entirely to the fierce resolve in her gaze. 
Those final hours, the love we shared, her resilience, and her fight to stay tethered to our world remain imprinted on my mind to this day.", + "bbox": [ + 604, + 542, + 800, + 613 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Rank Sample 3", + "text_level": 1, + "bbox": [ + 604, + 650, + 637, + 655 + ], + "page_idx": 21 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "Most Preferred Writing Option", + "Second Favorite Writing Option", + "Least Preferred Writing Option" + ], + "bbox": [ + 604, + 656, + 686, + 676 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "A.11 Better Calibrated WQRM model for Content and Quality Experiment", + "text_level": 1, + "bbox": [ + 171, + 742, + 740, + 758 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Since WQRM was only trained on samples from LAMP, which consists of AI-generated paragraphs edited by MFA students, it doesn't fully know how to reward higher-quality human writing. For this purpose, we added 100 paragraphs written by 5 award-winning authors (20 each) to our training data. We chose 5 authors who were part of the Style Mimic data. Each paragraph written by an award-winning author was assigned a score of 10.0. Even within writing from trained professionals, there is significant variability. To address this, we source an additional 80 independent paragraphs written by MFA students published in prestigious literary magazines such as Electric Lit, Joyland, and Paris Review, and add them to our training data. 
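The augmentation described here amounts to appending fixed-score anchor examples to the LAMP-PR training set; a minimal sketch, where the (text, score) tuple format and function name are illustrative assumptions:

```python
def augment_lamp_pr(train_set, author_paragraphs, mfa_paragraphs):
    """Append anchor examples: 10.0 for paragraphs by award-winning
    authors, 7.5 for MFA paragraphs published in literary magazines.
    The (text, score) tuple format is an illustrative assumption."""
    augmented = list(train_set)
    augmented += [(p, 10.0) for p in author_paragraphs]  # 100 paragraphs
    augmented += [(p, 7.5) for p in mfa_paragraphs]      # 80 paragraphs
    return augmented

data = augment_lamp_pr([], ["author paragraph"] * 100, ["mfa paragraph"] * 80)
print(len(data))  # 180 added anchor examples
```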
Each paragraph written by an MFA student was assigned a score of $7.5^{7}$ .", + "bbox": [ + 169, + 768, + 826, + 898 + ], + "page_idx": 21 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 21 + }, + { + "type": "page_footnote", + "text": "This was a design decision where 5 is average and 10 is the best, and 7.5 is a mid-point.", + "bbox": [ + 189, + 909, + 764, + 925 + ], + "page_idx": 21 + }, + { + "type": "page_number", + "text": "22", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "At 22, I handed my first completed novel to my father, not fully aware of the meaning it held for both of us. He took it, eyes softening as he turned the pages, lingering just enough to let me know he saw more than words. Then came the moment that stayed with me: an embrace that spoke volumes, a quiet prediction slipping past his lips, that one day I'd earn a place among the literary greats. Somehow, those words anchored themselves in my mind, reassuring in their sureness. Through the highs and lows of my writing career, his faith in me never wavered, echoing in my thoughts whenever doubt crept in. His death in 2002 marked an inevitable yet profound shift-suddenly, the world felt heavier. Sorting through his things, I found the suitcase he left me, an ordinary object now imbued with significance. Inside were scattered remnants of our shared past, but it was that embrace, and his words, that lingered strongest. He had seen something in me that I hadn't yet grasped myself, and for that, those moments serve as an enduring source of comfort and motivation. Years later, when I finally stood on stage accepting the literary prize he had foreseen, I wished he could have been there to witness it, though in a way he already had been all along. 
Dreams can be crowded with voices that spur us on or pull us back, but his was the one that guided me, quiet and steady. As I continue to write, the memory of my father's belief remains a compass, gently reminding me of where I've been and where I might yet go \"score\": 3.30", + "bbox": [ + 173, + 102, + 895, + 339 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "At 22, I handed my first completed novel to my father, and in that moment I became aware of the significance it held for us both. He took it and began to read, lingering just enough to let me know he saw more than words. Afterward, he embraced me and said one day I'd earn a place among the literary greats. Before then, he had said little about my writing, and these words anchored themselves in my mind, reassuring in their sureness. He had never said anything like it before, but he continued to echo that faith through the highs and lows of my career. His death in 2002 marked an inevitable yet profound shift. Suddenly the world felt heavier. Sorting through his things, I found the suitcase he left me, an ordinary object now imbued with significance. Inside were scattered remnants of our shared past, but it was that embrace and his words that lingered strongest. He had seen something in me that I hadn't yet grasped myself, and those moments served as an enduring source of comfort and motivation. Years later, when I finally stood on stage accepting the literary prize—the only prize—he had foreseen, I wished he could have been there to witness it. Dreams can be hostile to our hopes, but his was the one that guided me; his quietness was steady. Now, the memory of my father's belief remains a compass; I wish I could send him an update. \"score\": 4.43", + "bbox": [ + 173, + 340, + 870, + 561 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "At 22, I handed my first completed novel to my father, not fully prepared for what it might mean. 
He took it, eyes softening as he turned the pages, lingering long enough, I felt, to take in the feeling of things. Finally, we embraced, and he leaned back to say what I hadn't dared to hope—that one day I'd be among the literary greats. No matter how tough things got or how much death loomed over me, I was comforted by those words, almost sure of their truth. His death in 2002 brought with it an unwelcome heaviness. I found significance even in his old suitcase, which I kept, shuffling through it fondly. There were plenty of other mementos, too, but I'd always have the memory of that embrace, the words. Years later, when I finally stood on stage accepting the literary prize he'd foreseen, I wished he could have been there to witness it. Whatever noise came, whatever doubt, his voice led me quietly out of it. I swear I can still hear him now. \"score\": 6.84", + "bbox": [ + 173, + 565, + 854, + 722 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Table 9: (a) First Draft (b) Random Edit (c) Best Edit along with their rewards assigned by WQRM.", + "bbox": [ + 173, + 734, + 823, + 763 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Publication at a venue already means these paragraphs have undergone scrutiny and are of decent quality. After adding these 180 samples to LAMP-PR training set, we retrained WQRM.", + "bbox": [ + 173, + 789, + 823, + 833 + ], + "page_idx": 22 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 22 + }, + { + "type": "page_number", + "text": "23", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "This paragraph is written in the first person and revolves around a family Christmas gathering. The narrator reflects on how her father gave her a generous cash gift and invited her to Disney World with his new family. 
The narrator declined, fabricating an excuse about school, despite feeling the emotional distance growing between her, her father, and his new partner, Chitra. The narrators half-sisters, Rupa and Piu, were upset by this decision, not understanding why she doesn't want to join them. The narrator felt a sense of responsibility to uphold the memory of her late mother, just as Rupa and Piu symbolized their own father's legacy, while also sensing that both Chitra and her father are relieved by her decision to stay behind. The paragraph captures the emotional complexities of blended family dynamics, grief, and feelings of displacement during what should be a celebratory time.", + "bbox": [ + 181, + 405, + 849, + 592 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Table 10: Detailed Content", + "bbox": [ + 401, + 603, + 596, + 618 + ], + "page_idx": 23 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 23 + }, + { + "type": "page_number", + "text": "24", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 23 + } +] \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07532/a56e2d1f-04ce-46c0-9b86-0a610ecd5033_model.json b/data/2025/2504_07xxx/2504.07532/a56e2d1f-04ce-46c0-9b86-0a610ecd5033_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c0135f9eee5453f88e858605aaf1898982ebbbf3 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/a56e2d1f-04ce-46c0-9b86-0a610ecd5033_model.json @@ -0,0 +1,3559 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.032, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.099, + 0.825, + 0.143 + ], + "angle": 0, + "content": "AI-Slop to AI-Polish? 
Aligning Language Models through Edit-Based Writing Rewards and Test-time Computation" + }, + { + "type": "text", + "bbox": [ + 0.181, + 0.167, + 0.621, + 0.184 + ], + "angle": 0, + "content": "Tuhin Chakrabarty\\(^{1*}\\), Philippe Laban\\(^{2*}\\), Chien-Sheng Wu\\(^{1}\\)" + }, + { + "type": "text", + "bbox": [ + 0.184, + 0.184, + 0.518, + 0.198 + ], + "angle": 0, + "content": "\\(^{1}\\)Salesforce AI Research \\(^{2}\\)Microsoft Research" + }, + { + "type": "text", + "bbox": [ + 0.184, + 0.199, + 0.668, + 0.213 + ], + "angle": 0, + "content": "{tuhin.chakr,wu.jason}@salesforce.com,plaban@microsoft.com" + }, + { + "type": "title", + "bbox": [ + 0.459, + 0.248, + 0.542, + 0.265 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.23, + 0.28, + 0.77, + 0.603 + ], + "angle": 0, + "content": "AI-generated text is proliferating across domains, from creative writing and journalism to marketing content and scientific articles. Models can follow user-provided instructions to generate coherent and grammatically correct outputs but in this work, we study a more fundamental question: how do we evaluate and improve the writing quality of AI-generated text? Writing quality assessment has received less attention from the community, in part because it is fundamentally subjective and requires expertise. We first introduce the Writing Quality Benchmark (WQ) by consolidating five writing-preference datasets into 4,729 writing quality judgments. Our experiments show that most of the competitive baselines, including state-of-the-art LLMs that excel at reasoning tasks, barely outperform random baselines on WQ. We then train specialized Writing Quality Reward Models (WQRM) of various sizes for writing quality assessment that demonstrate strong generalization on four out-of-distribution test sets and \\(74\\%\\) accuracy on the WQ benchmark. 
To further show WQRM's practical benefits during inference, we leverage additional test-time compute to generate and rank multiple candidate revisions, allowing us to select higher-quality outputs from an initial draft. Human evaluation with 9 experienced writers confirms that WQRM-based selection produces writing samples preferred by experts \(66\%\) overall, and \(72.2\%\) when the reward gap is larger than 1 point. We release our datasets and models to encourage community engagement with writing quality assessment and development of AI writing systems better aligned with human preferences." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.626, + 0.321, + 0.643 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.657, + 0.828, + 0.853 + ], + "angle": 0, + "content": "Writing is one of the most important pillars of education, enabling learners to critically engage with the topics they study. In *The Rise of Writing*, Brandt (2014) argues that the \"information economy's insatiable demand for symbol manipulation—'knowledge work'—has forced many workers to reorient their labor around the production of prose\" (Laquintano & Vee, 2024). Generative AI tools have further blurred these boundaries, especially around how labor and writing practices are evolving across both academic (Kobak et al., 2024; Lee et al., 2025) and professional contexts (Liang et al., 2025). Often awkward and jarring to read, low-effort text generated by AI is now flooding web browsers and social-media platforms much like spam in old inboxes (Herrman, 2024a; Knibbs, 2024c;d;b;a). This flood of low-effort text is often referred to by a neologistic term of revulsion: \"A.I. slop\" (Herrman, 2024b). Extensive social experimentation with ChatGPT has invited criticism on social media and in popular news outlets that its writing has a disembodied \"robovoice\".
This has led to humanization methods (Wang et al., 2024) and even start-ups such as StealthGPT or HumanizeAI, which explicitly attempt to make AI-generated text more humanlike." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.859, + 0.828, + 0.903 + ], + "angle": 0, + "content": "Despite LLMs showing impressive performance in math and coding, their ability to write high-quality text has been rather pedestrian. Recent work from Chakrabarty et al. (2024b) shows how text generated from widely used LLMs are often rife with clichés, purple prose," + }, + { + "type": "aside_text", + "bbox": [ + 0.023, + 0.276, + 0.061, + 0.725 + ], + "angle": 270, + "content": "arXiv:2504.07532v3 [cs.CL] 12 Aug 2025" + }, + { + "type": "page_footnote", + "bbox": [ + 0.192, + 0.91, + 0.335, + 0.925 + ], + "angle": 0, + "content": "**Equal contribution." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.504, + 0.96 + ], + "angle": 0, + "content": "1" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "image", + "bbox": [ + 0.212, + 0.105, + 0.789, + 0.321 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.327, + 0.825, + 0.371 + ], + "angle": 0, + "content": "Figure 1: Our three key contributions: (1) A new writing quality benchmark for creative writing evaluation, (2) Writing Quality Reward Models (WQRM) that perform strongly on this benchmark, and (3) Expert validation confirming WQRM aligns with professionals." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.385, + 0.827, + 0.566 + ], + "angle": 0, + "content": "poor sentence structure, and unnecessary exposition. This stems from several challenges. Unlike math or coding, writing lacks verifiable rewards. 
While it would be possible to train a model to write better text by having humans label examples of \"good\" and \"bad\" writing, doing so is challenging due to the expertise required. Self-evaluation using LLMs has proven useful in reward modeling and constitutional AI (Bai et al., 2022), but relying on uncalibrated humans or LLMs for feedback (Lee et al., 2023; Gao et al., 2024) on subjective tasks like writing can lead to reward hacking (Pan et al., 2024) and alignment issues. Recent work from Panickssery et al. (2024) shows the self-aggrandizing nature of LLMs, as evidenced in Table 3 where they prefer their own writing over Nobel Prize winners' work. For the purpose of this paper we define good writing quality as writing that does not contain a disproportionate amount of peculiar words or phrases, has few clichés or hackneyed expressions, is not unnecessarily ornamental, and does not have an overly saccharine and polished tone or voice." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.572, + 0.827, + 0.798 + ], + "angle": 0, + "content": "The surge in AI writing assistance demands urgent alignment of AI-generated text with human preferences. Recent work from Gooding et al. (2025) shows how LLMs struggle to select high-quality writing actions as judged by human experts, often treating suboptimal and optimal interventions as equally acceptable. They highlight the need for models to better assess the quality and impact of suggested actions, both during generation and across multi-step refinement. Binary preference feedback between paired examples is the most common alignment method for LLMs (Christiano et al., 2017), but it has a significant drawback: the paired outputs may differ in several ways and could be equally poor in quality (Casper et al., 2023; Lambert & Calandra, 2023).1 Recent work from Chakrabarty et al. (2024b) shows how identifying and editing problematic response segments effectively improves AI alignment.
This also reflects the Reviewing phase in the cognitive process model of writing (Hayes et al., 1987), where humans evaluate and revise text. They release LAMP (Language model Authored, Manually Polished), a corpus of 1,282 \(\langle\)AI-generated, Expert-Edited\(\rangle\) pairs with implicit preference (edited > original draft) to improve AI writing (see Table 4 in Appendix A.1). Additionally, each paragraph pair includes normalized scores (1-10) reflecting writing quality before and after editing." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.803, + 0.828, + 0.889 + ], + "angle": 0, + "content": "Our work builds on LAMP data to train Writing Quality Reward Models (WQRM) across multiple model families using pairwise and scalar rewards. To evaluate WQRM, we introduce the Writing Quality Benchmark (WQ), consolidating five datasets that contrast Human-Human, Human-AI, and AI-AI writing pairs reflecting real-world applications. In addition to standard reward models we also implement a teacher-student knowledge distillation approach, fine-tuning open-weight models (students) on LAMP with silver rationales generated from" + }, + { + "type": "page_footnote", + "bbox": [ + 0.171, + 0.897, + 0.825, + 0.926 + ], + "angle": 0, + "content": "1Forcing annotators to choose between two undesirable outputs doesn't improve alignment. In the current design of RLHF, annotators are not allowed to pick 'neither'." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.948, + 0.505, + 0.96 + ], + "angle": 0, + "content": "2" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.104, + 0.827, + 0.175 + ], + "angle": 0, + "content": "stronger LLMs (teachers) (Section 3). This framework enhances faithfulness and robustness by transferring reasoning abilities from powerful teachers to efficient students.
Empirical results show our LAMP-trained reward models outperform proprietary LLMs like GPT-4o, o1 (OpenAI, 2024), open-weight models like DeepSeek-R1 (Guo et al., 2025), and competitive Reward-Bench models like Skywork-Reward (Liu et al., 2024)." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.181, + 0.828, + 0.363 + ], + "angle": 0, + "content": "Next, we use expert edit interaction traces from LAMP data (Figure 6) to train a Chain-of-Thought editing model that identifies problematic spans, suggests edits, and combines them into a paragraph with improved writing (Section 5). Following recent work that leverages additional inference-time computation to improve LLM performance (Hosseini et al., 2024; Lightman et al., 2023; Wu et al., 2024; Ji et al., 2025; Snell et al., 2024), we employ best-of-N-sampling (Chow et al., 2024; Cobbe et al., 2021; Lightman et al., 2023) to select the best candidate from multiple edited paragraphs based on our reward model. Expert evaluation on LLM-generated responses based on writing instructions across fiction, nonfiction, and marketing confirms the correlation between expert judgment and our reward models. Experts and our best WQRM align in terms of preferences \\(66\\%\\) overall, and \\(72.2\\%\\) when the reward gap is larger than 1 point. Our results represent progress toward aligning LLMs with expert humans on subjective writing tasks, one of the most common use cases of AI (Handa et al.). 
As summarized in Figure 1:" + }, + { + "type": "text", + "bbox": [ + 0.173, + 0.366, + 0.825, + 0.409 + ], + "angle": 0, + "content": "- We introduce the Writing Quality Benchmark (WQ) by consolidating five writing preference datasets and show how state-of-the-art LLMs and reward models perform close to random chance on writing quality assessment," + }, + { + "type": "text", + "bbox": [ + 0.173, + 0.41, + 0.825, + 0.453 + ], + "angle": 0, + "content": "- We leverage implicit preference from edits to train competitive open weight reward models (WQRM) of different sizes for judging writing quality. Our reward models achieve top performance on the WQ benchmark," + }, + { + "type": "text", + "bbox": [ + 0.173, + 0.454, + 0.827, + 0.541 + ], + "angle": 0, + "content": "- We use interaction traces from fine-grained expert edits to train an editing pipeline that improves writing quality. We further leverage additional test-time compute to generate and rank multiple edited paragraphs, allowing us to select higher-quality outputs from an initial draft based on our reward model. Evaluation with professionals confirms that the reward aligns with expert judgments and opens up possible avenues for improving alignment in AI-assisted writing.[2]" + }, + { + "type": "list", + "bbox": [ + 0.173, + 0.366, + 0.827, + 0.541 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.56, + 0.33, + 0.575 + ], + "angle": 0, + "content": "2 Related Work" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.591, + 0.827, + 0.842 + ], + "angle": 0, + "content": "Widespread adoption and Limitations of AI assistance in writing Large language models have rapidly transformed written communications across multiple sectors, with approximately \\(10 - 24\\%\\) of text in consumer complaints, corporate communications, job postings, and UN press releases being LLM-assisted by late 2024 (Liang et al., 2025). 
These adoption rates have stabilized after an initial surge following ChatGPT's release. Outside of technical writing LLMs are also being used for scientific (Liang et al., 2024; Gero et al., 2022) as well as creative writing (Chakrabarty et al., 2024c; Ippolito et al., 2022; Yuan et al., 2022; Mirowski et al., 2023; 2024). Aligning language models with human preferences (Ouyang et al., 2022) has enabled their integration into writing tools such as Google's WorkSpace Labs, Grammarly, and Sudowrite. Despite productivity gains in using AI for writing, several limitations remain with AI-generated text. Prior work (Chakrabarty et al., 2024a;c; Ippolito et al., 2022; Mirowski et al., 2023; Marco et al., 2024) has shown how AI-generated text is often rife with clichés, lacks nuance, subtext, and rhetorical complexity. Through use of syntactic templates Shaib et al. (2024) show the repetitiveness of AI-generated text in comparison to human-written references. More recently Russell et al. (2025) show that AI-generated text is most easily detectable by its characteristic vocabulary, followed by formulaic writing structures and lack of originality. Neither paraphrasing nor humanization effectively removes all of these signatures." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.844, + 0.829, + 0.903 + ], + "angle": 0, + "content": "Human-AI Alignment in Writing Recent work from Lee et al. (2024) highlights how LLMs have transformed the processes behind writing, establishing new criteria for future AI writing assistants. Anderson et al. (2024) and Laban et al. (2023) discovered that Large Language Models assisted users in generating more detailed ideas. 
However, these studies also" + }, + { + "type": "page_footnote", + "bbox": [ + 0.19, + 0.91, + 0.82, + 0.924 + ], + "angle": 0, + "content": "2Our code, data and models are available at https://github.com/salesforce/creativity_eval/" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.505, + 0.96 + ], + "angle": 0, + "content": "3" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.104, + 0.828, + 0.313 + ], + "angle": 0, + "content": "found that the outputs were less semantically distinct across different users (Padmakumar & He, 2023), and participants reported feeling diminished responsibility for the ideas they produced. In a similar vein Li et al. (2024) explores people's attitudes toward AI writing assistants, finding that while many value and prefer AI assistance for creative tasks and productivity gains, this comes with potential drawbacks in reduced accountability and diversity in writing outcomes. Liu et al. (2025) introduce eRevise+RF, an automated writing evaluation system designed to assess student essay revisions and offer formative feedback. The system was deployed with 406 students across three schools, demonstrating effectiveness in evaluating evidence usage, identifying revisions, and determining revision success. Prior work from Pan et al. (2024) shows language models can enhance outputs through feedback. However, iterative self-refinement using another language model as evaluator may lead to reward hacking, where models exploit evaluator weaknesses. Chakrabarty et al. (2024b) shows how LLMs across different model families share common writing idiosyncrasies and how automatically editing these idiosyncrasies improves alignment, based on a behavioral study with 12 writers." 
+ }, + { + "type": "text", + "bbox": [ + 0.171, + 0.319, + 0.825, + 0.363 + ], + "angle": 0, + "content": "Unlike prior work that has focused either on detecting/addressing issues in AI writing our work introduces Writing Quality Reward Models (WQRMs) trained on expert edits that outperform state-of-the-art LLMs on a Writing Quality benchmark." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.381, + 0.495, + 0.399 + ], + "angle": 0, + "content": "3 Writing Quality Reward Models" + }, + { + "type": "image", + "bbox": [ + 0.174, + 0.401, + 0.498, + 0.555 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.567, + 0.502, + 0.611 + ], + "angle": 0, + "content": "Figure 2: Transforming LAMP annotations into classification and regression data points used during fine-tuning of WQRM models." + }, + { + "type": "text", + "bbox": [ + 0.51, + 0.382, + 0.828, + 0.633 + ], + "angle": 0, + "content": "We rely on the LAMP (Language model Authored, Manually Polished) corpus from Chakrabarty et al. (2024b) to train reward models. As illustrated in Figure 2, each sample in LAMP consists of a writing instruction and two paragraphs that match this instruction. The paragraphs in LAMP range from 150 to 400 words, and span across fiction and non-fiction. Table 4 in Appendix A.1 shows a sample from LAMP, highlighting the edits implemented by an expert to improve writing quality. We use three methods to transform LAMP samples into training and validation data points for our models: pairwise (P), scalar (R), and combined (PR). With the P method, each data point presents two paragraphs as input (1 and 2) and requires a binary classification output" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.632, + 0.827, + 0.814 + ], + "angle": 0, + "content": "indicating which paragraph has higher writing quality (i.e., the output is 1 or 2). 
Each LAMP sample is duplicated into two P data points by considering both paragraph orders (AI-generated, Expert-Edited \\(\\rightarrow\\) 2) and (Expert-Edited, AI-generated \\(\\rightarrow\\) 1). With the R method, each data point takes a single paragraph as input and outputs a regression value predicting the quality score of that paragraph. Since each LAMP sample contains two paragraphs (before and after edit), it generates two R data points. The PR method combines both approaches, yielding four data points per LAMP sample (two from P and two from R). There are a total of 1,282 samples in LAMP, and we follow the author's split divisions of 1,000 training, 67 validation, and 215 test samples. Applying the data transformation described above, the P, R, and PR variants of the training data we obtain consist of 2,000, 2,000, and 4,000 training data points, respectively. For our experiments, we trained both generative LLMs (Llama3.1 (Dubey et al., 2024)) and encoder-only models (ModernBert (Warner et al., 2024))." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.827, + 0.828, + 0.926 + ], + "angle": 0, + "content": "Encoder-Only WQRM We follow the standard approach introduced in the original BERT paper (Devlin et al., 2019) to add and finetune two task-specific heads to a ModernBERT-Large model (Warner et al., 2024). The input data points contain either one paragraph (for R data points) or two paragraphs (for P data points), which are encoded jointly with a pre-defined separator token when needed. For each paragraph, we compute a \"paragraph vector\" by pooling the last layer's activations across all tokens in that paragraph. These paragraph vectors serve as input to either a regression (R) or classification (P) head. 
The" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.506, + 0.96 + ], + "angle": 0, + "content": "4" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.104, + 0.827, + 0.272 + ], + "angle": 0, + "content": "regression head transforms the vector through a learned linear projection from the model's inner dimension to a scalar, followed by a scaled sigmoid to align with the 1-10 score range. The classification head is aparametric, using a cosine similarity operation between the two paragraph vectors. We use mean-squared error loss for R data points and cross entropy for P data points. Following convention for encoder-only models, we finetune the entire model's weights (Devlin et al., 2019). We selected ModernBERT-Large, the largest available model, for our experiments. We fine-tuned three variants: MBERT-WQRM-P, MBERT-WQRM-R, and MBERT-WQRM-PR, each on their corresponding data variants. Hyperparameters, including learning rate and number of epochs, were optimized by minimizing validation loss. PR models can be used in either P- or R-mode at test-time. Initial evaluation indicated that PR models achieve higher performance in R-mode, and as such we used all PR models in R-mode by default during evaluation." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.274, + 0.828, + 0.445 + ], + "angle": 0, + "content": "Generative WQRM We finetune generative transformer architectures by converting classification and regression tasks to sequence-to-sequence problems using JSON output format (Table 5). We employ QLora (Dettmers et al., 2023) parameter-efficient tuning with FSDP (Zhao et al., 2023) and cross-entropy loss. Generative methods can produce natural-language rationales alongside predictions for interpretability. Wiegrefe et al. 
(2020) demonstrated label-rationale association as essential for response faithfulness, while (Ludan et al., 2023; Hase & Bansal, 2021) argued for incorporating explanations in model input/output to improve robustness against spurious cues. Since LAMP lacks expert rationales, we augment it with LLM-generated silver rationales. We collected five examples from professional writers showing either paragraph strength contrasts (P-style) or holistic critiques/praise (R-style), instructing them to cite specific excerpts. These expert rationales serve as demonstrations for Claude3.5 Sonnet3 to generate rationales (examples in Table 6, Appendix A.3)." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.449, + 0.828, + 0.562 + ], + "angle": 0, + "content": "The rationale augmentation is then used in two variants, either providing the rationales on the input \\((\\mathrm{IR}\\rightarrow \\mathrm{O})\\), or requiring the generative model to produce the rationale as part of its output \\((\\mathrm{I}\\rightarrow \\mathrm{RO})\\). We note that rationales are not available at test-time, and are only included during training as an augmentation technique. We finetune a total of seven variants, all based on LLama 3.1 70b model: Llama-WQRM-P, Llama-WQRM-R, Llama-WQRM-PR, Llama-WQRM-P-IR \\(\\rightarrow \\mathrm{O}\\) and Llama-WQRM-P-I \\(\\rightarrow \\mathrm{RO}\\), Llama-WQRM-PR-IR \\(\\rightarrow \\mathrm{O}\\) and Llama-WQRM-PR-I \\(\\rightarrow \\mathrm{RO}\\), based on different versions of the training data, and tune hyperparameters by minimizing validation loss." + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.572, + 0.495, + 0.59 + ], + "angle": 0, + "content": "4 The Writing Quality Benchmark" + }, + { + "type": "table", + "bbox": [ + 0.174, + 0.605, + 0.499, + 0.688 + ], + "angle": 0, + "content": "
Dataset | Pair Origin | Annotator | Len | N
Art or Artifice | AI/AI, AI/Human | Expert | 1.5-3k | 144
LAMP-test | AI/AI, AI/Human | Expert | 200-400 | 1,206
Style Mimic | Human/Human | Expert | 200-400 | 300
Synth. Mirror | AI/Human | Expert | 200-400 | 1,120
LM Arena | AI/AI | Crowd | 200-2.5k | 1,959
" + }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.698, + 0.504, + 0.773 + ], + "angle": 0, + "content": "Table 1: Writing Quality benchmark composition. Pair Origin: evaluated pairs are AI-generated (♂) or human-written (♀); Len: #words in evaluated responses; N: total evaluation pairs contributed to the benchmark." + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.588, + 0.827, + 0.797 + ], + "angle": 0, + "content": "We create the first benchmark centered on the task of writing quality assessment by collecting five relevant datasets and standardizing their data formats into a pairwise preference task. The task in the benchmark consists of a writing instruction and two writing responses, with a binary label indicating which of the two responses has higher writing quality. Table 1 lists the five datasets we selected for the benchmark, along with key properties of each dataset that lead to a comprehensive benchmark for writing quality. We include three datasets that involve AI-AI comparisons (Art or Artifice (Chakrabarty et al., 2024a), LAMP-test" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.796, + 0.825, + 0.868 + ], + "angle": 0, + "content": "(Chakrabarty et al., 2024b), and LM Arena (Zheng et al., 2023)), three that involve AI-Human comparisons (Art or Artifice, LAMP-test, and Synthetic Mirror), and one that involves Human-Human comparisons (Style Mimic) (Anonymous, 2025). This diversity ensures that models that perform well on the benchmark can judge writing quality regardless of whether the response was LLM generated or human-written." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.873, + 0.825, + 0.903 + ], + "angle": 0, + "content": "To assess writing quality prior work has argued for evaluation by professionals (ones with writing experience). 
Nevertheless, some writing quality preference datasets are based on" + }, + { + "type": "page_footnote", + "bbox": [ + 0.191, + 0.91, + 0.725, + 0.926 + ], + "angle": 0, + "content": "3Considered a top-performing model for writing tasks at the time of experiments." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.948, + 0.505, + 0.96 + ], + "angle": 0, + "content": "5" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.104, + 0.827, + 0.231 + ], + "angle": 0, + "content": "crowd-sourced judgments. We include four datasets based on expert judgments and one dataset based on crowd-sourced annotation (LM Arena) to represent both perspectives in the benchmark. Finally, we selected two datasets with long responses (Art or Artifice, LM Arena) and three with shorter responses ranging from 200-400 words, ensuring that models that perform well on the benchmark are capable of judging writing quality irrespective of length. Appendix A.4 details the procedure we followed to extract and standardize each dataset. Appendix A.5 provides an analysis we conducted on the relative difficulty of each dataset in the benchmark, finding that the five selected datasets provide a breadth of coverage in terms of difficulty." + }, + { + "type": "table_caption", + "bbox": [ + 0.434, + 0.231, + 0.642, + 0.245 + ], + "angle": 0, + "content": "Writing Quality Benchmark" + }, + { + "type": "table", + "bbox": [ + 0.238, + 0.245, + 0.758, + 0.504 + ], + "angle": 0, + "content": "
Model | Synthetic Mirror | Art or Artifice | LAMP | Style Mimic | LM Arena | Overall (↑)
(Pair Origin) | AI/Human | AI/AI, AI/Human | AI/AI, AI/Human | Human/Human | AI/AI | All
MBERT-WQRM-PR | 99.8 | 80.6 | 72.6 | 67.3 | 51.0 | 74.3
MBERT-WQRM-R | 100.0 | 80.6 | 76.1 | 59.3 | 51.0 | 73.4
MBERT-WQRM-P | 99.5 | 54.2 | 71.2 | 67.0 | 46.8 | 67.7
Llama3.1 - P - IR → O | 100.0 | 80.5 | 74.9 | 43.0 | 52.8 | 70.2
Llama3.1 - PR - IR → O | 99.6 | 69.4 | 73.7 | 54.3 | 50.1 | 69.4
Llama3.1 - PR - I → RO | 99.1 | 76.3 | 71.7 | 42.6 | 55.2 | 68.9
Llama3.1 - P - I → RO | 99.9 | 75.1 | 74.1 | 38.6 | 49.1 | 67.3
Llama3.1 (70b) - PR | 94.8 | 52.0 | 71.3 | 40.6 | 44.3 | 60.6
Llama3.1 (70b) - P | 88.1 | 45.1 | 71.7 | 35.6 | 47.7 | 57.6
Llama3.1 (70b) - R | 44.8 | 50.0 | 40.3 | 50.0 | 54.3 | 47.9
Pangram | 100.0 | 72.6 | 56.5 | 47.3 | 48.4 | 65.0
O3 | 67.7 | 85.4 | 41.4 | 67.5 | 59.6 | 64.3
Skywork-8B-v0.2 | 90.3 | 68.1 | 54.2 | 34.0 | 55.8 | 60.5
GPT-4o (5FS) | 39.5 | 68.8 | 40.3 | 67.3 | 55.5 | 54.3
O1 | 25.8 | 67.4 | 39.8 | 68.7 | 56.7 | 51.7
DeepSeek-r1 | 31.5 | 54.9 | 39.2 | 47.3 | 57.0 | 46.0
GPT-4o | 7.5 | 56.2 | 37.8 | 47.7 | 55.4 | 40.9
" + }, + { + "type": "table_caption", + "bbox": [ + 0.17, + 0.514, + 0.828, + 0.547 + ], + "angle": 0, + "content": "Table 2: Writing Quality Benchmark results. We evaluate zero-shot and few-shot LLMs, generic reward models, AI-detection models, and our fine-tuned models." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.558, + 0.431, + 0.574 + ], + "angle": 0, + "content": "4.1 Experimental Results on WQ" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.584, + 0.827, + 0.755 + ], + "angle": 0, + "content": "Our experiments on the WQ benchmark include four classes of models. First, Zero-Shot (ZS) and Few-Shot (FS) methods with top-performing instruction-tuned LLMs. We included both non-reasoning (GPT-4o) and reasoning models (Deepseek-R1, O1). Second, a top-performing generic reward model - SkyWork-8b-v0.2 - based on results on the RewardBench leaderboard (Lambert et al., 2024). Third, we include the Pangram AI-detector \\(^4\\), accessed through API. Finally, the trained WQRM models in generative and encoder-only settings as described in Section 3. Models that can produce pairwise judgments (such as SkyWork or WQRM-P models) were used as is, but for models that produce scalar rewards (WQRM-R, Pangram), a scalar reward was computed for each response, and inequality was applied to emit a pairwise preference. Scalar rewards can theoretically lead to a tie (a score difference of less than an epsilon like 0.001), but we observe few of these in practice (less than \\(0.1\\%\\) of pairs), and resolve those randomly." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.759, + 0.827, + 0.903 + ], + "angle": 0, + "content": "Experimental results are summarized in Table 2. First, we find that all the LLMs used in zero-shot settings perform below or a few percentage points above a random baseline of \\(50\\%\\). The performance is particularly low on portions of WQ that involve AI-human preference pairs. 
This confirms prior findings that LLMs used in LLM-as-a-judge settings tend to prefer AI-generated text over human writing (Panickssery et al., 2024). The O1 and R1 reasoning models do not significantly outperform their non-reasoning counterparts, indicating that out-of-the-box CoT-style reasoning, useful for math or coding tasks, doesn't improve writing quality assessment. O3 improves on Synthetic Mirror and Art or Artifice, showing some promise. Finally, adding five few-shot examples to GPT-4o does help improve performance from 40.9 to 54.3; however, further experiments with additional" + }, + { + "type": "page_footnote", + "bbox": [ + 0.191, + 0.91, + 0.519, + 0.924 + ], + "angle": 0, + "content": "4https://www.pangram.com/dashboard?type=text" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.506, + 0.96 + ], + "angle": 0, + "content": "6" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.104, + 0.825, + 0.135 + ], + "angle": 0, + "content": "in-context examples did not lead to further gains, confirming that few-shot examples in the instruction are not sufficient to achieve strong performance on WQ." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.139, + 0.827, + 0.21 + ], + "angle": 0, + "content": "The generic reward model – Skywork-8b-v0.2 – achieves an overall accuracy of 60.5, with strong performance on Synthetic Mirror and Art or Artifice. Though better than random, this overall performance is much lower than the \(93\%\) the model achieves on RewardBench, indicating that reward models geared for instruction-following evaluation are not effective at writing quality assessment out-of-the-box."
+ }, + { + "type": "text", + "bbox": [ + 0.171, + 0.216, + 0.828, + 0.302 + ], + "angle": 0, + "content": "The Pangram AI detection system achieves a total performance of \\(65.0\\%\\), the top performance for untrained models. Pangram achieves near-perfect performance on Synthetic Mirror and the AI-Human pairs of Art or Artifice. On samples that do not involve distinguishing between AI and human text, Pangram achieves near-random performance. In other words, AI-detection tools only correlate with writing quality assessment when an AI-generated text is judged to be worse than human-written text." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.307, + 0.828, + 0.42 + ], + "angle": 0, + "content": "Finally, the trained WQRM models achieve top performance on the benchmark. The Llama-based models achieve their strongest performance in the \\(\\mathrm{IR} \\rightarrow \\mathrm{O}\\) settings, confirming that augmenting the training data with rationales is beneficial for models that can generate rationales alongside their prediction. The ModernBERT-based models achieve the highest overall accuracy of \\(74.3\\%\\), with the PR variant outperforming the P and R models, indicating that pairwise and reward-based training can be complementary. While it's surprising to see a smaller model outperform Llama3.1-70B, it could be due to PEFT or the way the loss function is optimized. Future work can focus on bridging this gap." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.425, + 0.829, + 0.552 + ], + "angle": 0, + "content": "We observe that generative WQRM models perform best in P-mode, whereas encoder models perform best in R-mode. We offer a hypothesis for this reversal, related to the choice of loss. The generative models (Llama) are trained with a sequence-to-sequence loss, whereas the encoder-only models (MBert) are trained with custom losses (pairwise classification for P, mean-squared error for R).
In other words, Llama training on the reward-based data is more similar to 10-way classification than actual score regression, whereas the MBert training makes better use of the reward-based data. This leads the MBert-R models to outperform MBert-P models, whereas the reverse is true for the Llama models, as they are not able to properly take advantage of the R-based data." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.557, + 0.829, + 0.74 + ], + "angle": 0, + "content": "Looking at performance on individual datasets, Synthetic Mirror is the easiest dataset, with eight models achieving near-perfect performance. Some models achieve \\(80\\%+\\) performance on Art or Artifice, indicating that long-context evaluation is challenging but achievable. Style Mimic and LM Arena are the most challenging in terms of accuracy. Style Mimic is likely challenging because it is the only dataset whose comparisons do not involve AI-generated text, instead pitting two relatively high-quality human-written candidates against each other. LM Arena is challenging to all systems, with top performance at \\(57\\%\\) by Deepseek-R1. This low performance could be due to the crowd-sourced nature of LM Arena, with the dataset representing much broader and potentially noisier judgments. Though our trained WQRM models outperform baselines by almost \\(10\\%\\) overall, there remains wide room for improvement: writing quality assessment remains an open challenge to the community. Additional analysis in upcoming sections refers to the top-performing model - MBERT-WQRM-PR - simply as WQRM."
+ }, + { + "type": "title", + "bbox": [ + 0.172, + 0.749, + 0.58, + 0.768 + ], + "angle": 0, + "content": "5 Editing Pipeline with Test-Time Compute" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.781, + 0.828, + 0.852 + ], + "angle": 0, + "content": "To better understand the practical value of the WQRM model, we integrate it into a text-editing pipeline to produce LLM-generated candidates of higher quality according to WQRM scores. We first introduce the editing pipeline and candidate generation procedure, and then describe the large-scale preference annotation we conducted with professional writers to validate WQRM as part of an editing pipeline." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.856, + 0.542, + 0.872 + ], + "angle": 0, + "content": "5.1 Generating edits via Supervised Finetuning" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.882, + 0.829, + 0.927 + ], + "angle": 0, + "content": "Prior work from Chakrabarty et al. (2024b) shows experimentally that LLMs' text idiosyncrasies (clichés, redundancy, lack of subtext, etc.) can be mitigated through self-editing in an in-context setup. Borrowing motivation from them, we teach LLMs how to improve" + }, + { + "type": "page_number", + "bbox": [ + 0.493, + 0.948, + 0.506, + 0.96 + ], + "angle": 0, + "content": "7" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.104, + 0.827, + 0.288 + ], + "angle": 0, + "content": "their response via edits. Figure 6 illustrates the three components of the editing pipeline. Given a first draft response to an instruction from any given LLM, the first step consists of identifying and listing idiosyncrasies: spans in the first draft that can be rephrased to improve overall writing quality. For each identified idiosyncrasy, a second stage consists of rewriting the idiosyncrasy.
This is framed as an executable edit (Laban et al., 2023), where each edit consists of replacing an original string in a draft with an improved version. The third step simply executes all edits (by applying a series of string replace operations) to obtain the final edited draft. While Chakrabarty et al. (2024b) implemented this through prompt-chaining (Wu et al., 2022) with few-shot examples, we improved efficiency by supervised fine-tuning of GPT-4o and Llama3.1 70B based on the entire LAMP training set. The training input consists of the first draft alongside the entire edit interaction trace (detect, rewrite, execute) in a step-by-step chain-of-thought prompt, and the output is the edited paragraph. See Appendix A.7 for an example CoT prompt." + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.291, + 0.663, + 0.307 + ], + "angle": 0, + "content": "5.2 Selecting edited response by leveraging Test-Time Compute" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.308, + 0.827, + 0.465 + ], + "angle": 0, + "content": "Recent work from Snell et al. (2024) shows that test-time compute can be scaled optimally by using a reward model to search over the space of solutions. This approach typically involves generating multiple candidate responses and using a verifier to select an optimal response (Cobbe et al., 2021). The most popular technique to increase test-time compute is Best-of-N sampling, also known as Rejection Sampling, in which N candidates are generated independently. The reward model is then used to score each candidate, and the top-scoring candidate is selected. While test-time scaling is effective for reasoning tasks, our work aims to measure whether it is a practical strategy to improve human-AI alignment in subjective tasks such as writing. Next, we describe the validation study with experts to measure how well calibrated our WQRMs are to human judgment and whether additional test-time computation leads to meaningful improvements in AI writing quality."
+ }, + { + "type": "title", + "bbox": [ + 0.171, + 0.472, + 0.607, + 0.487 + ], + "angle": 0, + "content": "6 How well calibrated are our reward models?" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.492, + 0.828, + 0.735 + ], + "angle": 0, + "content": "We generated 100 draft responses (50 GPT-4o, 50 Llama3.1 70B) based on 90 writing instructions spanning 3 domains: literary fiction, non-fiction, and product marketing. For literary fiction and non-fiction, we create the instructions through instruction back-translation (Li et al., 2023) conditioned on expert-written paragraphs in Anonymous (2025) and news articles in the data from Russell et al. (2025). Marketing writing instructions were based on products recommended in WireCutter articles across the Home, Kitchen and Tech sections. The right portion of Figure 1 summarizes the process we follow to leverage test-time compute. Specifically, we obtain a first draft from an LLM (GPT-4o or Llama3.1 70B) followed by drawing \\( N = 20 \\) candidate edited responses from the respective SFT model (Section 5.1)6, and score each candidate with the WQRM model. We filter out any candidate that scores lower than the first draft, and then form response triplets by selecting the first draft, a randomly-selected edited response (random edit), and the Best-of-N candidate response according to WQRM (Best Edit) (See example triplet in Table 9). We recruited 9 professional writers through mailing lists from top MFA programs in the US. They were asked to rank the three responses based on their overall quality (See Figure 8 for interface). Each response triplet was annotated by three experts, whose rankings we aggregated into a majority rank. Participants completed annotation in batches of 10 triplets at a time, and were paid $100 per batch."
+ }, + { + "type": "title", + "bbox": [ + 0.172, + 0.739, + 0.33, + 0.755 + ], + "angle": 0, + "content": "6.1 Study Findings" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.765, + 0.828, + 0.85 + ], + "angle": 0, + "content": "Figure 3 summarizes findings from the expert annotation. In Figure 3a, we plot the distribution of rankings across all triplets. Best Edit candidates were most preferred overall with an average rank of 1.58, followed by random edit (2.09) and first draft (2.26). The breakdown of rankings across domains (fiction, non-fiction, marketing) or LLM (GPT-4o vs. Llama 3.1) is presented in Appendix A.8. In short, Best Edit achieves the top rank in all conditions, confirming the generalization of WQRM scores across conditions." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.855, + 0.828, + 0.887 + ], + "angle": 0, + "content": "If the reward model is well-calibrated, the WQRM score gap between responses should indicate their qualitative difference. For example, responses scoring 4 and 6 should have a larger" + }, + { + "type": "page_footnote", + "bbox": [ + 0.191, + 0.896, + 0.465, + 0.911 + ], + "angle": 0, + "content": "5https://www.nytimes.com/wirecutter/" + }, + { + "type": "page_footnote", + "bbox": [ + 0.193, + 0.911, + 0.545, + 0.923 + ], + "angle": 0, + "content": "6If first draft is from GPT4o we use GPT4o SFT model" + }, + { + "type": "list", + "bbox": [ + 0.191, + 0.896, + 0.545, + 0.923 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.505, + 0.96 + ], + "angle": 0, + "content": "8" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "image", + "bbox": [ + 0.182, + 0.105, + 0.443, + 0.24 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.208, + 0.249, + 0.414, + 0.263 + ], + "angle": 0, + "content": "(a) Expert 
Ranking Distribution" + }, + { + "type": "image", + "bbox": [ + 0.455, + 0.104, + 0.633, + 0.24 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.47, + 0.249, + 0.62, + 0.264 + ], + "angle": 0, + "content": "(b) Gap vs. Agreement" + }, + { + "type": "image", + "bbox": [ + 0.645, + 0.105, + 0.821, + 0.24 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.657, + 0.249, + 0.807, + 0.264 + ], + "angle": 0, + "content": "(c) Sensitivity Analysis" + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.274, + 0.825, + 0.318 + ], + "angle": 0, + "content": "Figure 3: Results and analysis of WQRM based: (a) distribution of preference based on 300 expert triplet rankings, (b) calibration between gap in WQRM scores and matching expert preference, and (c) applying experts edits gradually to a draft leads to gradual reward gains." + }, + { + "type": "image", + "bbox": [ + 0.179, + 0.347, + 0.496, + 0.557 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.205, + 0.566, + 0.471, + 0.581 + ], + "angle": 0, + "content": "(a) Less content detail in writing prompt" + }, + { + "type": "image", + "bbox": [ + 0.527, + 0.348, + 0.822, + 0.557 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.538, + 0.566, + 0.81, + 0.581 + ], + "angle": 0, + "content": "(b) More content detail in writing prompt" + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.591, + 0.825, + 0.636 + ], + "angle": 0, + "content": "Figure 4: Writing quality analysis of human-written and LLM-generated texts according to WQRM on (a) less and (b) more content detail in the writing prompt. Prompts with less content detail average 30 words, whereas prompts with more content detail average 180." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.653, + 0.827, + 0.781 + ], + "angle": 0, + "content": "quality gap than those scoring 4 and 4.5. 
To inspect WQRM calibration, we computed the WQRM gap between all annotated response pairs and plotted it against expert annotation agreement. As shown in Figure 3b, the WQRM gap positively correlates with expert agreement: when responses differ by \\(\\leq 0.5\\) points, individual experts prefer the higher-scoring response only \\(55\\%\\) of the time. When the gap exceeds 3.0, this increases to \\(80\\%\\). Agreement with the majority rank based on three expert annotations (green line) shows an even stronger positive correlation. In short, we find evidence that WQRM is well-calibrated: a wider gap in scores between two responses is evidence that an expert (or group of experts) would be more likely to prefer the higher-scoring response over the lower-scoring response." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.786, + 0.827, + 0.938 + ], + "angle": 0, + "content": "Besides calibration, we analyze the sensitivity of the WQRM model to minor edits and their impact on writing quality. The LAMP dataset consists of drafts that are edited by expert writers to improve writing, with samples comprising eight edits per passage on average. We implement a gradual version of the LAMP-test set, where each expert edit is reversed, and we execute them one at a time, computing the WQRM score at each intermediate step. Results from the gradual LAMP-test are summarized in Figure 3c: each time an additional edit is implemented, the median WQRM score increases by 0.2, even though WQRM was not trained on intermediate responses and only saw samples where no edits or all edits had been applied. In summary, we find evidence that minor edits to a response will lead to small but significant changes in WQRM scores, indicative of the fine sensitivity of the reward model."
+ }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.948, + 0.504, + 0.959 + ], + "angle": 0, + "content": "9" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.102, + 0.572, + 0.121 + ], + "angle": 0, + "content": "7 How does content affect writing quality?" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.133, + 0.828, + 0.314 + ], + "angle": 0, + "content": "Effectively judging writing quality impacts both understanding and improving LLM writing. Writing quality is, however, closely tied to content. It's known that LLMs struggle with novel ideas (content planning), making their writing appear trite. Even with detailed original content, they struggle to maintain good writing standards (avoiding clichés, revealing subtext, and avoiding purple prose). To understand how content affects writing quality, we analyzed writing from several LLMs with and without detailed content. We used 50 writing instructions from Style Mimic data, creating two variants: a 30-word prompt with less detail (e.g., \"A family Christmas unfolds through emotional reflections on a father's new family, a daughter's excuse to stay behind, and the complex dynamics of grief and blended identities.\") and a 150-200 word detailed prompt (Table 10 in Appendix). Style Mimic provides an original excerpt from an award-winning author and an MFA student's attempt to mimic that style for each prompt. Each sample includes the detailed content used for Figure 4b." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.321, + 0.828, + 0.476 + ], + "angle": 0, + "content": "Since WQRM was only trained on samples from LAMP, which consists of AI-generated paragraphs edited by MFA students, we retrained a better-calibrated reward model with a small number of fully human-written, high-quality texts (See Appendix A.11 for more details).
Figure 4a shows writing quality scores from the WQRM model when prompts lack detailed content. Award-winning authors achieve a median score of 8.9, while LLMs score 4.8-6.6 with much higher variance. Despite WQRM being trained only on AI-generated paragraphs edited by MFA students and relatively few human-written samples, it scored the 50 author-written texts higher than all LLMs, demonstrating model generalization. GPT-4.5, though considered the best writing LLM, showed no quality advantage. The significant gap between award-winning authors and LLMs shows that in the absence of original good-quality content, all LLMs are poor writers." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.481, + 0.828, + 0.663 + ], + "angle": 0, + "content": "Figure 4b shows the writing quality of several LLMs leveraging the new WQRM model when detailed content is provided in the writing prompt. In fact, the content detail is often \\(0.5\\mathrm{x}\\) to \\(0.75\\mathrm{x}\\) the word count of the paragraph to be written/generated. Results with the detailed prompts provide additional insights. Though the variance remains high for all models, the more recent models (GPT-4.5, Claude 3.7-Sonnet, Gemini-2.5-pro) achieve improved writing quality given the more detailed prompts, achieving median scores of around 7.0. This should not be surprising, as the amount of detail provided in the writing prompt reduces the burden of originality and novelty on the LLM. What is particularly impressive here is that paragraphs written by MFA students based on the same detailed content were rated significantly higher than all LLMs, with a median of 8.6. The gap between award-winning authors and MFA students is narrow here, although the distribution from MFA students shows higher variance. Our results highlight that even when provided with very detailed original content, LLMs are far behind trained writers."
+ }, + { + "type": "text", + "bbox": [ + 0.17, + 0.668, + 0.829, + 0.727 + ], + "angle": 0, + "content": "In summary, the analysis reveals that current LLMs are not yet capable of reliably generating high-quality creative writing at the level of an MFA student or award-winning author, especially when not spoon-fed original content. When provided with enough content detail in the prompt, the latest models show promise but still remain unreliable." + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.745, + 0.309, + 0.759 + ], + "angle": 0, + "content": "8 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.758, + 0.829, + 0.926 + ], + "angle": 0, + "content": "In this work, we introduced the Writing Quality benchmark (WQ) and Writing Quality Reward Models (WQRM) to address the critical challenge of evaluating and improving the quality of AI-generated text. Our models, trained on implicit preferences expressed via edits, significantly outperform existing approaches, achieving \\(74\\%\\) accuracy on the WQ benchmark and demonstrating strong generalization across diverse writing contexts, as confirmed by a validation study involving 9 professional writers. Future work can address alternative test-time computation, such as long chains-of-thought (CoTs) that enable strategies like backtracking and correction of idiosyncrasies for improving writing. While our approach improves AI-generated text by reducing idiosyncrasies, it is nowhere near expert-quality writing. However, we hope that our contributions can serve as a catalyst for further research in writing quality assessment and the development of AI writing systems that are more aligned with human preferences."
+ }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "10" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "title", + "bbox": [ + 0.174, + 0.103, + 0.275, + 0.118 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.128, + 0.826, + 0.171 + ], + "angle": 0, + "content": "Barrett R Anderson, Josh Hemant Shah, and Max Kreminski. Homogenization effects of large language models on human creative ideation. In Proceedings of the 16th Conference on Creativity & Cognition, pp. 413-425, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.182, + 0.825, + 0.21 + ], + "angle": 0, + "content": "Anonymous. Literary voice reproduction study mfa writers vs. llms in authorial style. In Under Submission, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.221, + 0.826, + 0.264 + ], + "angle": 0, + "content": "Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.275, + 0.826, + 0.302 + ], + "angle": 0, + "content": "Deborah Brandt. The rise of writing: Redefining mass literacy. Cambridge University Press, 2014." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.315, + 0.826, + 0.371 + ], + "angle": 0, + "content": "Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, et al. Open problems and fundamental limitations of reinforcement learning from human feedback. arXiv preprint arXiv:2307.15217, 2023." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.382, + 0.826, + 0.452 + ], + "angle": 0, + "content": "Tuhin Chakrabarty, Philippe Laban, Divyansh Agarwal, Smaranda Muresan, and Chien-Sheng Wu. Art or artifice? large language models and the false promise of creativity. In Proceedings of the CHI Conference on Human Factors in Computing Systems, CHI '24, New York, NY, USA, 2024a. Association for Computing Machinery. ISBN 9798400703300. doi: 10.1145/3613904.3642731. URL https://doi.org/10.1145/3613904.3642731." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.463, + 0.826, + 0.506 + ], + "angle": 0, + "content": "Tuhin Chakrabarty, Philippe Laban, and Chien-Sheng Wu. Can ai writing be salvaged? mitigating idiosyncrasies and improving human-ai alignment in the writing process through edits. arXiv preprint arXiv:2409.14509, 2024b." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.517, + 0.826, + 0.587 + ], + "angle": 0, + "content": "Tuhin Chakrabarty, Vishakh Padmakumar, Faeze Brahman, and Smaranda Muresan. Creativity support in the age of large language models: An empirical study involving professional writers. In Proceedings of the 16th Conference on Creativity & Cognition, C & C '24, pp. 132-155, New York, NY, USA, 2024c. Association for Computing Machinery. ISBN 9798400704857. doi: 10.1145/3635636.3656201. URL https://doi.org/10.1145/3635636.3656201." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.598, + 0.826, + 0.654 + ], + "angle": 0, + "content": "Yinlam Chow, Guy Tennenholtz, Izzeddin Gur, Vincent Zhuang, Bo Dai, Sridhar Thiagarajan, Craig Boutilier, Rishabh Agarwal, Aviral Kumar, and Aleksandra Faust. Inference-aware fine-tuning for best-of-n sampling in large language models. arXiv preprint arXiv:2412.15287, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.666, + 0.826, + 0.709 + ], + "angle": 0, + "content": "Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 
Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.72, + 0.826, + 0.763 + ], + "angle": 0, + "content": "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems, 2021. URL https://arxiv.org/abs/2110.14168, 9, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.773, + 0.826, + 0.815 + ], + "angle": 0, + "content": "Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. Advances in neural information processing systems, 36:10088-10115, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.827, + 0.826, + 0.924 + ], + "angle": 0, + "content": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423/." 
+ }, + { + "type": "list", + "bbox": [ + 0.175, + 0.128, + 0.826, + 0.924 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.507, + 0.96 + ], + "angle": 0, + "content": "11" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.103, + 0.828, + 0.147 + ], + "angle": 0, + "content": "Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.155, + 0.827, + 0.185 + ], + "angle": 0, + "content": "Bradley Emi and Max Spero. Technical report on the pangram ai-generated text classifier. arXiv preprint arXiv:2402.14873, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.193, + 0.825, + 0.224 + ], + "angle": 0, + "content": "Yang Gao, Dana Alon, and Donald Metzler. Impact of preference noise on the alignment performance of generative language models. arXiv preprint arXiv:2404.09824, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.231, + 0.825, + 0.275 + ], + "angle": 0, + "content": "Katy Ilonka Gero, Vivian Liu, and Lydia Chilton. Sparks: Inspiration for science writing using language models. In Proceedings of the 2022 ACM Designing Interactive Systems Conference, pp. 1002-1019, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.282, + 0.825, + 0.311 + ], + "angle": 0, + "content": "Sian Gooding, Lucia Lopez-Rivilla, and Edward Grefenstette. Writing as a testbed for open ended agents, 2025. URL https://arxiv.org/abs/2503.19711." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.32, + 0.826, + 0.363 + ], + "angle": 0, + "content": "Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.371, + 0.827, + 0.414 + ], + "angle": 0, + "content": "Kunal Handa, Alex Tamkin, Miles McCain, Saffron Huang, Esin Durmus, Sarah Heck, Jared Mueller, Jerry Hong, Stuart Ritchie, Tim Belonax, et al. Which economic tasks are performed with ai? evidence from millions of claude conversations." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.422, + 0.827, + 0.464 + ], + "angle": 0, + "content": "Peter Hase and Mohit Bansal. When can models learn from explanations? a formal framework for understanding the roles of explanation data. arXiv preprint arXiv:2102.02201, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.473, + 0.825, + 0.504 + ], + "angle": 0, + "content": "John R Hayes, Linda Flower, Karen A Schriver, James Stratman, Linda Carey, et al. Cognitive processes in revision. Advances in applied psycholinguistics, 2:176-240, 1987." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.512, + 0.826, + 0.541 + ], + "angle": 0, + "content": "John Herrman. Is that ai? or does it just suck? New York Magazine, 2024a. URL https://nymag.com/intelligencer/article/is-that-ai-or-does-it-just-suck.html." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.55, + 0.826, + 0.592 + ], + "angle": 0, + "content": "John Herrman. The internet's ai slop problem is only going to get worse. New York Magazine - Intelligencer, 2024b. URL https://nymag.com/intelligencer/article/ai-generated-content-online-slop-spam.html. Accessed: 2025-03-06." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.601, + 0.825, + 0.643 + ], + "angle": 0, + "content": "Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron Courville, Alessandro Sordoni, and Rishabh Agarwal. V-star: Training verifiers for self-taught reasoners. arXiv preprint arXiv:2402.06457, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.652, + 0.825, + 0.694 + ], + "angle": 0, + "content": "Daphne Ippolito, Ann Yuan, Andy Coenen, and Sehmon Burnam. Creative writing with an ai-powered writing assistant: Perspectives from professional writers. arXiv preprint arXiv:2211.05030, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.704, + 0.827, + 0.745 + ], + "angle": 0, + "content": "Yixin Ji, Juntao Li, Hai Ye, Kaixin Wu, Jia Xu, Linjian Mo, and Min Zhang. Test-time computing: from system-1 thinking to system-2 thinking. arXiv preprint arXiv:2501.02497, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.755, + 0.827, + 0.784 + ], + "angle": 0, + "content": "Kate Knibbs. Confessions of an ai clickbait kingpin. Wired, 2024a. URL https://www.wired.com/story/confessions-of-an-ai-clickbait-kingpin/. Accessed: 2025-03-07." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.793, + 0.827, + 0.834 + ], + "angle": 0, + "content": "Kate Knibbs. Scammy ai-generated books are flooding amazon. Wired, 2024b. URL https:// www.wired.com/story/scammy-ai-generated-books-flooding-amazon/. Accessed: 2025- 03-07." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.844, + 0.825, + 0.873 + ], + "angle": 0, + "content": "Kate Knibbs. Ai slop is flooding medium. Wired, 2024c. URL https://www.wired.com/story/ai-generated-medium-posts-content-moderation/. Accessed: 2025-03-06." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.882, + 0.827, + 0.924 + ], + "angle": 0, + "content": "Kate Knibbs. Some of substack's biggest newsletters rely on ai writing tools. Wired, 2024d. 
URL https://www.wired.com/story/substacks-writers-use-ai-chatgpt/. Accessed: 2025-03-07." + }, + { + "type": "list", + "bbox": [ + 0.173, + 0.103, + 0.828, + 0.924 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "12" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.103, + 0.829, + 0.147 + ], + "angle": 0, + "content": "Dmitry Kobak, Rita González-Márquez, Emőke-Ágnes Horvát, and Jan Lause. Delving into chatgpt usage in academic writing through excess vocabulary. arXiv preprint arXiv:2406.07016, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.155, + 0.827, + 0.2 + ], + "angle": 0, + "content": "Philippe Laban, Jesse Vig, Marti A Hearst, Caiming Xiong, and Chien-Sheng Wu. Beyond the chat: Executable and verifiable text-editing with llms. arXiv preprint arXiv:2309.15337, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.209, + 0.825, + 0.24 + ], + "angle": 0, + "content": "Nathan Lambert and Roberto Calandra. The alignment ceiling: Objective mismatch in reinforcement learning from human feedback. arXiv preprint arXiv:2311.00168, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.248, + 0.827, + 0.293 + ], + "angle": 0, + "content": "Nathan Lambert, Valentina Pyatkin, Jacob Morrison, LJ Miranda, Bill Yuchen Lin, Khyathi Chandu, Nouha Dziri, Sachin Kumar, Tom Zick, Yejin Choi, et al. Rewardbench: Evaluating reward models for language modeling. arXiv preprint arXiv:2403.13787, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.3, + 0.827, + 0.328 + ], + "angle": 0, + "content": "Timothy Laquintano and Annette Vee. Ai and the everyday writer. PMLA, 139(3):527-532, 2024." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.339, + 0.827, + 0.396 + ], + "angle": 0, + "content": "Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, et al. Rlaif vs. rlhf: Scaling reinforcement learning from human feedback with ai feedback. arXiv preprint arXiv:2309.00267, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.405, + 0.825, + 0.449 + ], + "angle": 0, + "content": "Jinsook Lee, A. J. Alvero, Thorsten Joachims, and René F. Kizilcec. Poor alignment and steerability of large language models: Evidence from college admission essays. 2025. URL https://api.semanticscholar.org/CorpusID:277321621." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.457, + 0.825, + 0.516 + ], + "angle": 0, + "content": "Mina Lee, Katy Ilonka Gero, John Joon Young Chung, Simon Buckingham Shum, Vipul Raheja, Hua Shen, Subhashini Venugopalan, Thiemo Wambsganss, David Zhou, Emad A Alghamdi, et al. A design space for intelligent and interactive writing assistants. In Proceedings of the CHI Conference on Human Factors in Computing Systems, pp. 1-35, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.525, + 0.827, + 0.568 + ], + "angle": 0, + "content": "Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Omer Levy, Luke Zettlemoyer, Jason Weston, and Mike Lewis. Self-alignment with instruction backtranslation. arXiv preprint arXiv:2308.06259, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.577, + 0.825, + 0.648 + ], + "angle": 0, + "content": "Zhuoyan Li, Chen Liang, Jing Peng, and Ming Yin. The value, benefits, and concerns of generative ai-powered assistance in writing. In Proceedings of the CHI Conference on Human Factors in Computing Systems, CHI '24, New York, NY, USA, 2024. Association for Computing Machinery. ISBN 9798400703300. doi: 10.1145/3613904.3642625. URL https://doi.org/10.1145/3613904.3642625."
+ }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.657, + 0.827, + 0.701 + ], + "angle": 0, + "content": "Weixin Liang, Yaohui Zhang, Zhengxuan Wu, Haley Lepp, Wenlong Ji, Xuandong Zhao, Hancheng Cao, Sheng Liu, Siyu He, Zhi Huang, et al. Mapping the increasing use of llms in scientific papers. arXiv preprint arXiv:2404.01268, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.71, + 0.827, + 0.754 + ], + "angle": 0, + "content": "Weixin Liang, Yaohui Zhang, Mihai Codreanu, Jiayu Wang, Hancheng Cao, and James Zou. The widespread adoption of large language model-assisted writing across society. arXiv preprint arXiv:2502.09747, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.762, + 0.827, + 0.807 + ], + "angle": 0, + "content": "Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. In The Twelfth International Conference on Learning Representations, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.814, + 0.827, + 0.859 + ], + "angle": 0, + "content": "Chris Yuhao Liu, Liang Zeng, Jiacai Liu, Rui Yan, Jujie He, Chaojie Wang, Shuicheng Yan, Yang Liu, and Yahui Zhou. Skywork-reward: Bag of tricks for reward modeling in llms. arXiv preprint arXiv:2410.18451, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.868, + 0.827, + 0.923 + ], + "angle": 0, + "content": "Zhexiong Liu, Diane Litman, Elaine Wang, Tianwen Li, Mason Gobat, Lindsay Clare Matsumura, and Richard Correnti. erevise+ rf: A writing evaluation system for assessing student essay revisions and providing formative feedback. arXiv preprint arXiv:2501.00715, 2025." 
+ }, + { + "type": "list", + "bbox": [ + 0.173, + 0.103, + 0.829, + 0.923 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.508, + 0.96 + ], + "angle": 0, + "content": "13" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.103, + 0.829, + 0.148 + ], + "angle": 0, + "content": "Josh Magnus Ludan, Yixuan Meng, Tai Nguyen, Saurabh Shah, Qing Lyu, Marianna Apidianaki, and Chris Callison-Burch. Explanation-based finetuning makes models more robust to spurious cues. arXiv preprint arXiv:2305.04990, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.155, + 0.826, + 0.2 + ], + "angle": 0, + "content": "Guillermo Marco, Julio Gonzalo, Ramón del Castillo, and María Teresa Mateo Girona. Pron vs prompt: Can large language models already challenge a world-class fiction author at creative text writing? arXiv preprint arXiv:2407.01119, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.207, + 0.829, + 0.279 + ], + "angle": 0, + "content": "Piotr Mirowski, Kory W. Mathewson, Jaylen Pittman, and Richard Evans. Co-writing screenplays and theatre scripts with language models: Evaluation by industry professionals. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI '23, New York, NY, USA, 2023. Association for Computing Machinery. ISBN 9781450394215. doi: 10.1145/3544548.3581225. URL https://doi.org/10.1145/3544548.3581225." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.286, + 0.829, + 0.346 + ], + "angle": 0, + "content": "Piotr Mirowski, Juliette Love, Kory Mathewson, and Shakir Mohamed. A robot walks into a bar: Can language models serve as creativity support tools for comedy? an evaluation of llms' humour alignment with comedians.
In The 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 1622-1636, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.353, + 0.826, + 0.382 + ], + "angle": 0, + "content": "OpenAI. Introducing openai o1 preview. https://openai.com/index/introducing-openai-o1-preview/, 2024. Accessed: 2025-03-20." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.39, + 0.829, + 0.448 + ], + "angle": 0, + "content": "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.456, + 0.826, + 0.485 + ], + "angle": 0, + "content": "Vishakh Padmakumar and He He. Does writing with language models reduce content diversity? arXiv preprint arXiv:2309.05196, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.494, + 0.826, + 0.524 + ], + "angle": 0, + "content": "Jane Pan, He He, Samuel R Bowman, and Shi Feng. Spontaneous reward hacking in iterative self-refinement. arXiv preprint arXiv:2407.04549, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.532, + 0.829, + 0.575 + ], + "angle": 0, + "content": "Arjun Panickssery, Samuel Bowman, and Shi Feng. Llm evaluators recognize and favor their own generations. Advances in Neural Information Processing Systems, 37:68772-68802, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.584, + 0.826, + 0.627 + ], + "angle": 0, + "content": "Jenna Russell, Marzena Karpinska, and Mohit Iyyer. People who frequently use chatgpt for writing tasks are accurate and robust detectors of ai-generated text. arXiv preprint arXiv:2501.15654, 2025." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.635, + 0.829, + 0.667 + ], + "angle": 0, + "content": "Chantal Shaib, Yanai Elazar, Junyi Jessy Li, and Byron C Wallace. Detection and measurement of syntactic templates in generated text. arXiv preprint arXiv:2407.00211, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.674, + 0.829, + 0.716 + ], + "angle": 0, + "content": "Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.726, + 0.829, + 0.77 + ], + "angle": 0, + "content": "Tianchun Wang, Yanzhou Chen, Zichuan Liu, Zhanwen Chen, Haifeng Chen, Xiang Zhang, and Wei Cheng. Humanizing the machine: Proxy attacks to mislead llm detectors. arXiv preprint arXiv:2410.19230, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.777, + 0.829, + 0.836 + ], + "angle": 0, + "content": "Benjamin Warner, Antoine Chaffin, Benjamin Clavié, Orion Weller, Oskar Hallström, Said Taghadouini, Alexis Gallagher, Raja Biswas, Faisal Ladhak, Tom Aarsen, et al. Smarter, better, faster, longer: A modern bidirectional encoder for fast, memory efficient, and long context finetuning and inference. arXiv preprint arXiv:2412.13663, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.844, + 0.826, + 0.873 + ], + "angle": 0, + "content": "Sarah Wiegreffe, Ana Marasovic, and Noah A Smith. Measuring association between labels and free-text rationales. arXiv preprint arXiv:2010.12762, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.881, + 0.826, + 0.926 + ], + "angle": 0, + "content": "Tongshuang Wu, Michael Terry, and Carrie Jun Cai. Ai chains: Transparent and controllable human-ai interaction by chaining large language model prompts. In Proceedings of the 2022 CHI conference on human factors in computing systems, pp. 1-22, 2022."
+ }, + { + "type": "list", + "bbox": [ + 0.173, + 0.103, + 0.829, + 0.926 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "14" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.103, + 0.826, + 0.148 + ], + "angle": 0, + "content": "Yangzhen Wu, Zhiqing Sun, Shanda Li, Sean Welleck, and Yiming Yang. Inference scaling laws: An empirical analysis of compute-optimal inference for problem-solving with language models. arXiv preprint arXiv:2408.00724, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.155, + 0.825, + 0.199 + ], + "angle": 0, + "content": "Ann Yuan, Andy Coenen, Emily Reif, and Daphne Ippolito. Wordcraft: story writing with large language models. In Proceedings of the 27th International Conference on Intelligent User Interfaces, pp. 841-852, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.206, + 0.825, + 0.251 + ], + "angle": 0, + "content": "Yanli Zhao, Andrew Gu, Rohan Varma, Liang Luo, Chien-Chin Huang, Min Xu, Less Wright, Hamid Shojanazeri, Myle Ott, Sam Shleifer, et al. Pytorch fsdp: experiences on scaling fully sharded data parallel. arXiv preprint arXiv:2304.11277, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.258, + 0.825, + 0.302 + ], + "angle": 0, + "content": "Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Tianle Li, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zhuohan Li, Zi Lin, Eric P Xing, et al. Lmsys-chat-1m: A large-scale real-world llm conversation dataset. arXiv preprint arXiv:2309.11998, 2023."
+ }, + { + "type": "list", + "bbox": [ + 0.173, + 0.103, + 0.826, + 0.302 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "15" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.102, + 0.303, + 0.121 + ], + "angle": 0, + "content": "A Appendix" + }, + { + "type": "table", + "bbox": [ + 0.186, + 0.151, + 0.812, + 0.415 + ], + "angle": 0, + "content": "
Which is better quality writing?
A: One evening – it was our last day – in Tours, we had dinner in a brightly-lit restaurant where the walls were lined with mirrors, frequented by a sophisticated clientele. [...] made with “pig slop,” white and tasteful. [...] expressing resentment at having been treated with contempt because we were not chic customers who ate “à la carte.”Paragraph B is of higher quality\nConciseness and Clarity: Paragraph B is more streamlined[...] tighter and more focused.\nEmotional Depth: Paragraph B captures the speaker's evolving realization [...] making the writing more impactful.\nImagery and Language: The use of phrases [...] adds nuance and emotional weight. In contrast, Paragraph A's descriptions is literal and lack [...]
B: We sat at the group table, my father and I, our reflections dim in the restaurant's mirror. The waitress barely glanced our way. I adjusted my glasses, [...] I realize now that what I witnessed that day was not just a meal but the precise moment when social class became visible to me, manifested in a pot of yogurt, in the waitress's averted eyes, [...]
Overall, Paragraph B delivers a more poignant [...]
" + }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.424, + 0.825, + 0.454 + ], + "angle": 0, + "content": "Table 3: LLM preferring its own writing (Paragraph B) over Paragraph A written by Annie Ernaux, 2022 Nobel Prize in Literature." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.491, + 0.396, + 0.508 + ], + "angle": 0, + "content": "A.1 Example LAMP Sample" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.52, + 0.825, + 0.548 + ], + "angle": 0, + "content": "Table 4 shows an example from LAMP data with writing prompt and response before and after edit." + }, + { + "type": "table", + "bbox": [ + 0.174, + 0.565, + 0.839, + 0.801 + ], + "angle": 0, + "content": "
Prompt: What happened while you were sitting in your car, waiting for a call, amidst the torrential rain and chaotic scene in La Conchita?
Edited Resp.: As I sat in my car, the rain drummed against the roof, a relentless beat that mirrored the anxiety thrumming through my veins. I was waiting for a call, any call, that would give me a reason to move, to act, to escape the suffocating stillness. La Conchita's streets were a mess. Cars hydroplaning on the flooded roads, People scurrying for cover like ants from a disturbed nest. I watched as a woman, her hair plastered to her face, struggled to free her stuck umbrella, Her eyes darting darted towards me with a fleeting plea for help. I looked away, feeling the weight of my inaction. The rain-soaked world outside seemed to be shrinking, the droplets on my windshield coalescing into tiny mirrors that reflected my own uncertainty. My phone, silent and unyielding, lay on the passenger seat, a-constant reminder of my powerlessness. I thought of all the what ifs, the maybe, the possibilities that hung in the balance. The rain intensified, drumming out a rhythm that seemed to match the beat of my heart. In the chaos, I felt a strange sense of calm, as if the storm was washing away my doubts, leaving only the stark reality of the present. And yet, I remained frozen, waiting for that call, that spark, that would set me in motion.
" + }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.81, + 0.828, + 0.84 + ], + "angle": 0, + "content": "Table 4: An example from LAMP. Text in bold with strikethrough represents edited span. The normalized Initial and Final Writing Quality scores are 3.3 and 7.0, respectively." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.88, + 0.486, + 0.896 + ], + "angle": 0, + "content": "A.2 Generative WQRM Prompt Formats" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.909, + 0.709, + 0.926 + ], + "angle": 0, + "content": "Table 5 shows a P and R style training prompt that is used to train WQRMs." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "16" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "table", + "bbox": [ + 0.174, + 0.101, + 0.849, + 0.389 + ], + "angle": 0, + "content": "
P{"content": "You are an AI assistant who has knowledge about creative writing.", "role": "system"}
{"content": "You are given two paragraphs of writing for a given instruction.\\nYour task is to determine which paragraph is overall better in terms of writing quality.\\nParagraph 1:\\nAfter her father's passing, Marina and her family [......]\\nParagraph 2:\\n[......] had cherished so deeply.\\n\\nYou must produce your answer in the following JSON format:\\n{"preference":"1-2"}\\nwhere 'preference' should be "1" if you think Paragraph 1 is better, "2" if you think Paragraph 2 is better.\\n", "role": "user"}
{"content": {"preference": "2"}, "role": "assistant"}
R{"content": "You are an AI assistant who has knowledge about creative writing.", "role": "system"}
{"content": "You are given a paragraph of creative writing. You must score it on a scale from 1 to 10, where 1 is the lowest quality and 10 is the highest quality.\\nParagraph:\\nThe rich history, and recent revitalisation of Santa Barbara are most clear [......]\\nYou must produce your answer in the following JSON format:\\n{"score": 1}\\nwhere 'score' is an integer between 1 and 10.", "role": "user"}
{"content": {"score": "7"}, "role": "assistant"}
" + }, + { + "type": "table_caption", + "bbox": [ + 0.208, + 0.398, + 0.79, + 0.415 + ], + "angle": 0, + "content": "Table 5: Training prompts in P and R setting for instruction tuning Llama3.1 70B" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.441, + 0.611, + 0.457 + ], + "angle": 0, + "content": "A.3 Example Rationales For Generative WQRM Training" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.468, + 0.825, + 0.511 + ], + "angle": 0, + "content": "Table 6 shows the silver natural language rationales for P and R modes. In P, the rationale is more contrastive in nature, while in R it is an absolute critique. Both rationales cite excerpts from the original paragraph." + }, + { + "type": "table", + "bbox": [ + 0.174, + 0.526, + 0.86, + 0.774 + ], + "angle": 0, + "content": "
Paragraph 2 is more concise and direct in its storytelling, avoiding the overwrought metaphors and clichéd language found in Paragraph 1. For example, while Paragraph 1 uses flowery phrases like “a delicate bloom unfurling in the wake of a harsh winter” to describe the mother’s recovery, Paragraph 2 simply states “Marina’s mother slowly emerged from her all-consuming grief.” The second paragraph also maintains emotional authenticity without becoming melodramatic. The first paragraph’s phrases like “brick by brick, memory by memory” and “the resilience of the human spirit” feel forced and sentimental, whereas Paragraph 2 conveys the same emotional journey with more restraint and impact. The shorter length of Paragraph 2 also helps maintain the story’s momentum without getting bogged down in unnecessary elaboration.
The paragraph attempts to capture a poignant moment of parent-child separation but relies too heavily on telling rather than showing, with lines like “I felt a pang of guilt only a parent could know” and “I realized I was facing my own reluctance.” The emotional weight of the situation is spelled out rather than revealed through action or specific detail. While the core idea is relatable, the writing lacks distinctive imagery or memorable turns of phrase that would elevate it beyond the obvious. The final metaphor about “running up the charges to fill the space on my lighter bill” feels forced and doesn’t quite land effectively. The narrative maintains a consistent tone but remains in safe, conventional territory without taking any stylistic risks that might make it more compelling.
" + }, + { + "type": "table_caption", + "bbox": [ + 0.252, + 0.785, + 0.744, + 0.802 + ], + "angle": 0, + "content": "Table 6: Natural language rationale for P and R modes respectively" + }, + { + "type": "title", + "bbox": [ + 0.173, + 0.828, + 0.283, + 0.843 + ], + "angle": 0, + "content": "A.4 Datasets" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.854, + 0.828, + 0.927 + ], + "angle": 0, + "content": "Art or Artifice In prior work, Chakrabarty et al. (2024a) evaluate writing quality in flash fiction (1,500-2,500 words). The dataset includes 12 writing prompts based on New Yorker stories, each with four responses: the original story plus three LLM-generated versions from GPT-3.5, GPT-4 and Claude v1.3. Three expert annotators ranked all four stories for each prompt, with results aggregated into majority preferences for each story pair. From the 12" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "17" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.104, + 0.825, + 0.148 + ], + "angle": 0, + "content": "prompts and all possible response pairs (4C2), the dataset contains 144 preference samples (including both AB and BA orderings). \\(25\\%\\) are Human-AI comparisons, while \\(75\\%\\) are AI-AI comparisons." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.153, + 0.827, + 0.266 + ], + "angle": 0, + "content": "LAMP-test The LAMP corpus (Chakrabarty et al., 2024b) test set focuses on short-form creative writing (200-400 words), including fiction and non-fiction. It contains 201 triplets, each with a writing instruction and three responses: (1) AI-written, (2) AI-written+AI-edited, and (3) AI-written+Human-edited. Three professional writers ranked responses based on subjective preference, with results combined into a majority vote.
For each instruction, all 3 possible response pairs were evaluated, creating 1206 total samples (by duplicating each pair in AB and BA order). Of these, \\(33\\%\\) are AI-HumanAI comparisons, and \\(66\\%\\) are AI-AI comparisons." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.271, + 0.828, + 0.399 + ], + "angle": 0, + "content": "Style Mimic In recent work, Anonymous (2025) examined whether MFA students could mimic award-winning authors' styles. Specifically, 28 MFA students were first given 20 samples written by an award-winning author (such as Haruki Murakami, Yoko Ogawa, Percival Everett, Zadie Smith, Joan Didion), along with their style verbalized in text. They were then provided with a writing instruction to recreate an original paragraph from the author (typically 200-400 words) while imitating the style of the author to the best of their ability. This data includes 150 sample pairs (student imitation vs. original author response), with the original author's work implicitly preferred. All Style Mimic samples are Human-Human comparisons. Table 7 shows an example." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.403, + 0.829, + 0.558 + ], + "angle": 0, + "content": "Synthetic Mirror Prior work on AI-detection (Emi & Spero, 2024) introduced \"synthetic mirrors,\" a two-step approach to generate writing pairs with implicit preferences. First, an LLM creates a mirror prompt from a human-written sample, extracting a plot summary and structured features (tone, style, length). Second, this prompt produces a synthetic mirror: an AI-generated response resembling the original's content and features. We selected 280 paragraphs from New Yorker flash fiction by award-winning authors (such as Alice Munro, Jhumpa Lahiri, Annie Ernaux, etc.). After extracting the content and structured features, we devised our mirror prompts: Write a n word paragraph in the style of author in v voice given the content below.\\n plot.
We generated mirror responses using GPT-4o and Claude-3.5 Sonnet, creating 560 Human-AI pairs with implicit preference for author-written responses. The benchmark consists of 1120 total preference pairs (each duplicated in AB and BA order)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.563, + 0.829, + 0.703 + ], + "angle": 0, + "content": "LMArena LM Arena (Zheng et al., 2023) is an open platform for crowdsourced AI benchmarking. A recently released set of anonymized instructions with responses and preference judgments indicated that creative writing comprises \\(30\\%\\) of instructions, making it one of the three most common interaction types. From 100,000 creative writing samples, we filtered for (1) English content, (2) non-tied preferences, and (3) responses between 100-2,000 words. An initial inspection of the resulting 7,981 samples revealed that many didn't match strict creative writing definitions. We further filtered noisy samples using GPT-4o, resulting in 1,959 pairs. Due to LM Arena being larger in scale than other datasets in the benchmark, we do not include both order variants (AB/BA) in the dataset but ensure that the reference order is balanced within the dataset." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "18" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "image", + "bbox": [ + 0.278, + 0.103, + 0.71, + 0.192 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.201, + 0.828, + 0.232 + ], + "angle": 0, + "content": "Figure 6: Three-Step Editing Pipeline to improve the writing quality of a first draft by: identifying idiosyncrasies, generating rewrites, and implementing the edits."
+ }, + { + "type": "title", + "bbox": [ + 0.172, + 0.241, + 0.573, + 0.258 + ], + "angle": 0, + "content": "A.5 Writing Quality Benchmark Difficulty Analysis" + }, + { + "type": "image", + "bbox": [ + 0.184, + 0.29, + 0.446, + 0.42 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.269, + 0.423, + 0.437, + 0.435 + ], + "angle": 0, + "content": "Worse Writing Sample" + }, + { + "type": "image_footnote", + "bbox": [ + 0.277, + 0.438, + 0.436, + 0.45 + ], + "angle": 0, + "content": "Better Writing Sample" + }, + { + "type": "list", + "bbox": [ + 0.269, + 0.423, + 0.437, + 0.45 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.484, + 0.456, + 0.515 + ], + "angle": 0, + "content": "Figure 5: Gap Analysis of WQ datasets leveraging the WQRM-PR model." + }, + { + "type": "text", + "bbox": [ + 0.465, + 0.267, + 0.828, + 0.394 + ], + "angle": 0, + "content": "In order to understand the relative difficulty of the datasets within the WQ benchmark, we performed an analysis leveraging our trained WQRM model. For each sample (consisting of two writing samples with a known human preference), we computed the WQRM score for each sample, and compiled the result for each of the five datasets in WQRM. Figure 5 plots the average of the preferred vs. less-preferred scores on each dataset." + }, + { + "type": "text", + "bbox": [ + 0.465, + 0.399, + 0.829, + 0.539 + ], + "angle": 0, + "content": "This analysis allows us to make several observations. First, the average WQRM gap is directly proportional to model performance on the benchmark. The Synthetic Mirror dataset has the largest average gap according to WQRM-PR (2.4 on average), and we find that many models achieve very close to perfect performance \\((98\\% +)\\) on this dataset.
On the other hand, the gap (according to WQRM-PR) is very small on Style Mimic (0.12) and LMArena (0.02), which aligns with many models performing" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.539, + 0.827, + 0.665 + ], + "angle": 0, + "content": "at or very slightly above chance on these datasets. Second, the absolute scores for the low and high samples are indicative of the origin of the samples. Style Mimic is the only dataset to include Human-Human comparisons (both written by professionals), and the scores of both the worse and better writing samples are high (7.57 and 7.69). LMArena has a similarly small gap, but achieved with lower pair scores (5.99 and 6.02). Third, we find that the WQ dataset includes a mix of high-gap (easy) and low-gap datasets. For low-gap samples, the pair can consist of two lower-scoring samples (two AI-generated samples) or two high-scoring samples (two human-written samples). This confirms the breadth of evaluation included in the WQ benchmark, which is a primary objective of its design." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.669, + 0.827, + 0.741 + ], + "angle": 0, + "content": "We note that this analysis should be taken with a grain of salt: the WQRM-PR model is not a perfect score predictor, and is only a proxy for analysis, since true scores would require large-scale professional annotation (which is cost-prohibitive). But this analysis matches some expectations, and provides additional evidence of the proper calibration of the WQRM-PR model, and of the breadth of evaluation in the WQ benchmark."
+ }, + { + "type": "title", + "bbox": [ + 0.172, + 0.757, + 0.465, + 0.773 + ], + "angle": 0, + "content": "A.6 Example Human Mimic Samples" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.783, + 0.825, + 0.813 + ], + "angle": 0, + "content": "Table 7 shows an Expert-MFA contrast where both paragraphs are centered around the same semantic content and writing style" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.829, + 0.444, + 0.846 + ], + "angle": 0, + "content": "A.7 Example COT Editing Prompt" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.855, + 0.828, + 0.913 + ], + "angle": 0, + "content": "The prompt in Table 8 is generated automatically based on a sample from the LAMP dataset. An LLM is then finetuned on this prompt, effectively training it to function as a three-step editing pipeline that identifies problematic spans, rewrites the spans, and executes the edits into a final edited response." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "19" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.174, + 0.102, + 0.859, + 0.234 + ], + "angle": 0, + "content": "I watched my mother. It was March, and outside, the sun glinted off the sidewalks and the icy edges of the snow. It was Saint Patrick's Day and the nurses brought my mother a square block of green Jell-O that sat quivering on the table beside her. It was the last full day of her life, and my mother did not sleep, she did not wake. She held her eyes still and open. They were the bluest thing in the room, perhaps in all of Duluth. Bluer than the lake. They were the color of the sky on the best day of your life. My mother died fast but not all of a sudden. A slow-burning fire when flames disappear to smoke and then smoke to air. She never once closed her eyes. 
First they were bitter and then they were bewildered and then they changed again to something else, to a state that I have had, finally, to see as heroic. Blue, blue eyes. Daggers of blue wanting and wanting. To stay, to stay." + }, + { + "type": "text", + "bbox": [ + 0.174, + 0.236, + 0.856, + 0.398 + ], + "angle": 0, + "content": "My mother died on St. Patrick's Day. There was snow outside, and sunlight glanced off the ice and back into her hospital room. A cup of green Jell-O lingered untouched on its tray. My mother was unresponsive, but not quite asleep. Her eyes were open and vivid. There was an alertness in them still, but what she was looking at, I could not say. The bright blue of her irises was like a frozen lake, contrasting the dull wash of beige and gray shrouding the room. The sky lived inside my mother. It lived inside her then, too, even as we bided time together at the very end, knowing there was only one thing left for her to do. Her fading life was a fire flickering down to smoke—it would only last as long as there was something to burn. There was bitterness too in her eyes, then bewilderment, then a stoic, quiet heroism. A commitment to her own dignity. A promise to endure this final test, no matter how unfairly it had been imposed on her. Her eyes were so blue, my mother's eyes, a fierce blue, a frozen lake, a sheen of ice that refused to melt, even as the sun broke it apart." 
+ }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.407, + 0.827, + 0.438 + ], + "angle": 0, + "content": "Table 7: Imitation of Original Paragraph (Top Row) from Cheryl Strayed written by an MFA student" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.463, + 0.498, + 0.479 + ], + "angle": 0, + "content": "A.8 Expert Annotation Result Breakdown" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.489, + 0.827, + 0.52 + ], + "angle": 0, + "content": "In Figure 7, we present the results of the annotations from experts for each model (GPT-4o, Llama 3.1 70b) and writing domain (fiction, nonfiction, marketing)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.524, + 0.828, + 0.664 + ], + "angle": 0, + "content": "At a high level, the responses selected by the WQRM model (Best Edit) achieve the best average rank in all six conditions. However, the selection aligns more with expert preference (in other words, the preference is more pronounced) for the fiction domain (rather than nonfiction) and for GPT-4o responses (rather than Llama 3.1 70b). We posit that this is due to the distribution of training data for the WQRM model, which included a majority of fiction samples and did not include Llama-generated responses. However, the fact that preference is still observed on the other domains (including marketing, which differs widely from fiction writing) is encouraging. Improving the generalization of the WQRM further can be accomplished by collecting annotations in additional writing domains, which can be used to train an improved WQRM model." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.681, + 0.31, + 0.697 + ], + "angle": 0, + "content": "A.9 Comparison" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.707, + 0.826, + 0.751 + ], + "angle": 0, + "content": "Table 9 shows 3 different versions of the same paragraph: the First Draft along with two edited versions (Random Edit and Best Edit), each with its respective reward from WQRM. 
Experts rank this triplet as Best Edit > Random Edit > First Draft." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.767, + 0.434, + 0.782 + ], + "angle": 0, + "content": "A.10 Expert Annotation Interface" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.793, + 0.825, + 0.823 + ], + "angle": 0, + "content": "Figure 8 shows the annotation interface that is provided to experts. They read 3 responses and rank them based on overall quality." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "20" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.187, + 0.167, + 0.812, + 0.226 + ], + "angle": 0, + "content": "You are given a paragraph of creative writing. Your task is to improve the quality of the writing. You must identify specific spans that can be improved, then propose rewriting for each identified span, and finally return the entire paragraph with the proposed changes implemented." + }, + { + "type": "text", + "bbox": [ + 0.188, + 0.251, + 0.469, + 0.268 + ], + "angle": 0, + "content": "Here is the paragraph you are editing:" + }, + { + "type": "text", + "bbox": [ + 0.187, + 0.271, + 0.814, + 0.445 + ], + "angle": 0, + "content": "The room was dimly lit, with the soft hum of machinery filling the silence. I sat beside Lila, squeezing her hand, as the technician swirled the wand over her belly. The screen flickered to life, a grainy black and white, like an ancient TV trying to find it's signal. Slowly, an image began to form; the unmistakable curve of a tiny head, the flutter of something I later learned was a heartbeat. I leaned forward, almost holding my breath, mesmerized by the play of light and shadows that composed this new life. 
Lila's grip tightened, and I glanced at her, finding her eyes glossy, a smile tugging at the corners of her mouth. I turned back to the screen, emotions swirling—awe, fear, hope. In that moment, the abstract idea of our baby transformed into something real, tangible. The room's dimness seemed to fold around us, making space for this silent, wordless connection. There, in the ghostly shapes on the monitor, I saw not just a child, but a future—a future that felt both incredibly close and impossibly distant." + }, + { + "type": "text", + "bbox": [ + 0.188, + 0.469, + 0.465, + 0.487 + ], + "angle": 0, + "content": "Part 1: Identifying Problematic Spans" + }, + { + "type": "text", + "bbox": [ + 0.187, + 0.51, + 0.811, + 0.542 + ], + "angle": 0, + "content": "Span 1: 'The room was dimly lit, with the soft hum of machinery filling the silence' (Category: 'Awkward Word Choice and Phrasing')" + }, + { + "type": "text", + "bbox": [ + 0.188, + 0.544, + 0.81, + 0.576 + ], + "angle": 0, + "content": "Span 2: 'I sat beside Lila, squeezing her hand, as the technician swirled the wand over her belly' (Category: 'Lack of Specificity and Detail')" + }, + { + "type": "text", + "bbox": [ + 0.188, + 0.601, + 0.559, + 0.617 + ], + "angle": 0, + "content": "Part 2: Proposing Rewriting for Problematic Spans" + }, + { + "type": "text", + "bbox": [ + 0.187, + 0.642, + 0.808, + 0.673 + ], + "angle": 0, + "content": "Span 1: 'The room was dimly lit, with the soft hum of machinery filling the silence' \\(\\rightarrow\\) 'The soft hum of machinery filled the room with an unfamiliar yet gentle melody.'" + }, + { + "type": "text", + "bbox": [ + 0.187, + 0.676, + 0.81, + 0.72 + ], + "angle": 0, + "content": "Span 2: 'I sat beside Lila, squeezing her hand, as the technician swirled the wand over her belly' \\(\\rightarrow\\) 'I sat beside Lila, squeezing her hand as she lay on the table while the technician swirled the wand over her rounded belly.'" + }, + { + "type": "text", + "bbox": [ + 
0.188, + 0.745, + 0.462, + 0.762 + ], + "angle": 0, + "content": "Part 3: Implementing Proposed Edits" + }, + { + "type": "text", + "bbox": [ + 0.188, + 0.787, + 0.468, + 0.804 + ], + "angle": 0, + "content": "((Written Response with Edits Executed))" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.816, + 0.828, + 0.861 + ], + "angle": 0, + "content": "Table 8: Example COT editing instruction based on a sample from the LAMP dataset. The model is instructed to: (1) identify problematic spans, (2) rewrite the spans to improve writing, (3) execute all edits into the edited response." + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.948, + 0.508, + 0.96 + ], + "angle": 0, + "content": "21" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "image", + "bbox": [ + 0.188, + 0.103, + 0.391, + 0.207 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.399, + 0.104, + 0.603, + 0.208 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.608, + 0.104, + 0.813, + 0.207 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.187, + 0.226, + 0.391, + 0.33 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.399, + 0.226, + 0.602, + 0.33 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.61, + 0.226, + 0.813, + 0.33 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.356, + 0.828, + 0.414 + ], + "angle": 0, + "content": "Figure 7: Breakdown of results of the expert annotation we conducted for each of the three domains (fiction, nonfiction, marketing) and the two models (GPT-4o, Llama 3.1 70b). Overall, WQRM selection was most aligned with expert preference in the Fiction domain, and for GPT-4o generations." 
+ }, + { + "type": "title", + "bbox": [ + 0.187, + 0.44, + 0.379, + 0.456 + ], + "angle": 0, + "content": "Writing Sample Judgment" + }, + { + "type": "title", + "bbox": [ + 0.187, + 0.463, + 0.297, + 0.476 + ], + "angle": 0, + "content": "Writing Instruction:" + }, + { + "type": "text", + "bbox": [ + 0.187, + 0.478, + 0.726, + 0.487 + ], + "angle": 0, + "content": "A daughter watches their mother during her final day of life that coincides with St Patricks Day in a Duluth hospital, observing her persistently open blue eyes as she transitions from life to death with a fierce desire to remain alive." + }, + { + "type": "title", + "bbox": [ + 0.194, + 0.514, + 0.462, + 0.526 + ], + "angle": 0, + "content": "Please rank these writing samples from most to least preferred:" + }, + { + "type": "title", + "bbox": [ + 0.192, + 0.529, + 0.22, + 0.535 + ], + "angle": 0, + "content": "Sample 1" + }, + { + "type": "text", + "bbox": [ + 0.194, + 0.542, + 0.391, + 0.628 + ], + "angle": 0, + "content": "In the hushed corridors of the Duluth hospital, the bustle of St. Patrick's Day celebrations outside felt worlds away. I sat by my mother's bedside, the synthetic hum of medical machines filling the space between us. Her blue eyes, usually so full of life and vibrancy, were steady and intense, flicking to me with a sharpness that defied her frail body. She wanted more—more of what? More moments? More breaths? Her desire was palpable, and knowing what to do with it felt like holding fragile glass shards. How to speak? What to say? Inside me, everything felt cramped, too many things living at once without space. From the window behind us, I saw the darkness become green with celebration in reflection, and yet I continued to look at her. Her eyes. Her eyes were everything to me now. In those final hours, her gaze cemented in me her love, resilience, and bravery. The world outside no longer mattered." 
+ }, + { + "type": "title", + "bbox": [ + 0.193, + 0.649, + 0.228, + 0.655 + ], + "angle": 0, + "content": "Rank Sample 1" + }, + { + "type": "text", + "bbox": [ + 0.194, + 0.657, + 0.276, + 0.664 + ], + "angle": 0, + "content": "Most Preferred Writing Option" + }, + { + "type": "text", + "bbox": [ + 0.194, + 0.665, + 0.277, + 0.671 + ], + "angle": 0, + "content": "Second Favorite Writing Option" + }, + { + "type": "text", + "bbox": [ + 0.194, + 0.672, + 0.276, + 0.678 + ], + "angle": 0, + "content": "Least Preferred Writing Option" + }, + { + "type": "list", + "bbox": [ + 0.194, + 0.657, + 0.277, + 0.678 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.198, + 0.685, + 0.242, + 0.691 + ], + "angle": 0, + "content": "Submit Rankings" + }, + { + "type": "title", + "bbox": [ + 0.399, + 0.529, + 0.424, + 0.535 + ], + "angle": 0, + "content": "Sample 2" + }, + { + "type": "text", + "bbox": [ + 0.4, + 0.542, + 0.597, + 0.628 + ], + "angle": 0, + "content": "In the hushed corridors of the Duluth hospital, the bustle of St. Patrick's Day celebrations outside felt worlds away. I sat by my mother's bedside, the synthetic hum of medical machines filling the space between us. Her blue eyes, usually so full of life and vibrancy, were steady and intense, flicking to me with a sharpness that defied her frail body. It was as if she was silently insisting on one more moment, one more breath. Her desire to stay with me was palpable, wrapping us both in a fragile embrace. I wanted to speak, to reassure her, but the words felt caught in the back of my throat, tangled with emotions I wasn't ready to unpack. The world outside turned shades of green in celebration, yet inside, my focus was drawn entirely to the fierce resolve in her gaze. In those final hours, her eyes told stories of love, resilience, and an unwavering fight to anchor herself in this world just a little longer." 
+ }, + { + "type": "title", + "bbox": [ + 0.399, + 0.649, + 0.432, + 0.655 + ], + "angle": 0, + "content": "Rank Sample 2" + }, + { + "type": "text", + "bbox": [ + 0.4, + 0.657, + 0.481, + 0.664 + ], + "angle": 0, + "content": "Most Preferred Writing Option" + }, + { + "type": "text", + "bbox": [ + 0.4, + 0.665, + 0.483, + 0.671 + ], + "angle": 0, + "content": "Second Favorite Writing Option" + }, + { + "type": "text", + "bbox": [ + 0.4, + 0.672, + 0.481, + 0.678 + ], + "angle": 0, + "content": "Least Preferred Writing Option" + }, + { + "type": "list", + "bbox": [ + 0.4, + 0.657, + 0.483, + 0.678 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.605, + 0.529, + 0.63, + 0.535 + ], + "angle": 0, + "content": "Sample 3" + }, + { + "type": "text", + "bbox": [ + 0.606, + 0.543, + 0.801, + 0.614 + ], + "angle": 0, + "content": "In the corridors of the Duluth hospital, it was St. Patrick's Day, but all the bustle and noise outside felt worlds away. I sat by my mother's bedside. The hum of the machines filled the silence between us. Her blue eyes flicked to me with an intensity that defied her frail body. She was silently insisting on one more moment, one more breath. Her desire to stay with me was almost tangible. I wanted to speak, to reassure her, but the words felt caught in the back of my throat, tangled. The world outside turned in festive shades of green in celebration, yet inside, my focus was drawn entirely to the fierce resolve in her gaze. Those final hours, the love we shared, her resilience, and her fight to stay tethered to our world remain imprinted on my mind to this day." 
+ }, + { + "type": "title", + "bbox": [ + 0.605, + 0.651, + 0.638, + 0.656 + ], + "angle": 0, + "content": "Rank Sample 3" + }, + { + "type": "text", + "bbox": [ + 0.605, + 0.657, + 0.686, + 0.664 + ], + "angle": 0, + "content": "Most Preferred Writing Option" + }, + { + "type": "text", + "bbox": [ + 0.605, + 0.665, + 0.687, + 0.671 + ], + "angle": 0, + "content": "Second Favorite Writing Option" + }, + { + "type": "text", + "bbox": [ + 0.605, + 0.672, + 0.687, + 0.678 + ], + "angle": 0, + "content": "Least Preferred Writing Option" + }, + { + "type": "list", + "bbox": [ + 0.605, + 0.657, + 0.687, + 0.678 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.388, + 0.713, + 0.61, + 0.728 + ], + "angle": 0, + "content": "Figure 8: Annotation interface" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.743, + 0.741, + 0.76 + ], + "angle": 0, + "content": "A.11 Better Calibrated WQRM model for Content and Quality Experiment" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.77, + 0.828, + 0.899 + ], + "angle": 0, + "content": "Since WQRM was only trained on samples from LAMP, which consists of AI-generated paragraphs edited by MFA students, it doesn't fully know how to reward higher-quality human writing. For this purpose, we added 100 paragraphs written by 5 award-winning authors (20 each) to our training data. We chose 5 authors who were part of the Style Mimic data. Each paragraph written by an award-winning author was assigned a score of 10.0. Even within writing from trained professionals, there is significant variability. To address this, we source an additional 80 independent paragraphs written by MFA students and published in prestigious literary magazines such as Electric Lit, Joyland, and the Paris Review, and add them to our training data. Each paragraph written by an MFA student was assigned a score of \\(7.5^{7}\\)." 
+ }, + { + "type": "page_footnote", + "bbox": [ + 0.191, + 0.91, + 0.766, + 0.926 + ], + "angle": 0, + "content": "This was a design decision where 5 is average and 10 is the best, and 7.5 is a mid-point." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "22" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.519, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.174, + 0.103, + 0.896, + 0.34 + ], + "angle": 0, + "content": "At 22, I handed my first completed novel to my father, not fully aware of the meaning it held for both of us. He took it, eyes softening as he turned the pages, lingering just enough to let me know he saw more than words. Then came the moment that stayed with me: an embrace that spoke volumes, a quiet prediction slipping past his lips, that one day I'd earn a place among the literary greats. Somehow, those words anchored themselves in my mind, reassuring in their sureness. Through the highs and lows of my writing career, his faith in me never wavered, echoing in my thoughts whenever doubt crept in. His death in 2002 marked an inevitable yet profound shift-suddenly, the world felt heavier. Sorting through his things, I found the suitcase he left me, an ordinary object now imbued with significance. Inside were scattered remnants of our shared past, but it was that embrace, and his words, that lingered strongest. He had seen something in me that I hadn't yet grasped myself, and for that, those moments serve as an enduring source of comfort and motivation. Years later, when I finally stood on stage accepting the literary prize he had foreseen, I wished he could have been there to witness it, though in a way he already had been all along. Dreams can be crowded with voices that spur us on or pull us back, but his was the one that guided me, quiet and steady. 
As I continue to write, the memory of my father's belief remains a compass, gently reminding me of where I've been and where I might yet go \"score\": 3.30" + }, + { + "type": "text", + "bbox": [ + 0.174, + 0.342, + 0.872, + 0.562 + ], + "angle": 0, + "content": "At 22, I handed my first completed novel to my father, and in that moment I became aware of the significance it held for us both. He took it and began to read, lingering just enough to let me know he saw more than words. Afterward, he embraced me and said one day I'd earn a place among the literary greats. Before then, he had said little about my writing, and these words anchored themselves in my mind, reassuring in their sureness. He had never said anything like it before, but he continued to echo that faith through the highs and lows of my career. His death in 2002 marked an inevitable yet profound shift. Suddenly the world felt heavier. Sorting through his things, I found the suitcase he left me, an ordinary object now imbued with significance. Inside were scattered remnants of our shared past, but it was that embrace and his words that lingered strongest. He had seen something in me that I hadn't yet grasped myself, and those moments served as an enduring source of comfort and motivation. Years later, when I finally stood on stage accepting the literary prize—the only prize—he had foreseen, I wished he could have been there to witness it. Dreams can be hostile to our hopes, but his was the one that guided me; his quietness was steady. Now, the memory of my father's belief remains a compass; I wish I could send him an update. \"score\": 4.43" + }, + { + "type": "text", + "bbox": [ + 0.174, + 0.566, + 0.856, + 0.723 + ], + "angle": 0, + "content": "At 22, I handed my first completed novel to my father, not fully prepared for what it might mean. He took it, eyes softening as he turned the pages, lingering long enough, I felt, to take in the feeling of things. 
Finally, we embraced, and he leaned back to say what I hadn't dared to hope—that one day I'd be among the literary greats. No matter how tough things got or how much death loomed over me, I was comforted by those words, almost sure of their truth. His death in 2002 brought with it an unwelcome heaviness. I found significance even in his old suitcase, which I kept, shuffling through it fondly. There were plenty of other mementos, too, but I'd always have the memory of that embrace, the words. Years later, when I finally stood on stage accepting the literary prize he'd foreseen, I wished he could have been there to witness it. Whatever noise came, whatever doubt, his voice led me quietly out of it. I swear I can still hear him now. \"score\": 6.84" + }, + { + "type": "text", + "bbox": [ + 0.174, + 0.735, + 0.825, + 0.764 + ], + "angle": 0, + "content": "Table 9: (a) First Draft (b) Random Edit (c) Best Edit along with their rewards assigned by WQRM." + }, + { + "type": "text", + "bbox": [ + 0.174, + 0.79, + 0.825, + 0.834 + ], + "angle": 0, + "content": "Publication at a venue already means these paragraphs have undergone scrutiny and are of decent quality. After adding these 180 samples to LAMP-PR training set, we retrained WQRM." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "23" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.519, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.406, + 0.851, + 0.593 + ], + "angle": 0, + "content": "This paragraph is written in the first person and revolves around a family Christmas gathering. The narrator reflects on how her father gave her a generous cash gift and invited her to Disney World with his new family. 
The narrator declined, fabricating an excuse about school, despite feeling the emotional distance growing between her, her father, and his new partner, Chitra. The narrator's half-sisters, Rupa and Piu, were upset by this decision, not understanding why she didn't want to join them. The narrator felt a sense of responsibility to uphold the memory of her late mother, just as Rupa and Piu symbolized their own father's legacy, while also sensing that both Chitra and her father were relieved by her decision to stay behind. The paragraph captures the emotional complexities of blended family dynamics, grief, and feelings of displacement during what should be a celebratory time." + }, + { + "type": "table_caption", + "bbox": [ + 0.402, + 0.604, + 0.597, + 0.619 + ], + "angle": 0, + "content": "Table 10: Detailed Content" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "24" + } + ] +] \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07532/a56e2d1f-04ce-46c0-9b86-0a610ecd5033_origin.pdf b/data/2025/2504_07xxx/2504.07532/a56e2d1f-04ce-46c0-9b86-0a610ecd5033_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d1133c5dda1af08ebab0abd5d978c038c7dc0578 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/a56e2d1f-04ce-46c0-9b86-0a610ecd5033_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8f2d215ab3b25b925c5f1344ea976e29e7fc50bb86864fd8fba7c826117d1cb +size 2133363 diff --git a/data/2025/2504_07xxx/2504.07532/full.md b/data/2025/2504_07xxx/2504.07532/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c4fe548eb3133973fdfdcb4eb56acb6ba7eb326d --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/full.md @@ -0,0 +1,420 @@ +# AI-Slop to AI-Polish? 
Aligning Language Models through Edit-Based Writing Rewards and Test-time Computation + +Tuhin Chakrabarty $^{1*}$ , Philippe Laban $^{2*}$ , Chien-Sheng Wu $^{1}$ + +$^{1}$ Salesforce AI Research $^{2}$ Microsoft Research + +{tuhin.chakr,wu.jason}@salesforce.com,plaban@microsoft.com + +# Abstract + +AI-generated text is proliferating across domains, from creative writing and journalism to marketing content and scientific articles. Models can follow user-provided instructions to generate coherent and grammatically correct outputs, but in this work, we study a more fundamental question: how do we evaluate and improve the writing quality of AI-generated text? Writing quality assessment has received less attention from the community, in part because it is fundamentally subjective and requires expertise. We first introduce the Writing Quality Benchmark (WQ) by consolidating five writing-preference datasets into 4,729 writing quality judgments. Our experiments show that most of the competitive baselines, including state-of-the-art LLMs that excel at reasoning tasks, barely outperform random baselines on WQ. We then train specialized Writing Quality Reward Models (WQRM) of various sizes for writing quality assessment that demonstrate strong generalization on four out-of-distribution test sets and $74\%$ accuracy on the WQ benchmark. To further show WQRM's practical benefits during inference, we leverage additional test-time compute to generate and rank multiple candidate revisions, allowing us to select higher-quality outputs from an initial draft. Human evaluation with 9 experienced writers confirms that WQRM-based selection produces writing samples preferred by experts $66\%$ overall, and $72.2\%$ when the reward gap is larger than 1 point. We release our datasets and models to encourage community engagement with writing quality assessment and development of AI writing systems better aligned with human preferences. 
+ +# 1 Introduction + +Writing is one of the most important pillars of education, enabling learners to critically engage with the topics they study. In *The Rise of Writing*, Brandt (2014) argues that the "information economy's insatiable demand for symbol manipulation—'knowledge work'—has forced many workers to reorient their labor around the production of prose" (Laquintano & Vee, 2024). Generative AI tools have further blurred these boundaries, especially around how labor and writing practices are evolving across both academic (Kobak et al., 2024; Lee et al., 2025) and professional contexts (Liang et al., 2025). Often awkward and jarring to read, low-effort text generated by AI is now flooding web browsers and social-media platforms much like spam in old inboxes (Herrman, 2024a; Knibbs, 2024c;d;b;a). This flood of low-effort text is often referred to by the neologistic term of revulsion "A.I. slop" (Herrman, 2024b). Extensive social experimentation with ChatGPT has invited criticism on social media and in popular news platforms that its writing has a disembodied "robovoice". This has led to humanization methods (Wang et al., 2024) and even start-ups such as StealthGPT or HumanizeAI, which explicitly attempt to make AI-generated text more humanlike. + +Despite LLMs showing impressive performance in math and coding, their ability to write high-quality text has been rather pedestrian. Recent work from Chakrabarty et al. (2024b) shows how text generated from widely used LLMs is often rife with clichés, purple prose, + +![](images/688e253e09a81a8610e16e1c055d305155309d9766016f4cc5afccb96ae6bc63.jpg) +Figure 1: Our three key contributions: (1) A new writing quality benchmark for creative writing evaluation, (2) Writing Quality Reward Models (WQRM) that perform strongly on this benchmark, and (3) Expert validation confirming WQRM aligns with professionals. + +poor sentence structure, and unnecessary exposition. This stems from several challenges. 
Unlike math or coding, writing lacks verifiable rewards. While it would be possible to train a model to write better text by having humans label examples of "good" and "bad" writing, it is challenging due to the required expertise. Self-evaluation using LLMs has proven useful in reward modeling and constitutional AI (Bai et al., 2022), but relying on uncalibrated humans or LLMs for feedback (Lee et al., 2023; Gao et al., 2024) on subjective tasks like writing can lead to reward hacking (Pan et al., 2024) and alignment issues. Recent work from Panickssery et al. (2024) shows the self-aggrandizing nature of LLMs, as evidenced in Table 3 where they prefer their own writing over Nobel Prize winners' work. For the purpose of this paper, we define good writing quality as writing that doesn't contain a disproportionate amount of peculiar words or phrases, has fewer clichés or hackneyed expressions, is not unnecessarily ornamental, and doesn't have an overly saccharine and polished tone or voice. + +The surge in AI writing assistance demands urgent alignment of AI-generated text with human preferences. Recent work from Gooding et al. (2025) shows how LLMs struggle to select high-quality writing actions as judged by human experts, often treating suboptimal and optimal interventions as equally acceptable. They highlight the need for models to better assess the quality and impact of suggested actions, both during generation and across multi-step refinement. Binary preference feedback between paired examples is the most common alignment method for LLMs (Christiano et al., 2017), but it has a significant drawback. The paired outputs may differ in several ways and could be equally poor in quality (Casper et al., 2023; Lambert & Calandra, 2023).1 Recent work from Chakrabarty et al. (2024b) shows how identifying and editing problematic response segments effectively improves AI alignment. 
This also reflects the Reviewing phase in the cognitive process model of writing (Hayes et al., 1987), where humans evaluate and revise text. They release LAMP (Language model Authored, Manually Polished), a corpus of 1,282 $\langle$AI-generated, Expert-Edited$\rangle$ pairs with implicit preference (edited > original draft) to improve AI writing (see Table 4 in Appendix A.1). Additionally, each paragraph pair includes normalized scores (1-10) reflecting writing quality before and after editing. + +Our work builds on LAMP data to train Writing Quality Reward Models (WQRM) across multiple model families using pairwise and scalar rewards. To evaluate WQRM, we introduce the Writing Quality Benchmark (WQ), consolidating five datasets that contrast Human-Human, Human-AI, and AI-AI writing pairs reflecting real-world applications. In addition to standard reward models, we also implement a teacher-student knowledge distillation approach, fine-tuning open-weight models (students) on LAMP with silver rationales generated from stronger LLMs (teachers) (Section 3). This framework enhances faithfulness and robustness by transferring reasoning abilities from powerful teachers to efficient students. Empirical results show our LAMP-trained reward models outperform proprietary LLMs like GPT-4o, o1 (OpenAI, 2024), open-weight models like DeepSeek-R1 (Guo et al., 2025), and competitive Reward-Bench models like Skywork-Reward (Liu et al., 2024). + +Next, we use expert edit interaction traces from LAMP data (Figure 6) to train a Chain-of-Thought editing model that identifies problematic spans, suggests edits, and combines them into a paragraph with improved writing (Section 5). 
Following recent work that leverages additional inference-time computation to improve LLM performance (Hosseini et al., 2024; Lightman et al., 2023; Wu et al., 2024; Ji et al., 2025; Snell et al., 2024), we employ best-of-N-sampling (Chow et al., 2024; Cobbe et al., 2021; Lightman et al., 2023) to select the best candidate from multiple edited paragraphs based on our reward model. Expert evaluation on LLM-generated responses based on writing instructions across fiction, nonfiction, and marketing confirms the correlation between expert judgment and our reward models. Experts and our best WQRM align in terms of preferences $66\%$ overall, and $72.2\%$ when the reward gap is larger than 1 point. Our results represent progress toward aligning LLMs with expert humans on subjective writing tasks, one of the most common use cases of AI (Handa et al.). As summarized in Figure 1: + +- We introduce the Writing Quality Benchmark (WQ) by consolidating five writing preference datasets and show how state-of-the-art LLMs and reward models perform close to random chance on writing quality assessment, +- We leverage implicit preference from edits to train competitive open weight reward models (WQRM) of different sizes for judging writing quality. Our reward models achieve top performance on the WQ benchmark, +- We use interaction traces from fine-grained expert edits to train an editing pipeline that improves writing quality. We further leverage additional test-time compute to generate and rank multiple edited paragraphs, allowing us to select higher-quality outputs from an initial draft based on our reward model. 
Evaluation with professionals confirms that the reward aligns with expert judgments and opens up possible avenues for improving alignment in AI-assisted writing.[2] + +# 2 Related Work + +Widespread adoption and Limitations of AI assistance in writing Large language models have rapidly transformed written communications across multiple sectors, with approximately $10 - 24\%$ of text in consumer complaints, corporate communications, job postings, and UN press releases being LLM-assisted by late 2024 (Liang et al., 2025). These adoption rates have stabilized after an initial surge following ChatGPT's release. Outside of technical writing LLMs are also being used for scientific (Liang et al., 2024; Gero et al., 2022) as well as creative writing (Chakrabarty et al., 2024c; Ippolito et al., 2022; Yuan et al., 2022; Mirowski et al., 2023; 2024). Aligning language models with human preferences (Ouyang et al., 2022) has enabled their integration into writing tools such as Google's WorkSpace Labs, Grammarly, and Sudowrite. Despite productivity gains in using AI for writing, several limitations remain with AI-generated text. Prior work (Chakrabarty et al., 2024a;c; Ippolito et al., 2022; Mirowski et al., 2023; Marco et al., 2024) has shown how AI-generated text is often rife with clichés, lacks nuance, subtext, and rhetorical complexity. Through use of syntactic templates Shaib et al. (2024) show the repetitiveness of AI-generated text in comparison to human-written references. More recently Russell et al. (2025) show that AI-generated text is most easily detectable by its characteristic vocabulary, followed by formulaic writing structures and lack of originality. Neither paraphrasing nor humanization effectively removes all of these signatures. + +Human-AI Alignment in Writing Recent work from Lee et al. (2024) highlights how LLMs have transformed the processes behind writing, establishing new criteria for future AI writing assistants. Anderson et al. 
(2024) and Laban et al. (2023) discovered that large language models assisted users in generating more detailed ideas. However, these studies also found that the outputs were less semantically distinct across different users (Padmakumar & He, 2023), and participants reported feeling diminished responsibility for the ideas they produced. In a similar vein, Li et al. (2024) explore people's attitudes toward AI writing assistants, finding that while many value and prefer AI assistance for creative tasks and productivity gains, this comes with potential drawbacks of reduced accountability and diversity in writing outcomes. Liu et al. (2025) introduce eRevise+RF, an automated writing evaluation system designed to assess student essay revisions and offer formative feedback. The system was deployed with 406 students across three schools, demonstrating effectiveness in evaluating evidence usage, identifying revisions, and determining revision success. Prior work from Pan et al. (2024) shows language models can enhance outputs through feedback. However, iterative self-refinement using another language model as evaluator may lead to reward hacking, where models exploit evaluator weaknesses. Chakrabarty et al. (2024b) show how LLMs across different model families share common writing idiosyncrasies and how automatically editing these idiosyncrasies improves alignment, based on a behavioral study with 12 writers.

Unlike prior work, which has focused on detecting or addressing issues in AI writing, our work introduces Writing Quality Reward Models (WQRMs) trained on expert edits that outperform state-of-the-art LLMs on a Writing Quality benchmark.

# 3 Writing Quality Reward Models

![](images/3ec93e4d1e723a8af4314b35f784d413ae1a9eefe64cfe4cb04e3f8df32e3b73.jpg)
Figure 2: Transforming LAMP annotations into classification and regression data points used during fine-tuning of WQRM models.
We rely on the LAMP (Language model Authored, Manually Polished) corpus from Chakrabarty et al. (2024b) to train reward models. As illustrated in Figure 2, each sample in LAMP consists of a writing instruction and two paragraphs that match this instruction. The paragraphs in LAMP range from 150 to 400 words and span fiction and non-fiction. Table 4 in Appendix A.1 shows a sample from LAMP, highlighting the edits implemented by an expert to improve writing quality. We use three methods to transform LAMP samples into training and validation data points for our models: pairwise (P), scalar (R), and combined (PR). With the P method, each data point presents two paragraphs as input (1 and 2) and requires a binary classification output indicating which paragraph has higher writing quality (i.e., the output is 1 or 2). Each LAMP sample is duplicated into two P data points by considering both paragraph orders (AI-generated, Expert-Edited $\rightarrow$ 2) and (Expert-Edited, AI-generated $\rightarrow$ 1). With the R method, each data point takes a single paragraph as input and outputs a regression value predicting the quality score of that paragraph. Since each LAMP sample contains two paragraphs (before and after editing), it generates two R data points. The PR method combines both approaches, yielding four data points per LAMP sample (two from P and two from R). There are a total of 1,282 samples in LAMP, and we follow the authors' splits of 1,000 training, 67 validation, and 215 test samples. Applying the data transformations described above, the P, R, and PR variants of the training data consist of 2,000, 2,000, and 4,000 training data points, respectively. For our experiments, we trained both generative LLMs (Llama3.1 (Dubey et al., 2024)) and encoder-only models (ModernBERT (Warner et al., 2024)).
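The three data transformations above can be sketched as follows. This is a minimal sketch under assumed field names (`ai_draft`, `expert_edit`) and hypothetical per-paragraph quality scores; it does not reproduce the actual LAMP schema.

```python
def to_pairwise(sample):
    """P method: each LAMP sample yields two data points, one per paragraph order."""
    return [
        {"input": (sample["ai_draft"], sample["expert_edit"]), "label": 2},
        {"input": (sample["expert_edit"], sample["ai_draft"]), "label": 1},
    ]

def to_scalar(sample):
    """R method: each LAMP sample yields two data points, one per paragraph."""
    return [
        {"input": sample["ai_draft"], "score": sample["draft_score"]},
        {"input": sample["expert_edit"], "score": sample["edit_score"]},
    ]

def to_combined(sample):
    """PR method: both views combined, four data points per sample."""
    return to_pairwise(sample) + to_scalar(sample)

# Illustrative sample; field names and 1-10 scores are assumptions.
sample = {
    "instruction": "Write a paragraph about a storm at sea.",
    "ai_draft": "The waves crashed like thunder ...",
    "expert_edit": "The swell came in long gray shoulders ...",
    "draft_score": 4.0,
    "edit_score": 7.5,
}
```

Applying `to_combined` to the 1,000 training samples would yield the 4,000 PR data points mentioned above.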
Encoder-Only WQRM We follow the standard approach introduced in the original BERT paper (Devlin et al., 2019) to add and finetune two task-specific heads on a ModernBERT-Large model (Warner et al., 2024). The input data points contain either one paragraph (for R data points) or two paragraphs (for P data points), which are encoded jointly with a pre-defined separator token when needed. For each paragraph, we compute a "paragraph vector" by pooling the last layer's activations across all tokens in that paragraph. These paragraph vectors serve as input to either a regression (R) or classification (P) head. The regression head transforms the vector through a learned linear projection from the model's inner dimension to a scalar, followed by a scaled sigmoid to align with the 1-10 score range. The classification head is parameter-free, using a cosine similarity operation between the two paragraph vectors. We use a mean-squared error loss for R data points and a cross-entropy loss for P data points. Following convention for encoder-only models, we finetune the entire model's weights (Devlin et al., 2019). We selected ModernBERT-Large, the largest available model, for our experiments. We fine-tuned three variants: MBERT-WQRM-P, MBERT-WQRM-R, and MBERT-WQRM-PR, each on the corresponding data variant. Hyperparameters, including the learning rate and number of epochs, were optimized by minimizing validation loss. PR models can be used in either P- or R-mode at test time. Initial evaluation indicated that PR models achieve higher performance in R-mode, and as such we used all PR models in R-mode by default during evaluation.

Generative WQRM We finetune generative transformer architectures by converting the classification and regression tasks to sequence-to-sequence problems using a JSON output format (Table 5). We employ QLoRA (Dettmers et al., 2023) parameter-efficient tuning with FSDP (Zhao et al., 2023) and a cross-entropy loss.
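As an illustration of the sequence-to-sequence conversion, a P and an R data point might be serialized as follows; the prompt wording and JSON keys here are hypothetical stand-ins for the actual templates given in Table 5.

```python
import json

def serialize_pairwise(instruction, para1, para2, label):
    """Sequence-to-sequence form of a P data point (hypothetical template)."""
    src = (
        f"Instruction: {instruction}\n\n"
        f"Paragraph 1: {para1}\n\n"
        f"Paragraph 2: {para2}\n\n"
        "Which paragraph has higher writing quality? Respond in JSON."
    )
    tgt = json.dumps({"preferred": label})  # e.g. '{"preferred": 2}'
    return src, tgt

def serialize_scalar(instruction, para, score):
    """Sequence-to-sequence form of an R data point (hypothetical template)."""
    src = (
        f"Instruction: {instruction}\n\n"
        f"Paragraph: {para}\n\n"
        "Score the writing quality from 1 to 10. Respond in JSON."
    )
    tgt = json.dumps({"score": score})  # e.g. '{"score": 7.5}'
    return src, tgt

src, tgt = serialize_pairwise("Write about dusk.", "Draft A ...", "Draft B ...", 2)
```

The JSON target keeps outputs machine-parseable, so the fine-tuned model's prediction can be recovered with a single `json.loads` call.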
Generative methods can produce natural-language rationales alongside predictions for interpretability. Wiegrefe et al. (2020) demonstrated that label-rationale association is essential for response faithfulness, while Ludan et al. (2023) and Hase & Bansal (2021) argued for incorporating explanations in model input/output to improve robustness against spurious cues. Since LAMP lacks expert rationales, we augment it with LLM-generated silver rationales. We collected five examples from professional writers showing either paragraph strength contrasts (P-style) or holistic critiques/praise (R-style), instructing them to cite specific excerpts. These expert rationales serve as demonstrations for Claude 3.5 Sonnet to generate rationales (examples in Table 6, Appendix A.3).

The rationale augmentation is then used in two variants, either providing the rationale in the input $(\mathrm{IR}\rightarrow \mathrm{O})$ or requiring the generative model to produce the rationale as part of its output $(\mathrm{I}\rightarrow \mathrm{RO})$. We note that rationales are not available at test time and are only included during training as an augmentation technique. We finetune a total of seven variants, all based on the Llama 3.1 70B model: Llama-WQRM-P, Llama-WQRM-R, Llama-WQRM-PR, Llama-WQRM-P-IR $\rightarrow \mathrm{O}$, Llama-WQRM-P-I $\rightarrow \mathrm{RO}$, Llama-WQRM-PR-IR $\rightarrow \mathrm{O}$, and Llama-WQRM-PR-I $\rightarrow \mathrm{RO}$, based on different versions of the training data, and tune hyperparameters by minimizing validation loss.

# 4 The Writing Quality Benchmark
| Dataset | Pair Origin | Annotator | Len | N |
| --- | --- | --- | --- | --- |
| Art or Artifice | AI-AI / AI-Human | Expert | 1.5-3k | 144 |
| LAMP-test | AI-AI / AI-Human | Expert | 200-400 | 1,206 |
| Style Mimic | Human-Human | Expert | 200-400 | 300 |
| Synth. Mirror | AI-Human | Expert | 200-400 | 1,120 |
| LM Arena | AI-AI | Crowd | 200-2.5k | 1,959 |
Table 1: Writing Quality benchmark composition. Pair Origin: whether evaluated pairs are AI-generated, human-written, or mixed; Len: #words in evaluated responses; N: total evaluation pairs contributed to the benchmark.

We create the first benchmark centered on the task of writing quality assessment by collecting five relevant datasets and standardizing their data formats into a pairwise preference task. Each task in the benchmark consists of a writing instruction and two writing responses, with a binary label indicating which of the two responses has higher writing quality. Table 1 lists the five datasets we selected for the benchmark, along with key properties of each dataset that lead to a comprehensive benchmark for writing quality. We include three datasets that involve AI-AI comparisons (Art or Artifice (Chakrabarty et al., 2024a), LAMP-test (Chakrabarty et al., 2024b), and LM Arena (Zheng et al., 2023)), three that involve AI-Human comparisons (Art or Artifice, LAMP-test, and Synthetic Mirror), and one that involves Human-Human comparisons (Style Mimic) (Anonymous, 2025). This diversity ensures that models that perform well on the benchmark can judge writing quality regardless of whether a response was LLM-generated or human-written.

To assess writing quality, prior work has argued for evaluation by professionals (i.e., people with writing experience). Nevertheless, some writing quality preference datasets are based on crowd-sourced judgments. We include four datasets based on expert judgments and one dataset based on crowd-sourced annotation (LM Arena) to represent both perspectives in the benchmark. Finally, we selected two datasets with long responses (Art or Artifice, LM Arena) and three with shorter responses ranging from 200-400 words, ensuring that models that perform well on the benchmark are capable of judging writing quality irrespective of length. Appendix A.4 details the procedure we followed to extract and standardize each dataset.
Appendix A.5 provides an analysis we conducted on the relative difficulty of each dataset in the benchmark, finding that the five selected datasets provide a breadth of coverage in terms of difficulty. + +Writing Quality Benchmark + +
| Model | Synthetic Mirror | Art or Artifice | LAMP | Style Mimic | LM Arena | Overall (↑) |
| --- | --- | --- | --- | --- | --- | --- |
| MBERT-WQRM-PR | 99.8 | 80.6 | 72.6 | 67.3 | 51.0 | 74.3 |
| MBERT-WQRM-R | 100.0 | 80.6 | 76.1 | 59.3 | 51.0 | 73.4 |
| MBERT-WQRM-P | 99.5 | 54.2 | 71.2 | 67.0 | 46.8 | 67.7 |
| Llama3.1 - P - IR → O | 100.0 | 80.5 | 74.9 | 43.0 | 52.8 | 70.2 |
| Llama3.1 - PR - IR → O | 99.6 | 69.4 | 73.7 | 54.3 | 50.1 | 69.4 |
| Llama3.1 - PR - I → RO | 99.1 | 76.3 | 71.7 | 42.6 | 55.2 | 68.9 |
| Llama3.1 - P - I → RO | 99.9 | 75.1 | 74.1 | 38.6 | 49.1 | 67.3 |
| Llama3.1 (70b) - PR | 94.8 | 52.0 | 71.3 | 40.6 | 44.3 | 60.6 |
| Llama3.1 (70b) - P | 88.1 | 45.1 | 71.7 | 35.6 | 47.7 | 57.6 |
| Llama3.1 (70b) - R | 44.8 | 50.0 | 40.3 | 50.0 | 54.3 | 47.9 |
| Pangram | 100.0 | 72.6 | 56.5 | 47.3 | 48.4 | 65.0 |
| O3 | 67.7 | 85.4 | 41.4 | 67.5 | 59.6 | 64.3 |
| Skywork-8B-v0.2 | 90.3 | 68.1 | 54.2 | 34.0 | 55.8 | 60.5 |
| GPT-4o (5FS) | 39.5 | 68.8 | 40.3 | 67.3 | 55.5 | 54.3 |
| O1 | 25.8 | 67.4 | 39.8 | 68.7 | 56.7 | 51.7 |
| DeepSeek-r1 | 31.5 | 54.9 | 39.2 | 47.3 | 57.0 | 46.0 |
| GPT-4o | 7.5 | 56.2 | 37.8 | 47.7 | 55.4 | 40.9 |
Table 2: Writing Quality Benchmark results. We evaluate zero-shot and few-shot LLMs, generic reward models, AI-detection models, and our fine-tuned models.

# 4.1 Experimental Results on WQ

Our experiments on the WQ benchmark include four classes of models. First, zero-shot (ZS) and few-shot (FS) methods with top-performing instruction-tuned LLMs; we include both non-reasoning (GPT-4o) and reasoning models (DeepSeek-R1, O1). Second, a top-performing generic reward model, SkyWork-8b-v0.2, based on results on the RewardBench leaderboard (Lambert et al., 2024). Third, the Pangram AI-detector$^4$, accessed through its API. Finally, the trained WQRM models in generative and encoder-only settings as described in Section 3. Models that can produce pairwise judgments (such as SkyWork or WQRM-P models) were used as-is; for models that produce scalar rewards (WQRM-R, Pangram), a scalar reward was computed for each response, and an inequality was applied to emit a pairwise preference. Scalar rewards can theoretically lead to a tie (a score difference of less than an epsilon such as 0.001), but we observe few of these in practice (less than $0.1\%$ of pairs) and resolve them randomly.

Experimental results are summarized in Table 2. First, we find that all the LLMs used in zero-shot settings perform below or only a few percentage points above the random baseline of $50\%$. Performance is particularly low on portions of WQ that involve AI-human preference pairs, confirming prior findings that LLMs used in LLM-as-a-judge settings tend to prefer AI generations over human writing (Panickssery et al., 2024). The O1 and R1 reasoning models do not significantly outperform their non-reasoning counterparts, indicating that out-of-the-box CoT-style reasoning, while useful for math or coding tasks, does not improve writing quality assessment. O3 shows improvement on Synthetic Mirror and Art or Artifice, showing some promise.
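The scalar-to-pairwise conversion described in the setup above can be sketched as follows; the scoring function here is a toy stand-in for illustration, not WQRM itself.

```python
import random

def pairwise_from_scalar(score_fn, resp1, resp2, eps=0.001, rng=random):
    """Turn a scalar reward model into a pairwise judge: score both
    responses and prefer the higher one; score gaps below eps count
    as a tie and are resolved randomly (rare in practice, <0.1% of pairs)."""
    s1, s2 = score_fn(resp1), score_fn(resp2)
    if abs(s1 - s2) < eps:
        return rng.choice([1, 2])  # tie: resolve randomly
    return 1 if s1 > s2 else 2

# Toy scoring function (average word length), purely for illustration.
toy_score = lambda text: sum(len(w) for w in text.split()) / len(text.split())
pref = pairwise_from_scalar(toy_score, "short words here",
                            "unquestionably elaborate phrasing")
```

Any model that emits a single quality score per response (WQRM-R, Pangram) can be plugged in as `score_fn` to participate in the pairwise benchmark.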
Finally, adding five few-shot examples to GPT-4o does improve performance from 40.9 to 54.3; however, further experiments with additional in-context examples did not lead to further gains, confirming that few-shot examples in the instruction are not sufficient to achieve strong performance on WQ.

The generic reward model, Skywork-8b-v0.2, achieves an overall accuracy of 60.5, with strong performance on Synthetic Mirror and Art or Artifice. Though better than random, this overall performance is much lower than the $93\%$ the model achieves on RewardBench, indicating that reward models geared toward instruction-following evaluation are not effective at writing quality assessment out-of-the-box.

The Pangram AI detection system achieves a total performance of $65.0\%$, the top performance among untrained models. Pangram achieves near-perfect performance on Synthetic Mirror and the AI-Human pairs of Art or Artifice. On samples that do not involve distinguishing between AI and human text, Pangram achieves near-random performance. In other words, AI-detection tools only correlate with writing quality assessment when an AI-generated text is judged to be worse than human-written text.

Finally, the trained WQRM models achieve top performance on the benchmark. The Llama-based models achieve their strongest performance in the $\mathrm{IR} \rightarrow \mathrm{O}$ setting, confirming that augmenting the training data with rationales is beneficial and yields models that can generate rationales alongside their predictions. The ModernBERT-based models achieve the highest overall accuracy of $74.3\%$, with the PR variant outperforming the P and R models, indicating that pairwise and reward-based training can be complementary. While it is surprising that a smaller model outperforms Llama3.1-70B, this could be due to parameter-efficient tuning or the way the loss function is optimized; future work can focus on bridging this gap.
We observe that generative WQRM models perform best in P-mode, whereas encoder models perform best in R-mode. We offer a hypothesis for this reversal, related to the choice of loss. The generative models (Llama) are trained with a sequence-to-sequence loss, whereas the encoder-only models (MBERT) are trained with custom losses (pairwise classification for P, mean-squared error for R). In other words, Llama training on the reward-based data is closer to 10-way classification than to actual score regression, whereas the MBERT training makes better use of the reward-based data. This leads the MBERT-R models to outperform MBERT-P models, whereas the reverse is true for the Llama models, as they are not able to properly take advantage of the R-based data.

Looking at performance on individual datasets, Synthetic Mirror is the easiest dataset, with eight models achieving near-perfect performance. Some models achieve $80\%+$ performance on Art or Artifice, indicating that long-context evaluation is challenging but achievable. Style Mimic and LM Arena are the most challenging in terms of accuracy. Style Mimic is likely challenging because it is the only dataset whose comparisons do not involve AI-generated text, but rather two relatively high-quality human-written candidates. LM Arena is challenging for all systems, with top performance of $57\%$ by DeepSeek-R1. This low performance could be due to the crowd-sourced nature of LM Arena, with the dataset representing much broader and potentially noisier judgments. Though our trained WQRM models outperform baselines by roughly 10 percentage points overall, there remains wide room for improvement: writing quality assessment remains an open challenge for the community. Additional analysis in upcoming sections refers to the top-performing model, MBERT-WQRM-PR, simply as WQRM.
# 5 Editing Pipeline with Test-Time Compute

To better understand the practical value of the WQRM model, we integrate it into a text-editing pipeline to produce LLM-generated candidates of higher quality according to WQRM scores. We first introduce the editing pipeline and candidate generation procedure, and then describe the large-scale preference annotation we conducted with professional writers to validate WQRM as part of an editing pipeline.

# 5.1 Generating edits via Supervised Finetuning

Prior work from Chakrabarty et al. (2024b) shows experimentally that LLMs' text idiosyncrasies (clichés, redundancy, lack of subtext, etc.) can be mitigated through self-editing in an in-context setup. Motivated by this, we teach LLMs how to improve their responses via edits. Figure 6 illustrates the three components of the editing pipeline. Given a first draft response to an instruction from any given LLM, the first step consists of identifying and listing idiosyncrasies: spans in the first draft that can be rephrased to improve overall writing quality. For each identified idiosyncrasy, a second stage consists of rewriting the span. This is framed as an executable edit (Laban et al., 2023), where each edit consists of replacing an original string in a draft with an improved version. The third step simply executes all edits (by applying a series of string-replace operations) to obtain the final edited draft. While Chakrabarty et al. (2024b) implemented this through prompt-chaining (Wu et al., 2022) with few-shot examples, we improve efficiency by supervised fine-tuning of GPT-4o and Llama3.1 70B on the entire LAMP training set. The training input consists of the first draft alongside the entire edit interaction trace (detect, rewrite, execute) in a step-by-step chain-of-thought prompt, and the output is the edited paragraph. See Appendix A.7 for an example CoT prompt.
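The execution step is a series of string replacements; a minimal sketch, with hypothetical edit spans illustrating a cliché rewrite (the occurrence-handling choice of replacing only the first match is an assumption, not specified in the pipeline description):

```python
def execute_edits(draft, edits):
    """Apply executable edits: each edit replaces an original span in the
    draft with an improved version (first occurrence only, an assumption)."""
    text = draft
    for edit in edits:
        text = text.replace(edit["original"], edit["rewrite"], 1)
    return text

draft = "Her heart pounded like a drum as the sun set like a painting."
edits = [  # hypothetical idiosyncrasy rewrites
    {"original": "pounded like a drum", "rewrite": "kept an uneven time"},
    {"original": "set like a painting", "rewrite": "dropped behind the grain elevator"},
]
edited = execute_edits(draft, edits)
```

Framing edits this way makes them verifiable: an edit whose `original` span no longer appears in the draft fails loudly rather than silently corrupting the text.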
# 5.2 Selecting an edited response by leveraging Test-Time Compute

Recent work from Snell et al. (2024) shows that test-time compute can be scaled optimally by using a reward model to search over the space of solutions. This approach typically involves generating multiple candidate responses and using a verifier to select an optimal response (Cobbe et al., 2021). The most popular technique for increasing test-time compute is best-of-N sampling, also known as rejection sampling, in which N candidates are generated independently, the reward model scores each candidate, and the top-scoring candidate is selected. While test-time scaling is effective for reasoning tasks, our work aims to measure whether it is a practical strategy for improving human-AI alignment on subjective tasks such as writing. Next, we describe the validation study with experts to measure how well calibrated our WQRMs are to human judgment and whether additional test-time computation leads to meaningful improvements in AI writing quality.

# 6 How well calibrated are our reward models?

We generated 100 draft responses (50 GPT-4o, 50 Llama3.1 70B) based on 90 writing instructions spanning three domains: literary fiction, non-fiction, and product marketing. For literary fiction and non-fiction, we create the instructions through instruction back-translation (Li et al., 2023) conditioned on expert-written paragraphs in Anonymous (2025) and news articles in the data from Russell et al. (2025). Marketing writing instructions were based on products recommended in Wirecutter articles across the Home, Kitchen, and Tech sections. The right portion of Figure 1 summarizes the process we follow to leverage test-time compute. Specifically, we obtain a first draft from an LLM (GPT-4o or Llama3.1 70B), draw $N = 20$ candidate edited responses from the respective SFT model (Section 5.1), and score each candidate with the WQRM model.
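This generate-and-score loop can be sketched as follows; `edit_model` and `wqrm_score` are toy stand-ins for the SFT editor and the WQRM reward model, not actual APIs.

```python
import random

def best_of_n(draft, edit_model, wqrm_score, n=20):
    """Best-of-N (rejection) sampling: draw n candidate edited responses
    independently and keep the top scorer under the reward model."""
    candidates = [edit_model(draft) for _ in range(n)]
    return max(candidates, key=wqrm_score)

# Toy stand-ins purely for illustration.
toy_editor = lambda d: d + " " + random.choice(["(v1)", "(v2)", "(v3)"])
toy_score = lambda text: len(text)  # stand-in reward
best = best_of_n("A first draft.", toy_editor, toy_score, n=5)
```

In the study, `n=20` candidates are drawn per draft, and the same `wqrm_score` values are later used to form the comparison triplets described next.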
We filter out any candidate that scores lower than the first draft, and then form response triplets by selecting the first draft, a randomly-selected edited response (random edit), and the best-of-N candidate response according to WQRM (Best Edit) (see example triplet in Table 9). We recruited 9 professional writers through mailing lists from top MFA programs in the US. They were asked to rank the three responses by overall quality (see Figure 8 for the interface). Each response triplet was annotated by three experts, whose rankings we aggregated into a majority rank. Participants completed annotation in batches of 10 triplets at a time and were paid $100 per batch.

# 6.1 Study Findings

Figure 3 summarizes findings from the expert annotation. In Figure 3a, we plot the distribution of rankings across all triplets. Best Edit candidates were most preferred overall with an average rank of 1.58, followed by random edit (2.09) and first draft (2.26). The breakdown of rankings across domains (fiction, non-fiction, marketing) or LLM (GPT-4o vs. Llama 3.1) is presented in Appendix A.8. In short, Best Edit achieves the top rank in all conditions, confirming the generalization of WQRM scores across conditions.

If the reward model is well-calibrated, the WQRM score gap between responses should indicate their qualitative difference. For example, responses scoring 4 and 6 should have a larger quality gap than those scoring 4 and 4.5. To inspect WQRM calibration, we computed the WQRM gap between all annotated response pairs and plotted it against expert annotation agreement. As shown in Figure 3b, the WQRM gap positively correlates with expert agreement: when responses differ by $\leq 0.5$ points, individual experts prefer the higher-scoring response only $55\%$ of the time. When the gap exceeds 3.0, this increases to $80\%$. Agreement with the majority rank based on three expert annotations (green line) shows an even stronger positive correlation. In short, we find evidence that WQRM is well-calibrated: a wider gap in scores between two responses is evidence that an expert (or group of experts) would be more likely to prefer the higher-scoring response over the lower-scoring one.

![](images/1db88ddb7c3d6e64e6370de4840630adc086036f36b58562c5aefb632bacc535.jpg)
(a) Expert Ranking Distribution

![](images/70c90c2fb2fa11ab7926ba0d130319324863f723979571d891269e6977f87c7f.jpg)
(b) Gap vs. Agreement

![](images/1bfddae7f9254bd920a3d04ad7dfbe9b3a91c4130fc9a7f98fbab4c9ad9d0f20.jpg)
(c) Sensitivity Analysis

Figure 3: Results and analysis of WQRM: (a) distribution of preference based on 300 expert triplet rankings, (b) calibration between gap in WQRM scores and matching expert preference, and (c) applying expert edits gradually to a draft leads to gradual reward gains.

![](images/16c263323e5750f59850badc6d20ea1146d6c779a721a6434008265c6a6a5153.jpg)
(a) Less content detail in writing prompt

![](images/9479f61330a0b135d11098f13649f4ebd8c3ea141f85b130c54586eaff7c65f7.jpg)
(b) More content detail in writing prompt

Figure 4: Writing quality analysis of human-written and LLM-generated texts according to WQRM with (a) less and (b) more content detail in the writing prompt. Prompts with less content detail average 30 words, whereas prompts with more content detail average 180.

Besides calibration, we analyze the sensitivity of the WQRM model to minor edits and their impact on writing quality. The LAMP dataset consists of drafts that are edited by expert writers to improve writing, with samples comprising eight edits per passage on average.
We implement a gradual version of the LAMP-test set, where each expert edit is first reversed, and we then execute the edits one at a time, computing the WQRM score at each intermediate step. Results from the gradual LAMP-test are summarized in Figure 3c: each time an additional edit is implemented, the median WQRM score increases by 0.2, even though WQRM was not trained on intermediate responses and only saw samples where no edits or all edits had been applied. In summary, we find evidence that minor edits to a response lead to small but significant changes in WQRM scores, indicative of a fine sensitivity of the reward model.

# 7 How does content affect writing quality?

Effectively judging writing quality impacts both understanding and improving LLM writing. Writing quality is, however, closely tied to content. It is known that LLMs struggle with novel ideas (content planning), making their writing appear trite. Even with detailed original content, they struggle to maintain good writing standards (avoiding clichés and purple prose, revealing subtext). To understand how content affects writing quality, we analyzed writing from several LLMs with and without detailed content. We used 50 writing instructions from the Style Mimic data, creating two variants: a 30-word prompt with less detail (e.g., "A family Christmas unfolds through emotional reflections on a father's new family, a daughter's excuse to stay behind, and the complex dynamics of grief and blended identities.") and a 150-200 word detailed prompt (Table 10 in Appendix). For each prompt, Style Mimic provides an original excerpt from an award-winning author and an MFA student's attempt to mimic that style. Each sample includes the detailed content used for Figure 4b.

Since WQRM was only trained on samples from LAMP, which consists of AI-generated paragraphs edited by MFA students, we retrained a better-calibrated reward model with a small number of fully human-written, high-quality texts (see Appendix A.11 for more details).
Figure 4a shows writing quality scores from the WQRM model when prompts lack detailed content. Award-winning authors achieve a median score of 8.9, while LLMs score 4.8-6.6 with much higher variance. Despite WQRM being trained only on AI-generated paragraphs edited by MFA students and relatively few human-written samples, it scored the 50 author-written texts higher than all LLMs, demonstrating model generalization. GPT-4.5, though considered the best LLM for writing, showed no quality advantage. The significant gap between award-winning authors and LLMs shows that in the absence of original, good-quality content, all LLMs are poor writers.

Figure 4b shows the writing quality of several LLMs under the new WQRM model when detailed content is provided in the writing prompt. In fact, the content detail often amounts to 0.5 to 0.75 times the word count of the paragraph to be generated. Results with the detailed prompts provide additional insights. Though the variance remains high for all models, the more recent models (GPT-4.5, Claude 3.7-Sonnet, Gemini-2.5-pro) achieve improved writing quality given the more detailed prompts, with median scores of around 7.0. This should not be surprising, as the amount of detail provided in the writing prompt reduces the burden of originality and novelty on the LLM. What is particularly notable is that paragraphs written by MFA students based on the same detailed content were rated significantly higher than those of all LLMs, with a median of 8.6. The gap between award-winning authors and MFA students is narrow here, although the distribution for MFA students shows higher variance. Our results highlight that even when provided with very detailed original content, LLMs are far behind trained writers.
In summary, the analysis reveals that current LLMs are not yet capable of reliably generating high-quality creative writing at the level of an MFA student or award-winning author, especially when not spoon-fed original content. When provided with enough content detail in the prompt, the latest models show promise but still remain unreliable.

# 8 Conclusion

In this work, we introduced the Writing Quality benchmark (WQ) and Writing Quality Reward Models (WQRM) to address the critical challenge of evaluating and improving the quality of AI-generated text. Our models, trained on implicit preferences via edits, significantly outperform existing approaches, achieving $74\%$ accuracy on the WQ benchmark and demonstrating strong generalization across diverse writing contexts, as confirmed by a validation study involving 9 professional writers. Future work can address alternative test-time computation such as long chains of thought (CoTs), enabling strategies like backtracking and correction of idiosyncrasies to improve writing. While our approach improves AI-generated text by reducing idiosyncrasies, it is nowhere near expert-quality writing. Nevertheless, we hope that our contributions can serve as a catalyst for further research in writing quality assessment and the development of AI writing systems that are more aligned with human preferences.

# References

Barrett R Anderson, Josh Hemant Shah, and Max Kreminski. Homogenization effects of large language models on human creative ideation. In Proceedings of the 16th Conference on Creativity & Cognition, pp. 413-425, 2024.
Anonymous. Literary voice reproduction study: mfa writers vs. llms in authorial style. In Under Submission, 2025.
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022.
Deborah Brandt.
The rise of writing: Redefining mass literacy. Cambridge University Press, 2014. +Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, et al. Open problems and fundamental limitations of reinforcement learning from human feedback. arXiv preprint arXiv:2307.15217, 2023. +Tuhin Chakrabarty, Philippe Laban, Divyansh Agarwal, Smaranda Muresan, and Chien-Sheng Wu. Art or artifice? large language models and the false promise of creativity. In Proceedings of the CHI Conference on Human Factors in Computing Systems, CHI '24, New York, NY, USA, 2024a. Association for Computing Machinery. ISBN 9798400703300. doi: 10.1145/3613904.3642731. URL https://doi.org/10.1145/3613904.3642731. +Tuhin Chakrabarty, Philippe Laban, and Chien-Sheng Wu. Can ai writing be salvaged? mitigating idiosyncrasies and improving human-ai alignment in the writing process through edits. arXiv preprint arXiv:2409.14509, 2024b. +Tuhin Chakrabarty, Vishakh Padmakumar, Faeze Brahman, and Smaranda Muresan. Creativity support in the age of large language models: An empirical study involving professional writers. In Proceedings of the 16th Conference on Creativity & Cognition, C & C '24, pp. 132-155, New York, NY, USA, 2024c. Association for Computing Machinery. ISBN 9798400704857. doi: 10.1145/3635636.3656201. URL https://doi.org/10.1145/3635636.3656201. +Yinlam Chow, Guy Tennenholtz, Izzeddin Gur, Vincent Zhuang, Bo Dai, Sridhar Thiagarajan, Craig Boutilier, Rishabh Agarwal, Aviral Kumar, and Aleksandra Faust. Inference-aware fine-tuning for best-of-n sampling in large language models. arXiv preprint arXiv:2412.15287, 2024. +Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017. 
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. Advances in neural information processing systems, 36:10088-10115, 2023.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423/.
+Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025. +Kunal Handa, Alex Tamkin, Miles McCain, Saffron Huang, Esin Durmus, Sarah Heck, Jared Mueller, Jerry Hong, Stuart Ritchie, Tim Belonax, et al. Which economic tasks are performed with ai? evidence from millions of claude conversations. +Peter Hase and Mohit Bansal. When can models learn from explanations? a formal framework for understanding the roles of explanation data. arXiv preprint arXiv:2102.02201, 2021. +John R Hayes, Linda Flower, Karen A Schriver, James Stratman, Linda Carey, et al. Cognitive processes in revision. Advances in applied psycholinguistics, 2:176-240, 1987. +John Herrman. Is that ai? or does it just suck? New York Magazine, 2024a. URL https://nymag.com/intelligencer/article/is-that-ai-or-does-it-just-suck.html. +John Herrman. The internet's ai slop problem is only going to get worse. New York Magazine - Intelligencer, 2024b. URL https://nymag.com/intelligencer/article/ai-generated-content-online-slop-spam.html. Accessed: 2025-03-06. +Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron Courville, Alessandro Sordoni, and Rishabh Agarwal. V-star: Training verifiers for self-taught reasoners. arXiv preprint arXiv:2402.06457, 2024. +Daphne Ippolito, Ann Yuan, Andy Coenen, and Sehmon Burnam. Creative writing with an ai-powered writing assistant: Perspectives from professional writers. arXiv preprint arXiv:2211.05030, 2022. +Yixin Ji, Juntao Li, Hai Ye, Kaixin Wu, Jia Xu, Linjian Mo, and Min Zhang. Test-time computing: from system-1 thinking to system-2 thinking. arXiv preprint arXiv:2501.02497, 2025. +Kate Knibbs. Confessions of an ai clickbait kingpin. Wired, 2024a. URL https://www.wired.com/story/confessions-of-an-ai-clickbait-kingpin/. Accessed: 2025-03-07. +Kate Knibbs. 
Scammy ai-generated books are flooding amazon. Wired, 2024b. URL https:// www.wired.com/story/scammy-ai-generated-books-flooding-amazon/. Accessed: 2025- 03-07. +Kate Knibbs. Ai slop is flooding medium. Wired, 2024c. URL https://www.wired.com/story/ai-generated-medium-posts-content-moderation/. Accessed: 2025-03-06. +Kate Knibbs. Some of substack's biggest newsletters rely on ai writing tools. Wired, 2024d. URL https://www.wired.com/story/substacks-writers-use-ai-chatgpt/. Accessed: 2025-03-07. + +Dmitry Kobak, Rita González-Márquez, Emőke-Ágnes Horvát, and Jan Lause. Delving into chatgpt usage in academic writing through excess vocabulary. arXiv preprint arXiv:2406.07016, 2024. +Philippe Laban, Jesse Vig, Marti A Hearst, Caiming Xiong, and Chien-Sheng Wu. Beyond the chat: Executable and verifiable text-editing with llms. arXiv preprint arXiv:2309.15337, 2023. +Nathan Lambert and Roberto Calandra. The alignment ceiling: Objective mismatch in reinforcement learning from human feedback. arXiv preprint arXiv:2311.00168, 2023. +Nathan Lambert, Valentina Pyatkin, Jacob Morrison, LJ Miranda, Bill Yuchen Lin, Khyathi Chandu, Nouha Dziri, Sachin Kumar, Tom Zick, Yejin Choi, et al. Rewardbench: Evaluating reward models for language modeling. arXiv preprint arXiv:2403.13787, 2024. +Timothy Laquintano and Annette Vee. Ai and the everyday writer. PMLA, 139(3):527-532, 2024. +Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, et al. Rlaif vs. rlhf: Scaling reinforcement learning from human feedback with ai feedback. arXiv preprint arXiv:2309.00267, 2023. +Jinsook Lee, A. J. Alvero, Thorsten Joachims, and René F. Kizilcec. Poor alignment and steerability of large language models: Evidence from college admission essays. 2025. URL https://apisemantic scholar.org/CorpusID:277321621. 
+Mina Lee, Katy Ilonka Gero, John Joon Young Chung, Simon Buckingham Shum, Vipul Raheja, Hua Shen, Subhashini Venugopalan, Thiemo Wambsganss, David Zhou, Emad A Alghamdi, et al. A design space for intelligent and interactive writing assistants. In Proceedings of the CHI Conference on Human Factors in Computing Systems, pp. 1-35, 2024. +Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Omer Levy, Luke Zettlemoyer, Jason Weston, and Mike Lewis. Self-alignment with instruction backtranslation. arXiv preprint arXiv:2308.06259, 2023. +Zhuoyan Li, Chen Liang, Jing Peng, and Ming Yin. The value, benefits, and concerns of generative ai-powered assistance in writing. In Proceedings of the CHI Conference on Human Factors in Computing Systems, CHI '24, New York, NY, USA, 2024. Association for Computing Machinery. ISBN 9798400703300. doi: 10.1145/3613904.3642625. URL https://doi.org/10.1145/3613904.3642625. +Weixin Liang, Yaohui Zhang, Zhengxuan Wu, Haley Lepp, Wenlong Ji, Xuandong Zhao, Hancheng Cao, Sheng Liu, Siyu He, Zhi Huang, et al. Mapping the increasing use of llms in scientific papers. arXiv preprint arXiv:2404.01268, 2024. +Weixin Liang, Yaohui Zhang, Mihai Codreanu, Jiayu Wang, Hancheng Cao, and James Zou. The widespread adoption of large language model-assisted writing across society. arXiv preprint arXiv:2502.09747, 2025. +Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. In The Twelfth International Conference on Learning Representations, 2023. +Chris Yuhao Liu, Liang Zeng, Jiacai Liu, Rui Yan, Jujie He, Chaojie Wang, Shuicheng Yan, Yang Liu, and Yahui Zhou. Skywork-reward: Bag of tricks for reward modeling in llms. arXiv preprint arXiv:2410.18451, 2024. +Zhexiong Liu, Diane Litman, Elaine Wang, Tianwen Li, Mason Gobat, Lindsay Clare Matsumura, and Richard Correnti. 
erevise+ rf: A writing evaluation system for assessing student essay revisions and providing formative feedback. arXiv preprint arXiv:2501.00715, 2025. + +Josh Magnus Ludan, Yixuan Meng, Tai Nguyen, Saurabh Shah, Qing Lyu, Marianna Apidianaki, and Chris Callison-Burch. Explanation-based finetuning makes models more robust to spurious cues. arXiv preprint arXiv:2305.04990, 2023. +Guillermo Marco, Julio Gonzalo, Ramón del Castillo, and María Teresa Mateo Girona. Pron vs prompt: Can large language models already challenge a world-class fiction author at creative text writing? arXiv preprint arXiv:2407.01119, 2024. +Piotr Mirowski, Kory W. Mathewson, Jaylen Pittman, and Richard Evans. Co-writing screenplays and theatre scripts with language models: Evaluation by industry professionals. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI '23, New York, NY, USA, 2023. Association for Computing Machinery. ISBN 9781450394215. doi: 10.1145/3544548.3581225. URL https://doi.org/10.1145/3544548.3581225. +Piotr Mirowski, Juliette Love, Kory Mathewson, and Shakir Mohamed. A robot walks into a bar: Can language models serve as creativity supporttools for comedy? an evaluation of llms' humour alignment with comedians. In The 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 1622-1636, 2024. +OpenAI. Introducing openai o1 preview. https://openai.com/index/introducing-openai-o1-preview/, 2024. Accessed: 2025-03-20. +Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744, 2022. +Vishakh Padmakumar and He He. Does writing with language models reduce content diversity? arXiv preprint arXiv:2309.05196, 2023. +Jane Pan, He He, Samuel R Bowman, and Shi Feng. 
Spontaneous reward hacking in iterative self-refinement. arXiv preprint arXiv:2407.04549, 2024. +Arjun Panickssery, Samuel Bowman, and Shi Feng. Llm evaluators recognize and favor their own generations. Advances in Neural Information Processing Systems, 37:68772-68802, 2024. +Jenna Russell, Marzena Karpinska, and Mohit Iyyer. People who frequently use chatgpt for writing tasks are accurate and robust detectors of ai-generated text. arXiv preprint arXiv:2501.15654, 2025. +Chantal Shaib, Yanai Elazar, Junyi Jessy Li, and Byron C Wallace. Detection and measurement of syntactic templates in generated text. arXiv preprint arXiv:2407.00211, 2024. +Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024. +Tianchun Wang, Yanzhou Chen, Zichuan Liu, Zhanwen Chen, Haifeng Chen, Xiang Zhang, and Wei Cheng. Humanizing the machine: Proxy attacks to mislead llm detectors. arXiv preprint arXiv:2410.19230, 2024. +Benjamin Warner, Antoine Chaffin, Benjamin Clavié, Orion Weller, Oskar Hallström, Said Taghadouini, Alexis Gallagher, Raja Biswas, Faisal Ladhak, Tom Aarsen, et al. Smarter, better, faster, longer: A modern bidirectional encoder for fast, memory efficient, and long context finetuning and inference. arXiv preprint arXiv:2412.13663, 2024. +Sarah Wiegrefe, Ana Marasovic, and Noah A Smith. Measuring association between labels and free-text rationales. arXiv preprint arXiv:2010.12762, 2020. +Tongshuang Wu, Michael Terry, and Carrie Jun Cai. Ai chains: Transparent and controllable human-ai interaction by chaining large language model prompts. In Proceedings of the 2022 CHI conference on human factors in computing systems, pp. 1-22, 2022. + +Yangzhen Wu, Zhiqing Sun, Shanda Li, Sean Welleck, and Yiming Yang. Inference scaling laws: An empirical analysis of compute-optimal inference for problem-solving with language models. 
arXiv preprint arXiv:2408.00724, 2024. +Ann Yuan, Andy Coenen, Emily Reif, and Daphne Ippolito. Wordcraft: story writing with large language models. In Proceedings of the 27th International Conference on Intelligent User Interfaces, pp. 841-852, 2022. +Yanli Zhao, Andrew Gu, Rohan Varma, Liang Luo, Chien-Chin Huang, Min Xu, Less Wright, Hamid Shojanazeri, Myle Ott, Sam Shleifer, et al. Pytorch fsdp: experiences on scaling fully sharded data parallel. arXiv preprint arXiv:2304.11277, 2023. +Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Tianle Li, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zhuohan Li, Zi Lin, Eric P Xing, et al. Lmsys-chat-lm: A large-scale real-world llm conversation dataset. arXiv preprint arXiv:2309.11998, 2023. + +# A Appendix + +
Which is better quality writing?
A: One evening – it was our last day – in Tours, we had dinner in a brightly-lit restaurant where the walls were lined with mirrors, frequented by a sophisticated clientele. [...] made with “pig slop,” white and tasteful. [...] expressing resentment at having been treated with contempt because we were not chic customers who ate “à la carte.”
Paragraph B is of higher quality.
Conciseness and Clarity: Paragraph B is more streamlined [...] tighter and more focused.
Emotional Depth: Paragraph B captures the speaker's evolving realization [...] making the writing more impactful.
Imagery and Language: The use of phrases [...] adds nuance and emotional weight. In contrast, Paragraph A's descriptions are literal and lack [...]
B: We sat at the group table, my father and I, our reflections dim in the restaurant's mirror. The waitress barely glanced our way. I adjusted my glasses, [...] I realize now that what I witnessed that day was not just a meal but the precise moment when social class became visible to me, manifested in a pot of yogurt, in the waitress's averted eyes, [...]
Overall, Paragraph B delivers a more poignant [...]

# A.1 Example LAMP Sample

Table 4 shows an example from the LAMP data, with the writing prompt and the response before and after editing.

Table 3: LLM preferring its own writing (Paragraph B) over Paragraph A, written by Annie Ernaux, winner of the 2022 Nobel Prize in Literature.

Prompt: What happened while you were sitting in your car, waiting for a call, amidst the torrential rain and chaotic scene in La Conchita?
Edited Resp.: As I sat in my car, the rain drummed against the roof, a relentless beat that mirrored the anxiety thrumming through my veins. I was waiting for a call, any call, that would give me a reason to move, to act, to escape the suffocating stillness. La Conchita's streets were a mess. Cars hydroplaning on the flooded roads, People scurrying for cover like ants from a disturbed nest. I watched as a woman, her hair plastered to her face, struggled to free her stuck umbrella, Her eyes darting darted towards me with a fleeting plea for help. I looked away, feeling the weight of my inaction. The rain-soaked world outside seemed to be shrinking, the droplets on my windshield coalescing into tiny mirrors that reflected my own uncertainty. My phone, silent and unyielding, lay on the passenger seat, a-constant reminder of my powerlessness. I thought of all the what ifs, the maybe, the possibilities that hung in the balance. The rain intensified, drumming out a rhythm that seemed to match the beat of my heart. In the chaos, I felt a strange sense of calm, as if the storm was washing away my doubts, leaving only the stark reality of the present. And yet, I remained frozen, waiting for that call, that spark, that would set me in motion.

Table 4: An example from LAMP. Text in bold with strikethrough represents an edited span. The normalized Initial and Final Writing Quality scores are 3.3 and 7.0, respectively.

# A.2 Generative WQRM Prompt Formats

Table 5 shows a P-style and an R-style training prompt that are used to train WQRMs.

P: {"content": "You are an AI assistant who has knowledge about creative writing.", "role": "system"}
{"content": "You are given two paragraphs of writing for a given instruction.\nYour task is to determine which paragraph is overall better in terms of writing quality.\nParagraph 1:\nAfter her father's passing, Marina and her family [......]\nParagraph 2:\n[......] had cherished so deeply.\n\nYou must produce your answer in the following JSON format:\n{"preference":"1-2"}\nwhere 'preference' should be "1" if you think Paragraph 1 is better, "2" if you think Paragraph 2 is better.\n", "role": "user"}
{"content": "{\"preference\": \"2\"}", "role": "assistant"}
R: {"content": "You are an AI assistant who has knowledge about creative writing.", "role": "system"}
{"content": "You are given a paragraph of creative writing. You must score it on a scale from 1 to 10, where 1 is the lowest quality and 10 is the highest quality.\nParagraph:\nThe rich history, and recent revitalisation of Santa Barbara are most clear [......]\nYou must produce your answer in the following JSON format:\n{"score": 1}\nwhere 'score' is an integer between 1 and 10.", "role": "user"}
{"content": "{\"score\": 7}", "role": "assistant"}
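The chat-format rows above can be assembled and the assistant's JSON answer parsed programmatically. The following is a minimal sketch, not the paper's actual code: the helper names are ours and the paragraph texts are placeholders, while the system and user strings follow the table.

```python
import json

SYSTEM = "You are an AI assistant who has knowledge about creative writing."

def build_p_prompt(paragraph_1: str, paragraph_2: str) -> list:
    """Assemble a pairwise (P) chat prompt: which of two paragraphs is better?"""
    user = (
        "You are given two paragraphs of writing for a given instruction.\n"
        "Your task is to determine which paragraph is overall better "
        "in terms of writing quality.\n"
        "Paragraph 1:\n" + paragraph_1 + "\nParagraph 2:\n" + paragraph_2 + "\n\n"
        "You must produce your answer in the following JSON format:\n"
        '{"preference": "1-2"}\n'
        "where 'preference' should be \"1\" if you think Paragraph 1 is better, "
        '"2" if you think Paragraph 2 is better.\n'
    )
    return [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": user},
    ]

def parse_answer(assistant_content: str, key: str):
    """Parse the assistant's JSON answer, e.g. '{"preference": "2"}' or '{"score": 7}'."""
    return json.loads(assistant_content)[key]

messages = build_p_prompt("First candidate paragraph ...", "Second candidate paragraph ...")
assert [m["role"] for m in messages] == ["system", "user"]
assert parse_answer('{"preference": "2"}', "preference") == "2"
assert parse_answer('{"score": 7}', "score") == 7
```

The R-style prompt differs only in its user message (a single paragraph and a 1-10 scoring instruction), so the same `parse_answer` helper covers both settings.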

# A.3 Example Rationales For Generative WQRM Training

Table 6 shows the silver natural-language rationales for the P and R modes. In P, the rationale is contrastive in nature, while in R it is an absolute critique. Both rationales cite excerpts from the original paragraph.

Table 5: Training prompts in the P and R settings for instruction-tuning Llama 3.1 70B

Paragraph 2 is more concise and direct in its storytelling, avoiding the overwrought metaphors and clichéd language found in Paragraph 1. For example, while Paragraph 1 uses flowery phrases like “a delicate bloom unfurling in the wake of a harsh winter” to describe the mother’s recovery, Paragraph 2 simply states “Marina’s mother slowly emerged from her all-consuming grief.” The second paragraph also maintains emotional authenticity without becoming melodramatic. The first paragraph’s phrases like “brick by brick, memory by memory” and “the resilience of the human spirit” feel forced and sentimental, whereas Paragraph 2 conveys the same emotional journey with more restraint and impact. The shorter length of Paragraph 2 also helps maintain the story’s momentum without getting bogged down in unnecessary elaboration.
The paragraph attempts to capture a poignant moment of parent-child separation but relies too heavily on telling rather than showing, with lines like “I felt a pang of guilt only a parent could know” and “I realized I was facing my own reluctance.” The emotional weight of the situation is spelled out rather than revealed through action or specific detail. While the core idea is relatable, the writing lacks distinctive imagery or memorable turns of phrase that would elevate it beyond the obvious. The final metaphor about “running up the charges to fill the space on my lighter bill” feels forced and doesn’t quite land effectively. The narrative maintains a consistent tone but remains in safe, conventional territory without taking any stylistic risks that might make it more compelling.

Table 6: Natural language rationales for the P and R modes, respectively

# A.4 Datasets

Art or Artifice In prior work, Chakrabarty et al. (2024a) evaluated writing quality in flash fiction (1,500-2,500 words). The dataset includes 12 writing prompts based on New Yorker stories, each with four responses: the original story plus three LLM-generated versions from GPT-3.5, GPT-4, and Claude v1.3. Three expert annotators ranked all four stories for each prompt, with results aggregated into majority preferences for each story pair. From the 12 prompts and all possible response pairs (4C2), the dataset contains 144 preference samples (including both AB and BA orderings). $25\%$ are Human-AI comparisons, while $75\%$ are AI-AI comparisons.

LAMP-test The LAMP corpus (Chakrabarty et al., 2024b) test set focuses on short-form creative writing (200-400 words), including fiction and non-fiction. It contains 201 triplets, each with a writing instruction and three responses: (1) AI-written, (2) AI-written+AI-edited, and (3) AI-written+expert-edited. Three professional writers ranked responses based on subjective preference, with results combined into a majority vote. For each instruction, all three possible response pairs were evaluated, creating 1206 total samples (by duplicating each pair in AB and BA order). Of these, $33\%$ are AI-HumanAI comparisons, and $66\%$ are AI-AI comparisons.

Style Mimic In recent work, Anonymous (2025) examined whether MFA students could mimic award-winning authors' styles. Specifically, 28 MFA students were first given 20 samples written by an award-winning author (such as Haruki Murakami, Yoko Ogawa, Percival Everett, Zadie Smith, and Joan Didion), along with their style verbalized in text. They were then provided with a writing instruction to recreate an original paragraph from the author (typically 200-400 words) while imitating the style of the author to the best of their ability. This data includes 150 sample pairs (student imitation vs.
original author response), with the original author's work implicitly preferred. All Style Mimic samples are Human-Human comparisons. Table 7 shows an example.

Synthetic Mirror Prior work on AI-detection (Emi & Spero, 2024) introduced "synthetic mirrors," a two-step approach to generate writing pairs with implicit preferences. First, an LLM creates a mirror prompt from a human-written sample, extracting a plot summary and structured features (tone, style, length). Second, this prompt produces a synthetic mirror: an AI-generated response resembling the original's content and features. We selected 280 paragraphs from New Yorker flash fiction by award-winning authors (such as Alice Munro, Jhumpa Lahiri, and Annie Ernaux). After extracting the content and structured features, we devised our mirror prompts: Write a n word paragraph in the style of author in v voice given the content below.\n plot. We generated mirror responses using GPT-4o and Claude-3.5 Sonnet, creating 560 Human-AI pairs with implicit preference for author-written responses. The benchmark consists of 1120 total preference pairs (each duplicated in AB and BA order).

LMArena LM Arena (Zheng et al., 2023) is an open platform for crowdsourced AI benchmarking. A recently released anonymized set of instructions with responses and preference judgments indicated that creative writing comprises $30\%$ of instructions, making it one of the three most common interaction types. From 100,000 creative writing samples, we filtered for (1) English content, (2) non-tied preferences, and (3) responses between 100-2,000 words. An initial inspection of the resulting 7,981 samples revealed that many did not match strict creative writing definitions. We further filtered noisy samples using GPT-4o, resulting in 1,959 pairs. Because LM Arena is larger in scale than the other datasets in the benchmark, we do not include both order variants (AB/BA) in the dataset, but we ensure that the order of the preferred response is balanced within the dataset.
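The three LMArena filters described above (English only, non-tied preference, responses of 100-2,000 words) can be sketched as a simple predicate. This is an illustrative sketch only: the field names (`language`, `winner`, `response_a`, `response_b`) are placeholders, not the released dataset's actual schema.

```python
def keep_sample(sample: dict) -> bool:
    """Apply the three LMArena filters: English, non-tied, 100-2,000-word responses."""
    if sample["language"] != "English":
        return False
    if sample["winner"] == "tie":  # drop tied preference judgments
        return False
    for text in (sample["response_a"], sample["response_b"]):
        n_words = len(text.split())
        if not (100 <= n_words <= 2000):  # word-count bounds from the text
            return False
    return True

samples = [
    {"language": "English", "winner": "model_a",
     "response_a": "word " * 150, "response_b": "word " * 300},
    {"language": "English", "winner": "tie",
     "response_a": "word " * 150, "response_b": "word " * 300},
    {"language": "French", "winner": "model_b",
     "response_a": "word " * 150, "response_b": "word " * 300},
]
kept = [s for s in samples if keep_sample(s)]
assert len(kept) == 1  # only the first sample passes all three filters
```

The subsequent GPT-4o noise-filtering pass is model-based and is not represented here.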

![](images/609ed3b0176d87ba81dae21c3aa67d0d1c9ebf78b2a1dffa7bcc99380bacb78b.jpg)
Figure 6: Three-Step Editing Pipeline to improve the writing quality of a first draft by identifying idiosyncrasies, generating rewrites, and implementing the edits.

# A.5 Writing Quality Benchmark Difficulty Analysis

![](images/2f61bc948d1b45dfae9dac29478ebd3b171164fde30bf53bb931146b3c8c35bb.jpg)
Figure 5: Gap Analysis of WQ datasets leveraging the WQRM-PR model (bars show the worse vs. better writing sample in each pair).

In order to understand the relative difficulty of the datasets within the WQ benchmark, we performed an analysis leveraging our trained WQRM model. For each sample (consisting of two writing samples with a known human preference), we computed the WQRM score for each writing sample, and compiled the results for each of the five datasets in the WQ benchmark. Figure 5 plots the average of the preferred vs. less-preferred scores on each dataset.

This analysis allows us to make several observations. First, the average WQRM gap is directly proportional to model performance on the benchmark. The Synthetic Mirror dataset has the largest average gap according to WQRM-PR (2.4 on average), and we find that many models achieve very close to perfect performance $(98\%+)$ on this dataset. On the other hand, the gap (according to WQRM-PR) is very small on Style Mimic (0.12) and LMArena (0.02), which aligns with many models performing at or very slightly above chance on these datasets. Second, the absolute scores for the low and high samples are indicative of the origin of the samples. Style Mimic is the only dataset to include Human-Human comparisons (both written by professionals), and the scores of both the worse and better writing samples are high (7.57 and 7.69). LMArena has a similarly small gap, but achieved with lower pair scores (5.99 and 6.02). Third, we find that the WQ benchmark includes a mix of high-gap (easy) and low-gap datasets.
In low-gap samples, the two responses can both have lower scores (two AI-generated samples) or both have high scores (two human-written samples). This confirms the breadth of evaluation included in the WQ benchmark, which is one of its primary design objectives.

We note that this analysis should be taken with a grain of salt: the WQRM-PR model is not a perfect score predictor, and is only a proxy for analysis, since true scores would require large-scale professional annotation (which is cost-prohibitive). Still, the analysis matches expectations, and provides additional evidence of the proper calibration of the WQRM-PR model and of the breadth of evaluation in the WQ benchmark.

# A.6 Example Human Mimic Samples

Table 7 shows an Expert-MFA contrast in which both paragraphs are centered on the same semantic content and writing style.

# A.7 Example COT Editing Prompt

The prompt in Table 8 is generated automatically based on a sample from the LAMP dataset. An LLM is then finetuned on this prompt, effectively training it to function as a three-step editing pipeline that identifies problematic spans, rewrites the spans, and executes the edits into a final edited response.

I watched my mother. It was March, and outside, the sun glinted off the sidewalks and the icy edges of the snow. It was Saint Patrick's Day and the nurses brought my mother a square block of green Jell-O that sat quivering on the table beside her. It was the last full day of her life, and my mother did not sleep, she did not wake. She held her eyes still and open. They were the bluest thing in the room, perhaps in all of Duluth. Bluer than the lake. They were the color of the sky on the best day of your life. My mother died fast but not all of a sudden. A slow-burning fire when flames disappear to smoke and then smoke to air. She never once closed her eyes.
First they were bitter and then they were bewildered and then they changed again to something else, to a state that I have had, finally, to see as heroic. Blue, blue eyes. Daggers of blue wanting and wanting. To stay, to stay.

My mother died on St. Patrick's Day. There was snow outside, and sunlight glanced off the ice and back into her hospital room. A cup of green Jell-O lingered untouched on its tray. My mother was unresponsive, but not quite asleep. Her eyes were open and vivid. There was an alertness in them still, but what she was looking at, I could not say. The bright blue of her irises was like a frozen lake, contrasting the dull wash of beige and gray shrouding the room. The sky lived inside my mother. It lived inside her then, too, even as we bided time together at the very end, knowing there was only one thing left for her to do. Her fading life was a fire flickering down to smoke—it would only last as long as there was something to burn. There was bitterness too in her eyes, then bewilderment, then a stoic, quiet heroism. A commitment to her own dignity. A promise to endure this final test, no matter how unfairly it had been imposed on her. Her eyes were so blue, my mother's eyes, a fierce blue, a frozen lake, a sheen of ice that refused to melt, even as the sun broke it apart.

Table 7: Imitation of Original Paragraph (Top Row) from Cheryl Strayed, written by an MFA student

# A.8 Expert Annotation Result Breakdown

In Figure 7, we present the results of the annotations from experts for each model (GPT-4o, Llama 3.1 70b) and writing domain (fiction, nonfiction, marketing).

At a high level, the responses selected by the WQRM model (Best Edit) achieve the best average rank in all six conditions. However, the selection aligns more with expert preference (in other words, the preference is more pronounced) for the fiction domain (rather than nonfiction) and for GPT-4o responses (rather than Llama 3.1 70b).
We posit that this is due to the distribution of training data for the WQRM model, which included a majority of fiction samples and did not include Llama-generated responses. However, the fact that preference is still observed in the other domains (including marketing, which differs widely from fiction writing) is encouraging. Improving the generalization of the WQRM further can be accomplished by collecting annotations in additional writing domains, which can be used to train an improved WQRM model.

# A.9 Comparison

Table 9 shows three different versions of the same paragraph: the First Draft along with two edited versions (Random Edit and Best Edit), with their respective rewards from WQRM. Experts rank this triplet as Best Edit > Random Edit > First Draft.

# A.10 Expert Annotation Interface

Figure 8 shows the annotation interface that is provided to experts. They read three responses and rank them based on overall quality.

You are given a paragraph of creative writing. Your task is to improve the quality of the writing. You must identify specific spans that can be improved, then propose rewriting for each identified span, and finally return the entire paragraph with the proposed changes implemented.

Here is the paragraph you are editing:

The room was dimly lit, with the soft hum of machinery filling the silence. I sat beside Lila, squeezing her hand, as the technician swirled the wand over her belly. The screen flickered to life, a grainy black and white, like an ancient TV trying to find it's signal. Slowly, an image began to form; the unmistakable curve of a tiny head, the flutter of something I later learned was a heartbeat. I leaned forward, almost holding my breath, mesmerized by the play of light and shadows that composed this new life. Lila's grip tightened, and I glanced at her, finding her eyes glossy, a smile tugging at the corners of her mouth. I turned back to the screen, emotions swirling—awe, fear, hope.
In that moment, the abstract idea of our baby transformed into something real, tangible. The room's dimness seemed to fold around us, making space for this silent, wordless connection. There, in the ghostly shapes on the monitor, I saw not just a child, but a future—a future that felt both incredibly close and impossibly distant.

Part 1: Identifying Problematic Spans

Span 1: 'The room was dimly lit, with the soft hum of machinery filling the silence' (Category: 'Awkward Word Choice and Phrasing')

Span 2: 'I sat beside Lila, squeezing her hand, as the technician swirled the wand over her belly' (Category: 'Lack of Specificity and Detail')

Part 2: Proposing Rewriting for Problematic Spans

Span 1: 'The room was dimly lit, with the soft hum of machinery filling the silence' $\rightarrow$ 'The soft hum of machinery filled the room with an unfamiliar yet gentle melody.'

Span 2: 'I sat beside Lila, squeezing her hand, as the technician swirled the wand over her belly' $\rightarrow$ 'I sat beside Lila, squeezing her hand as she lay on the table while the technician swirled the wand over her rounded belly.'

Part 3: Implementing Proposed Edits

((Written Response with Edits Executed))

Table 8: Example COT editing instruction based on a sample from the LAMP dataset. The model is instructed to: (1) identify problematic spans, (2) rewrite the spans to improve the writing, and (3) execute all edits into the edited response.
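The three-part completion format shown in Table 8 lends itself to simple post-processing. Below is a minimal sketch of parsing the Part 2 rewrites and applying them to the draft; the quoting and arrow conventions follow the template above, but the exact parsing rules are our assumption, not the paper's implementation.

```python
import re

def apply_proposed_edits(paragraph: str, completion: str) -> str:
    """Extract Part 2 lines of the form
         Span N: 'original span' → 'proposed rewrite'
    and apply each rewrite to the paragraph via string replacement."""
    pattern = r"Span \d+: '(.+?)' → '(.+?)'"
    for original, rewrite in re.findall(pattern, completion):
        paragraph = paragraph.replace(original, rewrite)
    return paragraph

draft = "The room was dark. I sat beside her."
completion = (
    "Part 2: Proposing Rewriting for Problematic Spans\n"
    "Span 1: 'The room was dark' → 'The room was dimly lit'\n"
)
assert apply_proposed_edits(draft, completion) == "The room was dimly lit. I sat beside her."
```

In practice the finetuned model produces Part 3 itself, so a parser like this is only needed as a fallback when the executed edits must be verified against the proposed spans.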
+ +![](images/554d5dc700143458d1e47ba4855fde3721268e76f6053a7a818726a354ed637f.jpg) + +![](images/431850aee3dd501e63f57f3e2ba338586b06f6ef5fc21563d10e350381277019.jpg) +Figure 8: Annotation interface + +![](images/67e9bb817d878c15e6dda80357e8baa4e7b2fac21cb321a90be633ef7affed57.jpg) + +![](images/83564d9a51dbd08e07b8686abce714c359b3933b4ec07dbf8b7cc8e803428fe3.jpg) +Figure 7: Breakdown of results of the expert annotation we conducted for each of the three domains (fiction, nonfiction, marketing) and the two models (GPT-4o, LLama 3.1 70b). Overall, WQRM selection was most aligned with expert preference in the Fiction domain, and for GPT-4o generations. + +![](images/49e6ff59e1e7152046f36c227c36b4c8b74aeb4483e8bba762d084bc5492a284.jpg) + +![](images/0b9d340951461e10644d5f6940b5ec13fbd9689a7a99f4f2349125dd2c20d233.jpg) + +# Writing Sample Judgment + +# Writing Instruction: + +A daughter watches their mother during her final day of life that coincides with St Patricks Day in a Duluth hospital, observing her persistently open blue eyes as she transitions from life to death with a fierce desire to remain alive. + +# Please rank these writing samples from most to least preferred: + +# Sample 1 + +In the hushed corridors of the Duluth hospital, the bustle of St. Patrick's Day celebrations outside felt worlds away. I sat by my mother's bedside, the synthetic hum of medical machines filling the space between us. Her blue eyes, usually so full of life and vibrancy, were steady and intense, flicking to me with a sharpness that defied her frail body. She wanted more—more of what? More moments? More breaths? Her desire was palpable, and knowing what to do with it felt like holding fragile glass shards. How to speak? What to say? Inside me, everything felt cramped, too many things living at once without space. From the window behind us, I saw the darkness become green with celebration in reflection, and yet I continued to look at her. Her eyes. Her eyes were everything to me now. 
In those final hours, her gaze cemented in me her love, resilience, and bravery. The world outside no longer mattered. + +# Rank Sample 1 + +Most Preferred Writing Option +Second Favorite Writing Option +Least Preferred Writing Option + +Submit Rankings + +# Sample 2 + +In the hushed corridors of the Duluth hospital, the bustle of St. Patrick's Day celebrations outside felt worlds away. I sat by my mother's bedside, the synthetic hum of medical machines filling the space between us. Her blue eyes, usually so full of life and vibrancy, were steady and intense, flicking to me with a sharpness that defied her frail body. It was as if she was silently insisting on one more moment, one more breath. Her desire to stay with me was palpable, wrapping us both in a fragile embrace. I wanted to speak, to reassure her, but the words felt caught in the back of my throat, tangled with emotions I wasn't ready to unpack. The world outside turned shades of green in celebration, yet inside, my focus was drawn entirely to the fierce resolve in her gaze. In those final hours, her eyes told stories of love, resilience, and an unwavering fight to anchor herself in this world just a little longer. + +# Rank Sample 2 + +Most Preferred Writing Option +Second Favorite Writing Option +Least Preferred Writing Option + +# Sample 3 + +In the corridors of the Duluth hospital, it was St. Patrick's Bed, but all the bustle and noise outside felt worlds away. I sat by my mother's bedside. The hum of the machines filled the silence between us. Her blue eyes flicked to me with an intensity that defied her frail body. She was silently insisting on one more moment, one more breath. Her desire to stay with me was almost tangible. I wanted to speak, to reassure her, but the words felt caught in the back of my throat, tangled. The world outside turned in festive shades of green in celebration, yet inside, my focus was drawn entirely to the fierce resolve in her gaze. 
Those final hours, the love we shared, her resilience, and her fight to stay tethered to our world remain imprinted on my mind to this day. 

# Rank Sample 3 

Most Preferred Writing Option 
Second Favorite Writing Option 
Least Preferred Writing Option 

# A.11 Better Calibrated WQRM model for Content and Quality Experiment 

Since WQRM was only trained on samples from LAMP, which consists of AI-generated paragraphs edited by MFA students, it is not well calibrated to reward higher-quality human writing. For this purpose, we added 100 paragraphs written by 5 award-winning authors (20 each) to our training data. We chose 5 authors who were part of the Style Mimic data. Each paragraph written by an award-winning author was assigned a score of 10.0. Even within writing from trained professionals, there is significant variability. To address this, we source an additional 80 independent paragraphs written by MFA students and published in prestigious literary magazines such as Electric Lit, Joyland, and The Paris Review, and add them to our training data. Each paragraph written by an MFA student was assigned a score of $7.5^{7}$ . 

At 22, I handed my first completed novel to my father, not fully aware of the meaning it held for both of us. He took it, eyes softening as he turned the pages, lingering just enough to let me know he saw more than words. Then came the moment that stayed with me: an embrace that spoke volumes, a quiet prediction slipping past his lips, that one day I'd earn a place among the literary greats. Somehow, those words anchored themselves in my mind, reassuring in their sureness. Through the highs and lows of my writing career, his faith in me never wavered, echoing in my thoughts whenever doubt crept in. His death in 2002 marked an inevitable yet profound shift—suddenly, the world felt heavier. Sorting through his things, I found the suitcase he left me, an ordinary object now imbued with significance. 
Inside were scattered remnants of our shared past, but it was that embrace, and his words, that lingered strongest. He had seen something in me that I hadn't yet grasped myself, and for that, those moments serve as an enduring source of comfort and motivation. Years later, when I finally stood on stage accepting the literary prize he had foreseen, I wished he could have been there to witness it, though in a way he already had been all along. Dreams can be crowded with voices that spur us on or pull us back, but his was the one that guided me, quiet and steady. As I continue to write, the memory of my father's belief remains a compass, gently reminding me of where I've been and where I might yet go. "score": 3.30 

At 22, I handed my first completed novel to my father, and in that moment I became aware of the significance it held for us both. He took it and began to read, lingering just enough to let me know he saw more than words. Afterward, he embraced me and said one day I'd earn a place among the literary greats. Before then, he had said little about my writing, and these words anchored themselves in my mind, reassuring in their sureness. He had never said anything like it before, but he continued to echo that faith through the highs and lows of my career. His death in 2002 marked an inevitable yet profound shift. Suddenly the world felt heavier. Sorting through his things, I found the suitcase he left me, an ordinary object now imbued with significance. Inside were scattered remnants of our shared past, but it was that embrace and his words that lingered strongest. He had seen something in me that I hadn't yet grasped myself, and those moments served as an enduring source of comfort and motivation. Years later, when I finally stood on stage accepting the literary prize—the only prize—he had foreseen, I wished he could have been there to witness it. Dreams can be hostile to our hopes, but his was the one that guided me; his quietness was steady. 
Now, the memory of my father's belief remains a compass; I wish I could send him an update. "score": 4.43 

At 22, I handed my first completed novel to my father, not fully prepared for what it might mean. He took it, eyes softening as he turned the pages, lingering long enough, I felt, to take in the feeling of things. Finally, we embraced, and he leaned back to say what I hadn't dared to hope—that one day I'd be among the literary greats. No matter how tough things got or how much death loomed over me, I was comforted by those words, almost sure of their truth. His death in 2002 brought with it an unwelcome heaviness. I found significance even in his old suitcase, which I kept, shuffling through it fondly. There were plenty of other mementos, too, but I'd always have the memory of that embrace, the words. Years later, when I finally stood on stage accepting the literary prize he'd foreseen, I wished he could have been there to witness it. Whatever noise came, whatever doubt, his voice led me quietly out of it. I swear I can still hear him now. "score": 6.84 

Table 9: (a) First Draft (b) Random Edit (c) Best Edit along with their rewards assigned by WQRM. 

Publication at a venue already means these paragraphs have undergone scrutiny and are of decent quality. After adding these 180 samples to the LAMP-PR training set, we retrained WQRM. 

This paragraph is written in the first person and revolves around a family Christmas gathering. The narrator reflects on how her father gave her a generous cash gift and invited her to Disney World with his new family. The narrator declined, fabricating an excuse about school, despite feeling the emotional distance growing between her, her father, and his new partner, Chitra. The narrator's half-sisters, Rupa and Piu, were upset by this decision, not understanding why she didn't want to join them. 
The narrator felt a sense of responsibility to uphold the memory of her late mother, just as Rupa and Piu symbolized their own father's legacy, while also sensing that both Chitra and her father are relieved by her decision to stay behind. The paragraph captures the emotional complexities of blended family dynamics, grief, and feelings of displacement during what should be a celebratory time. + +Table 10: Detailed Content \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07532/images/0b9d340951461e10644d5f6940b5ec13fbd9689a7a99f4f2349125dd2c20d233.jpg b/data/2025/2504_07xxx/2504.07532/images/0b9d340951461e10644d5f6940b5ec13fbd9689a7a99f4f2349125dd2c20d233.jpg new file mode 100644 index 0000000000000000000000000000000000000000..97115ff4a875b3c2bd83b6820b3c6d4d9b8f868f --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/0b9d340951461e10644d5f6940b5ec13fbd9689a7a99f4f2349125dd2c20d233.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc6c137a32b9cf498f47916c1d7e7fb9684e78d7f17d29446180698ebc57ad33 +size 13047 diff --git a/data/2025/2504_07xxx/2504.07532/images/16c263323e5750f59850badc6d20ea1146d6c779a721a6434008265c6a6a5153.jpg b/data/2025/2504_07xxx/2504.07532/images/16c263323e5750f59850badc6d20ea1146d6c779a721a6434008265c6a6a5153.jpg new file mode 100644 index 0000000000000000000000000000000000000000..81d8dba9b952a9c01824ea8f35eb0a054dd12d94 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/16c263323e5750f59850badc6d20ea1146d6c779a721a6434008265c6a6a5153.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f4f2b832e9bf8fbe0110155bfd4e699f983905e57b12f1f6545d62f8e53f99b +size 47150 diff --git a/data/2025/2504_07xxx/2504.07532/images/1bfddae7f9254bd920a3d04ad7dfbe9b3a91c4130fc9a7f98fbab4c9ad9d0f20.jpg b/data/2025/2504_07xxx/2504.07532/images/1bfddae7f9254bd920a3d04ad7dfbe9b3a91c4130fc9a7f98fbab4c9ad9d0f20.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..21484db5c50d38bf0f66670d313edaba41de7c82 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/1bfddae7f9254bd920a3d04ad7dfbe9b3a91c4130fc9a7f98fbab4c9ad9d0f20.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a84fab0f48e95c173eddd19aed7427ae97e3d863af8b74315432b4514a0ac631 +size 19122 diff --git a/data/2025/2504_07xxx/2504.07532/images/1db88ddb7c3d6e64e6370de4840630adc086036f36b58562c5aefb632bacc535.jpg b/data/2025/2504_07xxx/2504.07532/images/1db88ddb7c3d6e64e6370de4840630adc086036f36b58562c5aefb632bacc535.jpg new file mode 100644 index 0000000000000000000000000000000000000000..025544d861a761d8a805262051490d5c11f20954 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/1db88ddb7c3d6e64e6370de4840630adc086036f36b58562c5aefb632bacc535.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a0cae5e43ad4d3e6ee1e0ef75cc3f6f3d40988c0317b0fe45d996225380491af +size 18006 diff --git a/data/2025/2504_07xxx/2504.07532/images/2f61bc948d1b45dfae9dac29478ebd3b171164fde30bf53bb931146b3c8c35bb.jpg b/data/2025/2504_07xxx/2504.07532/images/2f61bc948d1b45dfae9dac29478ebd3b171164fde30bf53bb931146b3c8c35bb.jpg new file mode 100644 index 0000000000000000000000000000000000000000..69a9410897f47352511a71b83436e9877fb29e33 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/2f61bc948d1b45dfae9dac29478ebd3b171164fde30bf53bb931146b3c8c35bb.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:62c7a9dde30054abb5604c69c03c9169c76fa8ec6d9dd0013356804aac4da337 +size 14983 diff --git a/data/2025/2504_07xxx/2504.07532/images/3459785c0ac01b03e2827f0c34f42c6897261eb4c0508044302218241bab4639.jpg b/data/2025/2504_07xxx/2504.07532/images/3459785c0ac01b03e2827f0c34f42c6897261eb4c0508044302218241bab4639.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b332ae5439db59da48e6625bebdf77f39fc109e0 --- /dev/null +++ 
b/data/2025/2504_07xxx/2504.07532/images/3459785c0ac01b03e2827f0c34f42c6897261eb4c0508044302218241bab4639.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:66437313eb994b1cf1386b84e94a94d1ee0cb7f0539ced947e18855ecbc774ef +size 183270 diff --git a/data/2025/2504_07xxx/2504.07532/images/3d68dc97a876f3af5134e0b6f154411a088aeaaa45f20a7956ed0e3b8a5c7524.jpg b/data/2025/2504_07xxx/2504.07532/images/3d68dc97a876f3af5134e0b6f154411a088aeaaa45f20a7956ed0e3b8a5c7524.jpg new file mode 100644 index 0000000000000000000000000000000000000000..585b39e157b1ac6ada8986d57fb125b17d31422f --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/3d68dc97a876f3af5134e0b6f154411a088aeaaa45f20a7956ed0e3b8a5c7524.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aae73b26262f5c84551812296358800edd2876a763ef56a7ce2c7d8370198950 +size 93015 diff --git a/data/2025/2504_07xxx/2504.07532/images/3ec93e4d1e723a8af4314b35f784d413ae1a9eefe64cfe4cb04e3f8df32e3b73.jpg b/data/2025/2504_07xxx/2504.07532/images/3ec93e4d1e723a8af4314b35f784d413ae1a9eefe64cfe4cb04e3f8df32e3b73.jpg new file mode 100644 index 0000000000000000000000000000000000000000..0cf25cfac33360b70cf21328eb8afac085f6ef94 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/3ec93e4d1e723a8af4314b35f784d413ae1a9eefe64cfe4cb04e3f8df32e3b73.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:526622d3fc11c40d8e6051140ae27df0910e936ac38b5868aafacec70cc52225 +size 29653 diff --git a/data/2025/2504_07xxx/2504.07532/images/431850aee3dd501e63f57f3e2ba338586b06f6ef5fc21563d10e350381277019.jpg b/data/2025/2504_07xxx/2504.07532/images/431850aee3dd501e63f57f3e2ba338586b06f6ef5fc21563d10e350381277019.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f94cfd4efb72f6588c89dbbdd4bd0fe6eaa0d4e2 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/431850aee3dd501e63f57f3e2ba338586b06f6ef5fc21563d10e350381277019.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:bdbe7ea17dc9f2a495ceb1500cbbb45a2940eb10417af6a9bf7ed9ef30dd416c +size 13144 diff --git a/data/2025/2504_07xxx/2504.07532/images/49e6ff59e1e7152046f36c227c36b4c8b74aeb4483e8bba762d084bc5492a284.jpg b/data/2025/2504_07xxx/2504.07532/images/49e6ff59e1e7152046f36c227c36b4c8b74aeb4483e8bba762d084bc5492a284.jpg new file mode 100644 index 0000000000000000000000000000000000000000..42cfded38cdde9ff01ba660be45c5b9ac7395967 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/49e6ff59e1e7152046f36c227c36b4c8b74aeb4483e8bba762d084bc5492a284.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2337081095d4a02c84ddf7266010b3d841a03262035414308e50ff2dd617e791 +size 12890 diff --git a/data/2025/2504_07xxx/2504.07532/images/53481bb60dc76404125107849db74655f4babc96c76f0e025a231c030dd3d169.jpg b/data/2025/2504_07xxx/2504.07532/images/53481bb60dc76404125107849db74655f4babc96c76f0e025a231c030dd3d169.jpg new file mode 100644 index 0000000000000000000000000000000000000000..e6f0c175a44cec70cbe389e24c6c5db477f0ad18 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/53481bb60dc76404125107849db74655f4babc96c76f0e025a231c030dd3d169.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1862a506c0ac9af10423fe234254372e2547d06c8bfc7789eb96dd1ed8e6f07 +size 149623 diff --git a/data/2025/2504_07xxx/2504.07532/images/554d5dc700143458d1e47ba4855fde3721268e76f6053a7a818726a354ed637f.jpg b/data/2025/2504_07xxx/2504.07532/images/554d5dc700143458d1e47ba4855fde3721268e76f6053a7a818726a354ed637f.jpg new file mode 100644 index 0000000000000000000000000000000000000000..85fca7d8b5476328de33684ca2cec57314ebef62 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/554d5dc700143458d1e47ba4855fde3721268e76f6053a7a818726a354ed637f.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a2d16433e90b33e77a8233c64e2206861e68e424b6274ccab918987491b715e9 +size 12765 diff --git 
a/data/2025/2504_07xxx/2504.07532/images/609ed3b0176d87ba81dae21c3aa67d0d1c9ebf78b2a1dffa7bcc99380bacb78b.jpg b/data/2025/2504_07xxx/2504.07532/images/609ed3b0176d87ba81dae21c3aa67d0d1c9ebf78b2a1dffa7bcc99380bacb78b.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4bea84812fda89992a942d02d00d3fbb2d2b21a6 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/609ed3b0176d87ba81dae21c3aa67d0d1c9ebf78b2a1dffa7bcc99380bacb78b.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dd19f0597be1415c2dc05bd278a67d97e604792863d825d441a11db07e7b8c50 +size 22720 diff --git a/data/2025/2504_07xxx/2504.07532/images/67e9bb817d878c15e6dda80357e8baa4e7b2fac21cb321a90be633ef7affed57.jpg b/data/2025/2504_07xxx/2504.07532/images/67e9bb817d878c15e6dda80357e8baa4e7b2fac21cb321a90be633ef7affed57.jpg new file mode 100644 index 0000000000000000000000000000000000000000..d55525e5cd310395f2a75923d8c0076f6f07b479 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/67e9bb817d878c15e6dda80357e8baa4e7b2fac21cb321a90be633ef7affed57.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b6b4bee448e70220a1f468a1a17529ed8cc0c7f7a5f330ac161d0bd11d25f7e8 +size 12781 diff --git a/data/2025/2504_07xxx/2504.07532/images/688e253e09a81a8610e16e1c055d305155309d9766016f4cc5afccb96ae6bc63.jpg b/data/2025/2504_07xxx/2504.07532/images/688e253e09a81a8610e16e1c055d305155309d9766016f4cc5afccb96ae6bc63.jpg new file mode 100644 index 0000000000000000000000000000000000000000..ad6af12afc4e388759314a5d985434093d6169e3 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/688e253e09a81a8610e16e1c055d305155309d9766016f4cc5afccb96ae6bc63.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a01c6d98c75501e031d53acdcdbe08e77c3afe010e0db02b8aa01f66becd1534 +size 74848 diff --git a/data/2025/2504_07xxx/2504.07532/images/70c90c2fb2fa11ab7926ba0d130319324863f723979571d891269e6977f87c7f.jpg 
b/data/2025/2504_07xxx/2504.07532/images/70c90c2fb2fa11ab7926ba0d130319324863f723979571d891269e6977f87c7f.jpg new file mode 100644 index 0000000000000000000000000000000000000000..55fe7fe13195b351d588c84ade5381d620cf685f --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/70c90c2fb2fa11ab7926ba0d130319324863f723979571d891269e6977f87c7f.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:214de9b6128187849b8c371b9da3ce47ef8f1bc494423fcecf2bf2248696b806 +size 22015 diff --git a/data/2025/2504_07xxx/2504.07532/images/829672b917337c9a73eb40af91f6ec69e742a115a2d9e839b5749586c9021915.jpg b/data/2025/2504_07xxx/2504.07532/images/829672b917337c9a73eb40af91f6ec69e742a115a2d9e839b5749586c9021915.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4935e994b9b6f607fc909e127d6140ed91b02e34 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/829672b917337c9a73eb40af91f6ec69e742a115a2d9e839b5749586c9021915.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1fc2e0dc0b6a04023aa7178e46a3c544d8337abeb351df937df01e2838c71370 +size 23307 diff --git a/data/2025/2504_07xxx/2504.07532/images/83564d9a51dbd08e07b8686abce714c359b3933b4ec07dbf8b7cc8e803428fe3.jpg b/data/2025/2504_07xxx/2504.07532/images/83564d9a51dbd08e07b8686abce714c359b3933b4ec07dbf8b7cc8e803428fe3.jpg new file mode 100644 index 0000000000000000000000000000000000000000..1e530c6259d225015dad437151e7adf82c2a7b3f --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/83564d9a51dbd08e07b8686abce714c359b3933b4ec07dbf8b7cc8e803428fe3.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:92e573b10e462c77ff41ebaa25e802a7af421da3dae071d482fae737529b7db4 +size 12932 diff --git a/data/2025/2504_07xxx/2504.07532/images/9479f61330a0b135d11098f13649f4ebd8c3ea141f85b130c54586eaff7c65f7.jpg b/data/2025/2504_07xxx/2504.07532/images/9479f61330a0b135d11098f13649f4ebd8c3ea141f85b130c54586eaff7c65f7.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..89d68454d130c87ae8842cd72f0589052df16a1d --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/9479f61330a0b135d11098f13649f4ebd8c3ea141f85b130c54586eaff7c65f7.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e53ed2ae94d1116ecbf897f0a050f2b0bbe12fc364e00c9bf57a0e376c1f463f +size 45080 diff --git a/data/2025/2504_07xxx/2504.07532/images/be9fc66a9c8361b04fe2c324c844e669e41cd99490850abdb1afe6b0c50ace73.jpg b/data/2025/2504_07xxx/2504.07532/images/be9fc66a9c8361b04fe2c324c844e669e41cd99490850abdb1afe6b0c50ace73.jpg new file mode 100644 index 0000000000000000000000000000000000000000..d3e9234d184036b40427b4538bffb08e4f167471 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/be9fc66a9c8361b04fe2c324c844e669e41cd99490850abdb1afe6b0c50ace73.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:46d843051c13b9f4e43da26c9c79e7f385323ecd0eb4c373023d55d7b3c7fbd1 +size 145983 diff --git a/data/2025/2504_07xxx/2504.07532/images/e1efa10af565f7bba9f7532bac21e8a2f89791fb4dcb049a6d4a833edbd6aa2f.jpg b/data/2025/2504_07xxx/2504.07532/images/e1efa10af565f7bba9f7532bac21e8a2f89791fb4dcb049a6d4a833edbd6aa2f.jpg new file mode 100644 index 0000000000000000000000000000000000000000..c5a56716029a937b9426233a951c156b8b4e5b10 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/images/e1efa10af565f7bba9f7532bac21e8a2f89791fb4dcb049a6d4a833edbd6aa2f.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0bf39b0194fd227c70e1648f6ea79b3ac01e5ead32996ca59f3b64613dd01cfe +size 157438 diff --git a/data/2025/2504_07xxx/2504.07532/layout.json b/data/2025/2504_07xxx/2504.07532/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..3d94732c3b9bd2086713b2279ea45ee86b22406c --- /dev/null +++ b/data/2025/2504_07xxx/2504.07532/layout.json @@ -0,0 +1,11738 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 105, + 78, + 504, + 113 + ], + "type": 
"title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 78, + 504, + 113 + ], + "spans": [ + { + "bbox": [ + 105, + 78, + 504, + 113 + ], + "type": "text", + "content": "AI-Slop to AI-Polish? Aligning Language Models through Edit-Based Writing Rewards and Test-time Computation" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 110, + 132, + 380, + 145 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 132, + 380, + 145 + ], + "spans": [ + { + "bbox": [ + 110, + 132, + 380, + 145 + ], + "type": "text", + "content": "Tuhin Chakrabarty" + }, + { + "bbox": [ + 110, + 132, + 380, + 145 + ], + "type": "inline_equation", + "content": "^{1*}" + }, + { + "bbox": [ + 110, + 132, + 380, + 145 + ], + "type": "text", + "content": ", Philippe Laban" + }, + { + "bbox": [ + 110, + 132, + 380, + 145 + ], + "type": "inline_equation", + "content": "^{2*}" + }, + { + "bbox": [ + 110, + 132, + 380, + 145 + ], + "type": "text", + "content": ", Chien-Sheng Wu" + }, + { + "bbox": [ + 110, + 132, + 380, + 145 + ], + "type": "inline_equation", + "content": "^{1}" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 112, + 145, + 317, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 112, + 145, + 317, + 156 + ], + "spans": [ + { + "bbox": [ + 112, + 145, + 317, + 156 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 112, + 145, + 317, + 156 + ], + "type": "text", + "content": "Salesforce AI Research " + }, + { + "bbox": [ + 112, + 145, + 317, + 156 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 112, + 145, + 317, + 156 + ], + "type": "text", + "content": "Microsoft Research" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 112, + 157, + 408, + 168 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 112, + 157, + 408, + 168 + ], + "spans": [ + { + "bbox": [ + 112, + 157, + 408, + 168 + ], + "type": "text", + "content": 
"{tuhin.chakr,wu.jason}@salesforce.com,plaban@microsoft.com" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 280, + 196, + 331, + 209 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 280, + 196, + 331, + 209 + ], + "spans": [ + { + "bbox": [ + 280, + 196, + 331, + 209 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 140, + 221, + 471, + 477 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 140, + 221, + 471, + 477 + ], + "spans": [ + { + "bbox": [ + 140, + 221, + 471, + 477 + ], + "type": "text", + "content": "AI-generated text is proliferating across domains, from creative writing and journalism to marketing content and scientific articles. Models can follow user-provided instructions to generate coherent and grammatically correct outputs but in this work, we study a more fundamental question: how do we evaluate and improve the writing quality of AI-generated text? Writing quality assessment has received less attention from the community, in part because it is fundamentally subjective and requires expertise. We first introduce the Writing Quality Benchmark (WQ) by consolidating five writing-preference datasets into 4,729 writing quality judgments. Our experiments show that most of the competitive baselines, including state-of-the-art LLMs that excel at reasoning tasks, barely outperform random baselines on WQ. We then train specialized Writing Quality Reward Models (WQRM) of various sizes for writing quality assessment that demonstrate strong generalization on four out-of-distribution test sets and " + }, + { + "bbox": [ + 140, + 221, + 471, + 477 + ], + "type": "inline_equation", + "content": "74\\%" + }, + { + "bbox": [ + 140, + 221, + 471, + 477 + ], + "type": "text", + "content": " accuracy on the WQ benchmark. 
To further show WQRM's practical benefits during inference, we leverage additional test-time compute to generate and rank multiple candidate revisions, allowing us to select higher-quality outputs from an initial draft. Human evaluation with 9 experienced writers confirm that WQRM-based selection produces writing samples preferred by experts " + }, + { + "bbox": [ + 140, + 221, + 471, + 477 + ], + "type": "inline_equation", + "content": "66\\%" + }, + { + "bbox": [ + 140, + 221, + 471, + 477 + ], + "type": "text", + "content": " overall, and " + }, + { + "bbox": [ + 140, + 221, + 471, + 477 + ], + "type": "inline_equation", + "content": "72.2\\%" + }, + { + "bbox": [ + 140, + 221, + 471, + 477 + ], + "type": "text", + "content": " when the reward gap is larger than 1 point. We release our datasets and models to encourage community engagement with writing quality assessment and development of AI writing systems better aligned with human preferences." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 495, + 196, + 509 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 495, + 196, + 509 + ], + "spans": [ + { + "bbox": [ + 105, + 495, + 196, + 509 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 520, + 506, + 675 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 520, + 506, + 675 + ], + "spans": [ + { + "bbox": [ + 104, + 520, + 506, + 675 + ], + "type": "text", + "content": "Writing is one of the most important pillars of education, enabling learners to critically engage with the topics they study. In *The Rise of Writing Brandt* (2014) argues that the \"information economy's insatiable demand for symbol manipulation—'knowledge work'—has forced many workers to reorient their labor around the production of prose\" (Laquintano & Vee, 2024). 
Generative AI tools have further blurred these boundaries, especially around how labor and writing practices are evolving across both academic (Kobak et al., 2024; Lee et al., 2025) and professional contexts (Liang et al., 2025). Often awkward and jarring to read, low-effort text generated by AI is now flooding web browsers and social-media platforms much like spam in old inboxes (Herrman, 2024a; Knibbs, 2024c;d;b;a). This neologistic term of revulsion is often referred to as \"A.I. slop\" (Herrman, 2024b). Extensive social experimentation with ChatGPT has invited criticism on social media and in the popular news platforms that its writing has a disembodied \"robovoice\". This has led to humanization methods (Wang et al., 2024) and even start-ups such as StealthGPT or HumanizeAI, which explicitly attempt to make AI-generated text more humanlike." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 680, + 506, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 680, + 506, + 715 + ], + "spans": [ + { + "bbox": [ + 104, + 680, + 506, + 715 + ], + "type": "text", + "content": "Despite LLMs showing impressive performance in math and coding, their ability to write high-quality text has been rather pedestrian. Recent work from Chakrabarty et al. 
(2024b) shows how text generated from widely used LLMs are often rife with clichés, purple prose," + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 25, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 25, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 25, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 14, + 218, + 37, + 574 + ], + "type": "aside_text", + "angle": 270, + "lines": [ + { + "bbox": [ + 14, + 218, + 37, + 574 + ], + "spans": [ + { + "bbox": [ + 14, + 218, + 37, + 574 + ], + "type": "text", + "content": "arXiv:2504.07532v3 [cs.CL] 12 Aug 2025" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 117, + 720, + 205, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 117, + 720, + 205, + 732 + ], + "spans": [ + { + "bbox": [ + 117, + 720, + 205, + 732 + ], + "type": "text", + "content": "**Equal contribution." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "text", + "content": "1" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 129, + 83, + 482, + 254 + ], + "blocks": [ + { + "bbox": [ + 129, + 83, + 482, + 254 + ], + "lines": [ + { + "bbox": [ + 129, + 83, + 482, + 254 + ], + "spans": [ + { + "bbox": [ + 129, + 83, + 482, + 254 + ], + "type": "image", + "image_path": "688e253e09a81a8610e16e1c055d305155309d9766016f4cc5afccb96ae6bc63.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 258, + 504, + 293 + ], + "lines": [ + { + "bbox": [ + 104, + 258, + 504, + 293 + ], + "spans": [ + { + "bbox": [ + 104, + 258, + 504, + 293 + ], + "type": "text", + "content": "Figure 1: Our three key contributions: (1) A new writing quality benchmark for creative writing evaluation, (2) Writing Quality Reward Models (WQRM) that perform strongly on this benchmark, and (3) Expert validation confirming WQRM aligns with professionals." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 304, + 506, + 448 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 304, + 506, + 448 + ], + "spans": [ + { + "bbox": [ + 104, + 304, + 506, + 448 + ], + "type": "text", + "content": "poor sentence structure, and unnecessary exposition. This stems from several challenges. Unlike math or coding, writing lacks verifiable rewards. While it would be possible to train a model to write better text by having humans label examples of \"good\" and \"bad\" writing, it is challenging due to the required expertise. 
Self-evaluation using LLMs has proven useful in reward modeling and constitutional AI (Bai et al., 2022), but relying on uncalibrated humans or LLMs for feedback (Lee et al., 2023; Gao et al., 2024) on subjective tasks like writing can lead to reward hacking (Pan et al., 2024) and alignment issues. Recent work from Panickssery et al. (2024) shows the self-aggrandizing nature of LLMs, as evidenced in Table 3 where they prefer their own writing over Nobel Prize winners' work. For the purpose of this paper we define good writing quality as writing that doesn't contain disproportionate amount of peculiar words or phrases, has fewer cliches or hackneyed expressions, is not unnecessarily ornamental as well as doesn't have a overly saccharine and polished tone or voice." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 453, + 506, + 632 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 453, + 506, + 632 + ], + "spans": [ + { + "bbox": [ + 104, + 453, + 506, + 632 + ], + "type": "text", + "content": "The surge in AI writing assistance demands urgent alignment of AI-generated text with human preferences. Recent work from Gooding et al. (2025) show how LLMs struggle to select high-quality writing actions as judged by human experts, often treating suboptimal and optimal interventions as equally acceptable. They highlight the need for models to better assess the quality and impact of suggested actions, both during generation and across multi-step refinement. Binary preference feedback between paired examples is the most common alignment method for LLMs (Christiano et al., 2017), but it has a significant drawback. The paired outputs may differ in several ways and could be equally worse in terms of quality (Casper et al., 2023; Lambert & Calandra, 2023).1 Recent work from Chakrabarty et al. (2024b) shows how identifying and editing problematic response segments effectively improves AI alignment. 
This also reflects the Reviewing phase in the cognitive process model of writing (Hayes et al., 1987), where humans evaluate and revise text. They release LAMP (Language model Authored, Manually Polished), a corpus of " + }, + { + "bbox": [ + 104, + 453, + 506, + 632 + ], + "type": "inline_equation", + "content": "1282 < AI - generated" + }, + { + "bbox": [ + 104, + 453, + 506, + 632 + ], + "type": "text", + "content": ", Expert - Edited > pairs with implicit preference (edited > original_draft) to improve AI writing (see Table 4 in Appendix A.1). Additionally, each paragraph pair includes normalized scores (1-10) reflecting writing quality before and after editing." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 635, + 506, + 704 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 635, + 506, + 704 + ], + "spans": [ + { + "bbox": [ + 104, + 635, + 506, + 704 + ], + "type": "text", + "content": "Our work builds on LAMP data to train Writing Quality Reward Models (WQRM) across multiple model families using pairwise and scalar rewards. To evaluate WQRM, we introduce the Writing Quality Benchmark (WQ), consolidating five datasets that contrast Human-Human, Human-AI, and AI-AI writing pairs reflecting real world applications. 
In addition to standard reward models, we also implement a teacher-student knowledge distillation approach, fine-tuning open-weight models (students) on LAMP with silver rationales generated from" + } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 710, + 504, + 733 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 710, + 504, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 710, + 504, + 733 + ], + "type": "text", + "content": "1Forcing annotators to choose between two undesirable outputs doesn't improve alignment. In the current design of RLHF, annotators are not given the option to reject both responses." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 506, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 506, + 138 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 506, + 138 + ], + "type": "text", + "content": "stronger LLMs (teachers) (Section 3). This framework enhances faithfulness and robustness by transferring reasoning abilities from powerful teachers to efficient students. 
Empirical results show our LAMP-trained reward models outperform proprietary LLMs like GPT-4o, o1 (OpenAI, 2024), open-weight models like DeepSeek-R1 (Guo et al., 2025), and competitive Reward-Bench models like Skywork-Reward (Liu et al., 2024)." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 143, + 506, + 287 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 143, + 506, + 287 + ], + "spans": [ + { + "bbox": [ + 104, + 143, + 506, + 287 + ], + "type": "text", + "content": "Next, we use expert edit interaction traces from LAMP data (Figure 6) to train a Chain-of-Thought editing model that identifies problematic spans, suggests edits, and combines them into a paragraph with improved writing (Section 5). Following recent work that leverages additional inference-time computation to improve LLM performance (Hosseini et al., 2024; Lightman et al., 2023; Wu et al., 2024; Ji et al., 2025; Snell et al., 2024), we employ best-of-N-sampling (Chow et al., 2024; Cobbe et al., 2021; Lightman et al., 2023) to select the best candidate from multiple edited paragraphs based on our reward model. Expert evaluation on LLM-generated responses based on writing instructions across fiction, nonfiction, and marketing confirms the correlation between expert judgment and our reward models. Experts and our best WQRM align in terms of preferences " + }, + { + "bbox": [ + 104, + 143, + 506, + 287 + ], + "type": "inline_equation", + "content": "66\\%" + }, + { + "bbox": [ + 104, + 143, + 506, + 287 + ], + "type": "text", + "content": " overall, and " + }, + { + "bbox": [ + 104, + 143, + 506, + 287 + ], + "type": "inline_equation", + "content": "72.2\\%" + }, + { + "bbox": [ + 104, + 143, + 506, + 287 + ], + "type": "text", + "content": " when the reward gap is larger than 1 point. Our results represent progress toward aligning LLMs with expert humans on subjective writing tasks, one of the most common use cases of AI (Handa et al.). 
As summarized in Figure 1:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 289, + 506, + 428 + ], + "type": "list", + "angle": 0, + "index": 6, + "blocks": [ + { + "bbox": [ + 105, + 289, + 504, + 323 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 289, + 504, + 323 + ], + "spans": [ + { + "bbox": [ + 105, + 289, + 504, + 323 + ], + "type": "text", + "content": "- We introduce the Writing Quality Benchmark (WQ) by consolidating five writing preference datasets and show how state-of-the-art LLMs and reward models perform close to random chance on writing quality assessment," + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 324, + 504, + 358 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 324, + 504, + 358 + ], + "spans": [ + { + "bbox": [ + 105, + 324, + 504, + 358 + ], + "type": "text", + "content": "- We leverage implicit preference from edits to train competitive open weight reward models (WQRM) of different sizes for judging writing quality. Our reward models achieve top performance on the WQ benchmark," + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 359, + 506, + 428 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 359, + 506, + 428 + ], + "spans": [ + { + "bbox": [ + 105, + 359, + 506, + 428 + ], + "type": "text", + "content": "- We use interaction traces from fine-grained expert edits to train an editing pipeline that improves writing quality. We further leverage additional test-time compute to generate and rank multiple edited paragraphs, allowing us to select higher-quality outputs from an initial draft based on our reward model. 
Evaluation with professionals confirms that the reward aligns with expert judgments and opens up possible avenues for improving alignment in AI-assisted writing.[2]" + } + ] + } + ], + "index": 5 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 105, + 443, + 201, + 455 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 443, + 201, + 455 + ], + "spans": [ + { + "bbox": [ + 105, + 443, + 201, + 455 + ], + "type": "text", + "content": "2 Related Work" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 468, + 506, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 468, + 506, + 666 + ], + "spans": [ + { + "bbox": [ + 104, + 468, + 506, + 666 + ], + "type": "text", + "content": "Widespread Adoption and Limitations of AI Assistance in Writing Large language models have rapidly transformed written communications across multiple sectors, with approximately " + }, + { + "bbox": [ + 104, + 468, + 506, + 666 + ], + "type": "inline_equation", + "content": "10 - 24\\%" + }, + { + "bbox": [ + 104, + 468, + 506, + 666 + ], + "type": "text", + "content": " of text in consumer complaints, corporate communications, job postings, and UN press releases being LLM-assisted by late 2024 (Liang et al., 2025). These adoption rates have stabilized after an initial surge following ChatGPT's release. Outside of technical writing, LLMs are also being used for scientific writing (Liang et al., 2024; Gero et al., 2022) as well as creative writing (Chakrabarty et al., 2024c; Ippolito et al., 2022; Yuan et al., 2022; Mirowski et al., 2023; 2024). Aligning language models with human preferences (Ouyang et al., 2022) has enabled their integration into writing tools such as Google's WorkSpace Labs, Grammarly, and Sudowrite. Despite productivity gains in using AI for writing, several limitations remain with AI-generated text. 
Prior work (Chakrabarty et al., 2024a;c; Ippolito et al., 2022; Mirowski et al., 2023; Marco et al., 2024) has shown how AI-generated text is often rife with clichés and lacks nuance, subtext, and rhetorical complexity. Through the use of syntactic templates, Shaib et al. (2024) show the repetitiveness of AI-generated text in comparison to human-written references. More recently, Russell et al. (2025) show that AI-generated text is most easily detectable by its characteristic vocabulary, followed by formulaic writing structures and lack of originality. Neither paraphrasing nor humanization effectively removes all of these signatures." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 668, + 507, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 668, + 507, + 715 + ], + "spans": [ + { + "bbox": [ + 104, + 668, + 507, + 715 + ], + "type": "text", + "content": "Human-AI Alignment in Writing Recent work from Lee et al. (2024) highlights how LLMs have transformed the processes behind writing, establishing new criteria for future AI writing assistants. Anderson et al. (2024) and Laban et al. (2023) found that large language models assisted users in generating more detailed ideas. 
However, these studies also" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 116, + 720, + 501, + 731 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 116, + 720, + 501, + 731 + ], + "spans": [ + { + "bbox": [ + 116, + 720, + 501, + 731 + ], + "type": "text", + "content": "2Our code, data, and models are available at https://github.com/salesforce/creativity_eval/" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 506, + 247 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 506, + 247 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 506, + 247 + ], + "type": "text", + "content": "found that the outputs were less semantically distinct across different users (Padmakumar & He, 2023), and participants reported feeling diminished responsibility for the ideas they produced. In a similar vein, Li et al. (2024) explore people's attitudes toward AI writing assistants, finding that while many value and prefer AI assistance for creative tasks and productivity gains, this comes with potential drawbacks in reduced accountability and diversity in writing outcomes. Liu et al. (2025) introduce eRevise+RF, an automated writing evaluation system designed to assess student essay revisions and offer formative feedback. 
The system was deployed with 406 students across three schools, demonstrating effectiveness in evaluating evidence usage, identifying revisions, and determining revision success. Prior work from Pan et al. (2024) shows that language models can enhance outputs through feedback. However, iterative self-refinement using another language model as evaluator may lead to reward hacking, where models exploit evaluator weaknesses. Chakrabarty et al. (2024b) shows how LLMs across different model families share common writing idiosyncrasies and how automatically editing these idiosyncrasies improves alignment, based on a behavioral study with 12 writers." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 252, + 504, + 287 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 252, + 504, + 287 + ], + "spans": [ + { + "bbox": [ + 104, + 252, + 504, + 287 + ], + "type": "text", + "content": "Unlike prior work that has focused on detecting or addressing issues in AI writing, our work introduces Writing Quality Reward Models (WQRMs) trained on expert edits that outperform state-of-the-art LLMs on a Writing Quality benchmark." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 301, + 302, + 316 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 301, + 302, + 316 + ], + "spans": [ + { + "bbox": [ + 105, + 301, + 302, + 316 + ], + "type": "text", + "content": "3 Writing Quality Reward Models" + } + ] + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 106, + 317, + 304, + 439 + ], + "blocks": [ + { + "bbox": [ + 106, + 317, + 304, + 439 + ], + "lines": [ + { + "bbox": [ + 106, + 317, + 304, + 439 + ], + "spans": [ + { + "bbox": [ + 106, + 317, + 304, + 439 + ], + "type": "image", + "image_path": "3ec93e4d1e723a8af4314b35f784d413ae1a9eefe64cfe4cb04e3f8df32e3b73.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 449, + 307, + 483 + ], + "lines": [ + { + "bbox": [ + 104, + 449, + 307, + 483 + ], + "spans": [ + { + "bbox": [ + 104, + 449, + 307, + 483 + ], + "type": "text", + "content": "Figure 2: Transforming LAMP annotations into classification and regression data points used during fine-tuning of WQRM models." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 312, + 302, + 506, + 501 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 302, + 506, + 501 + ], + "spans": [ + { + "bbox": [ + 312, + 302, + 506, + 501 + ], + "type": "text", + "content": "We rely on the LAMP (Language model Authored, Manually Polished) corpus from Chakrabarty et al. (2024b) to train reward models. As illustrated in Figure 2, each sample in LAMP consists of a writing instruction and two paragraphs that match this instruction. The paragraphs in LAMP range from 150 to 400 words, and span across fiction and non-fiction. Table 4 in Appendix A.1 shows a sample from LAMP, highlighting the edits implemented by an expert to improve writing quality. 
We use three methods to transform LAMP samples into training and validation data points for our models: pairwise (P), scalar (R), and combined (PR). With the P method, each data point presents two paragraphs as input (1 and 2) and requires a binary classification output" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 500, + 506, + 644 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 500, + 506, + 644 + ], + "spans": [ + { + "bbox": [ + 104, + 500, + 506, + 644 + ], + "type": "text", + "content": "indicating which paragraph has higher writing quality (i.e., the output is 1 or 2). Each LAMP sample is duplicated into two P data points by considering both paragraph orders (AI-generated, Expert-Edited " + }, + { + "bbox": [ + 104, + 500, + 506, + 644 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 104, + 500, + 506, + 644 + ], + "type": "text", + "content": " 2) and (Expert-Edited, AI-generated " + }, + { + "bbox": [ + 104, + 500, + 506, + 644 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 104, + 500, + 506, + 644 + ], + "type": "text", + "content": " 1). With the R method, each data point takes a single paragraph as input and outputs a regression value predicting the quality score of that paragraph. Since each LAMP sample contains two paragraphs (before and after edit), it generates two R data points. The PR method combines both approaches, yielding four data points per LAMP sample (two from P and two from R). There are a total of 1,282 samples in LAMP, and we follow the authors' split of 1,000 training, 67 validation, and 215 test samples. Applying the data transformation described above, the resulting P, R, and PR training sets consist of 2,000, 2,000, and 4,000 data points, respectively. 
For our experiments, we trained both generative LLMs (Llama3.1 (Dubey et al., 2024)) and encoder-only models (ModernBERT (Warner et al., 2024))." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "text", + "content": "Encoder-Only WQRM We follow the standard approach introduced in the original BERT paper (Devlin et al., 2019) to add and finetune two task-specific heads to a ModernBERT-Large model (Warner et al., 2024). The input data points contain either one paragraph (for R data points) or two paragraphs (for P data points), which are encoded jointly with a pre-defined separator token when needed. For each paragraph, we compute a \"paragraph vector\" by pooling the last layer's activations across all tokens in that paragraph. These paragraph vectors serve as input to either a regression (R) or classification (P) head. 
The" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 506, + 215 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 506, + 215 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 506, + 215 + ], + "type": "text", + "content": "regression head transforms the vector through a learned linear projection from the model's inner dimension to a scalar, followed by a scaled sigmoid to align with the 1-10 score range. The classification head is parameter-free, using a cosine similarity operation between the two paragraph vectors. We use mean-squared error loss for R data points and cross-entropy loss for P data points. Following convention for encoder-only models, we finetune the entire model's weights (Devlin et al., 2019). We selected ModernBERT-Large, the largest available model, for our experiments. We fine-tuned three variants: MBERT-WQRM-P, MBERT-WQRM-R, and MBERT-WQRM-PR, each on their corresponding data variants. Hyperparameters, including learning rate and number of epochs, were optimized by minimizing validation loss. PR models can be used in either P- or R-mode at test-time. Initial evaluation indicated that PR models achieve higher performance in R-mode, and as such we used all PR models in R-mode by default during evaluation." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 217, + 506, + 352 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 217, + 506, + 352 + ], + "spans": [ + { + "bbox": [ + 104, + 217, + 506, + 352 + ], + "type": "text", + "content": "Generative WQRM We finetune generative transformer architectures by converting classification and regression tasks to sequence-to-sequence problems using a JSON output format (Table 5). We employ QLoRA (Dettmers et al., 2023) parameter-efficient tuning with FSDP (Zhao et al., 2023) and cross-entropy loss. Generative methods can produce natural-language rationales alongside predictions for interpretability. Wiegrefe et al. (2020) demonstrated that label-rationale association is essential for response faithfulness, while Ludan et al. (2023) and Hase & Bansal (2021) argued for incorporating explanations in model input/output to improve robustness against spurious cues. Since LAMP lacks expert rationales, we augment it with LLM-generated silver rationales. We collected five examples from professional writers showing either paragraph strength contrasts (P-style) or holistic critiques/praise (R-style), instructing them to cite specific excerpts. These expert rationales serve as demonstrations for Claude3.5 Sonnet3 to generate rationales (examples in Table 6, Appendix A.3)." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 355, + 506, + 445 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 355, + 506, + 445 + ], + "spans": [ + { + "bbox": [ + 104, + 355, + 506, + 445 + ], + "type": "text", + "content": "The rationale augmentation is then used in two variants, either providing the rationales on the input " + }, + { + "bbox": [ + 104, + 355, + 506, + 445 + ], + "type": "inline_equation", + "content": "(\\mathrm{IR}\\rightarrow \\mathrm{O})" + }, + { + "bbox": [ + 104, + 355, + 506, + 445 + ], + "type": "text", + "content": ", or requiring the generative model to produce the rationale as part of its output " + }, + { + "bbox": [ + 104, + 355, + 506, + 445 + ], + "type": "inline_equation", + "content": "(\\mathrm{I}\\rightarrow \\mathrm{RO})" + }, + { + "bbox": [ + 104, + 355, + 506, + 445 + ], + "type": "text", + "content": ". We note that rationales are not available at test-time, and are only included during training as an augmentation technique. 
We finetune a total of seven variants, all based on the Llama 3.1 70B model: Llama-WQRM-P, Llama-WQRM-R, Llama-WQRM-PR, Llama-WQRM-P-IR→O and Llama-WQRM-P-I→RO, Llama-WQRM-PR-IR→O and Llama-WQRM-PR-I→RO, based on different versions of the training data, and tune hyperparameters by minimizing validation loss." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 453, + 302, + 467 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 453, + 302, + 467 + ], + "spans": [ + { + "bbox": [ + 104, + 453, + 302, + 467 + ], + "type": "text", + "content": "4 The Writing Quality Benchmark" + } + ] + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 106, + 479, + 305, + 544 + ], + "blocks": [ + { + "bbox": [ + 106, + 479, + 305, + 544 + ], + "lines": [ + { + "bbox": [ + 106, + 479, + 305, + 544 + ], + "spans": [ + { + "bbox": [ + 106, + 479, + 305, + 544 + ], + "type": "table", + "html": "
DatasetPair OriginAnnotatorLenN
Art or ArtificeAI-AI / AI-HumanExpert1.5-3k144
LAMP-testAI-AI / AI-HumanExpert200-4001,206
Style MimicHuman-HumanExpert200-400300
Synth. MirrorAI-HumanExpert200-4001,120
LM ArenaAI-AICrowd200-2.5k1,959
", "image_path": "829672b917337c9a73eb40af91f6ec69e742a115a2d9e839b5749586c9021915.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 552, + 308, + 612 + ], + "lines": [ + { + "bbox": [ + 104, + 552, + 308, + 612 + ], + "spans": [ + { + "bbox": [ + 104, + 552, + 308, + 612 + ], + "type": "text", + "content": "Table 1: Writing Quality benchmark composition. Pair Origin: evaluated pairs are AI-generated (AI) or human-written (Human); Len: #words in evaluated responses; N: total evaluation pairs contributed to the benchmark." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 312, + 465, + 506, + 631 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 465, + 506, + 631 + ], + "spans": [ + { + "bbox": [ + 312, + 465, + 506, + 631 + ], + "type": "text", + "content": "We create the first benchmark centered on the task of writing quality assessment by collecting five relevant datasets and standardizing their data formats into a pairwise preference task. The task in the benchmark consists of a writing instruction and two writing responses, with a binary label indicating which of the two responses has higher writing quality. Table 1 lists the five datasets we selected for the benchmark, along with key properties of each dataset that lead to a comprehensive benchmark for writing quality. 
We include three datasets that involve AI-AI comparisons (Art or Artifice (Chakrabarty et al., 2024a), LAMP-test" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 630, + 504, + 687 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 630, + 504, + 687 + ], + "spans": [ + { + "bbox": [ + 104, + 630, + 504, + 687 + ], + "type": "text", + "content": "(Chakrabarty et al., 2024b), and LM Arena (Zheng et al., 2023)), three that involve AI-Human comparisons (Art or Artifice, LAMP-test, and Synthetic Mirror), and one that involves Human-Human comparisons (Style Mimic) (Anonymous, 2025). This diversity ensures that models that perform well on the benchmark can judge writing quality regardless of whether the response was LLM-generated or human-written." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 691, + 504, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 691, + 504, + 715 + ], + "spans": [ + { + "bbox": [ + 104, + 691, + 504, + 715 + ], + "type": "text", + "content": "To assess writing quality, prior work has argued for evaluation by professionals (i.e., those with writing experience). Nevertheless, some writing quality preference datasets are based on" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 116, + 720, + 443, + 733 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 116, + 720, + 443, + 733 + ], + "spans": [ + { + "bbox": [ + 116, + 720, + 443, + 733 + ], + "type": "text", + "content": "3Considered a top-performing model for writing tasks at the time of experiments." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 506, + 182 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 506, + 182 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 506, + 182 + ], + "type": "text", + "content": "crowd-sourced judgments. We include four datasets based on expert judgments and one dataset based on crowd-sourced annotation (LM Arena) to represent both perspectives in the benchmark. Finally, we selected two datasets with long responses (Art or Artifice, LM Arena) and three with shorter responses ranging from 200-400 words, ensuring that models that perform well on the benchmark are capable of judging writing quality irrespective of length. Appendix A.4 details the procedure we followed to extract and standardize each dataset. Appendix A.5 provides an analysis we conducted on the relative difficulty of each dataset in the benchmark, finding that the five selected datasets provide a breadth of coverage in terms of difficulty." + } + ] + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 145, + 194, + 463, + 399 + ], + "blocks": [ + { + "bbox": [ + 265, + 182, + 392, + 194 + ], + "lines": [ + { + "bbox": [ + 265, + 182, + 392, + 194 + ], + "spans": [ + { + "bbox": [ + 265, + 182, + 392, + 194 + ], + "type": "text", + "content": "Writing Quality Benchmark" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 145, + 194, + 463, + 399 + ], + "lines": [ + { + "bbox": [ + 145, + 194, + 463, + 399 + ], + "spans": [ + { + "bbox": [ + 145, + 194, + 463, + 399 + ], + "type": "table", + "html": "
ModelSynthetic MirrorArt or ArtificeLAMPStyle MimicLM ArenaOverall (↑)
AI-HumanAI-AI / AI-HumanAI-AI / AI-HumanHuman-HumanAI-AIAll
MBERT-WQRM-PR99.880.672.667.351.074.3
MBERT-WQRM-R100.080.676.159.351.073.4
MBERT-WQRM-P99.554.271.267.046.867.7
Llama3.1-P-IR→O100.080.574.943.052.870.2
Llama3.1-PR-IR→O99.669.473.754.350.169.4
Llama3.1-PR-I→RO99.176.371.742.655.268.9
Llama3.1-P-I→RO99.975.174.138.649.167.3
Llama3.1 (70b)-PR94.852.071.340.644.360.6
Llama3.1 (70b)-P88.145.171.735.647.757.6
Llama3.1 (70b)-R44.850.040.350.054.347.9
Pangram100.072.656.547.348.465.0
O367.785.441.467.559.664.3
Skywork-8B-v0.290.368.154.234.055.860.5
GPT-4o (5FS)39.568.840.367.355.554.3
O125.867.439.868.756.751.7
DeepSeek-r131.554.939.247.357.046.0
GPT-4o7.556.237.847.755.440.9
", + "image_path": "3d68dc97a876f3af5134e0b6f154411a088aeaaa45f20a7956ed0e3b8a5c7524.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 407, + 506, + 433 + ], + "lines": [ + { + "bbox": [ + 104, + 407, + 506, + 433 + ], + "spans": [ + { + "bbox": [ + 104, + 407, + 506, + 433 + ], + "type": "text", + "content": "Table 2: Writing Quality Benchmark results. We evaluate zero-shot and few-shot LLMs, generic reward models, AI-detection models, and our fine-tuned models." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 105, + 441, + 263, + 454 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 441, + 263, + 454 + ], + "spans": [ + { + "bbox": [ + 105, + 441, + 263, + 454 + ], + "type": "text", + "content": "4.1 Experimental Results on WQ" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 462, + 506, + 597 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 462, + 506, + 597 + ], + "spans": [ + { + "bbox": [ + 104, + 462, + 506, + 597 + ], + "type": "text", + "content": "Our experiments on the WQ benchmark include four classes of models. First, Zero-Shot (ZS) and Few-Shot (FS) methods with top-performing instruction-tuned LLMs. We included both non-reasoning (GPT-4o) and reasoning models (Deepseek-R1, O1). Second, a top-performing generic reward model - SkyWork-8b-v0.2 - based on results on the RewardBench leaderboard (Lambert et al., 2024). Third, we include the Pangram AI-detector " + }, + { + "bbox": [ + 104, + 462, + 506, + 597 + ], + "type": "inline_equation", + "content": "^4" + }, + { + "bbox": [ + 104, + 462, + 506, + 597 + ], + "type": "text", + "content": ", accessed through API. Finally, the trained WQRM models in generative and encoder-only settings as described in Section 3. 
Models that can produce pairwise judgments (such as SkyWork or WQRM-P models) were used as-is, but for models that produce scalar rewards (WQRM-R, Pangram), a scalar reward was computed for each response, and the two rewards were compared to emit a pairwise preference. Scalar rewards can theoretically lead to a tie (a score difference of less than an epsilon like 0.001), but we observe few of these in practice (less than " + }, + { + "bbox": [ + 104, + 462, + 506, + 597 + ], + "type": "inline_equation", + "content": "0.1\\%" + }, + { + "bbox": [ + 104, + 462, + 506, + 597 + ], + "type": "text", + "content": " of pairs), and resolve those randomly." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 601, + 506, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 601, + 506, + 715 + ], + "spans": [ + { + "bbox": [ + 104, + 601, + 506, + 715 + ], + "type": "text", + "content": "Experimental results are summarized in Table 2. First, we find that all the LLMs used in zero-shot settings perform below or a few percentage points above a random baseline of " + }, + { + "bbox": [ + 104, + 601, + 506, + 715 + ], + "type": "inline_equation", + "content": "50\\%" + }, + { + "bbox": [ + 104, + 601, + 506, + 715 + ], + "type": "text", + "content": ". The performance is particularly low on portions of WQ that involve AI-human preference pairs. This confirms prior findings that LLMs used in LLM-as-a-judge settings tend to prefer AI-generated text over human writing (Panickssery et al., 2024). The O1 and R1 reasoning models do not significantly outperform their non-reasoning counterparts, indicating that out-of-the-box CoT-style reasoning, useful for math or coding tasks, doesn't improve writing quality assessment. O3 shows some promise, with improvements on Synthetic Mirror and Art or Artifice. 
Finally, adding five few-shot examples to GPT-4o does help improve performance from 40.9 to 54.3, however further experiments with additional" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 116, + 720, + 317, + 731 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 116, + 720, + 317, + 731 + ], + "spans": [ + { + "bbox": [ + 116, + 720, + 317, + 731 + ], + "type": "text", + "content": "4https://www.pangram.com/dashboard?type " + }, + { + "bbox": [ + 116, + 720, + 317, + 731 + ], + "type": "inline_equation", + "content": "\\equiv" + }, + { + "bbox": [ + 116, + 720, + 317, + 731 + ], + "type": "text", + "content": " text" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 504, + 106 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 504, + 106 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 504, + 106 + ], + "type": "text", + "content": "in-context examples did not lead to further gains, confirming that few-shot examples in the instruction are not sufficient to achieve strong performance on WQ." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 110, + 506, + 166 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 110, + 506, + 166 + ], + "spans": [ + { + "bbox": [ + 104, + 110, + 506, + 166 + ], + "type": "text", + "content": "The generic reward model – Skywork-8b-v0.2 – achieves an overall accuracy of 60.5, with strong performance on Synthetic Mirror and Art or Artifice. Though better than random, the overall performance is much lower than the " + }, + { + "bbox": [ + 104, + 110, + 506, + 166 + ], + "type": "inline_equation", + "content": "93\\%" + }, + { + "bbox": [ + 104, + 110, + 506, + 166 + ], + "type": "text", + "content": " performance the model achieves on RewardBench, indicating that reward models geared for instruction-following evaluation are not effective at writing quality assessment out-of-the-box." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 171, + 506, + 239 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 171, + 506, + 239 + ], + "spans": [ + { + "bbox": [ + 104, + 171, + 506, + 239 + ], + "type": "text", + "content": "The Pangram AI detection system achieves a total performance of " + }, + { + "bbox": [ + 104, + 171, + 506, + 239 + ], + "type": "inline_equation", + "content": "65.0\\%" + }, + { + "bbox": [ + 104, + 171, + 506, + 239 + ], + "type": "text", + "content": ", the top performance for untrained models. Pangram achieves near-perfect performance on Synthetic Mirror and the AI-Human pairs of Art or Artifice. On samples that do not involve distinguishing between AI and human text, Pangram achieves near-random performance. In other words, AI-detection tools only correlate with writing quality assessment when an AI-generated text is judged to be worse than human-written text." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 243, + 506, + 332 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 243, + 506, + 332 + ], + "spans": [ + { + "bbox": [ + 104, + 243, + 506, + 332 + ], + "type": "text", + "content": "Finally, the trained WQRM models achieve top performance on the benchmark. The Llama-based models achieve their strongest performance in the " + }, + { + "bbox": [ + 104, + 243, + 506, + 332 + ], + "type": "inline_equation", + "content": "\\mathrm{IR} \\rightarrow \\mathrm{O}" + }, + { + "bbox": [ + 104, + 243, + 506, + 332 + ], + "type": "text", + "content": " settings, confirming that augmenting the training data with rationales is beneficial for models that can generate rationales alongside their prediction. The ModernBERT-based models achieve the highest overall accuracy of " + }, + { + "bbox": [ + 104, + 243, + 506, + 332 + ], + "type": "inline_equation", + "content": "74.3\\%" + }, + { + "bbox": [ + 104, + 243, + 506, + 332 + ], + "type": "text", + "content": ", with the PR variant outperforming the P and R models, indicating that pairwise and reward-based training can be complementary. While it's surprising to see a smaller model outperform Llama3.1-70B, this could be due to PEFT or the way the loss function is optimized. Future work can focus on bridging this gap." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 336, + 507, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 336, + 507, + 437 + ], + "spans": [ + { + "bbox": [ + 104, + 336, + 507, + 437 + ], + "type": "text", + "content": "We observe that generative WQRM models perform best in P-mode, whereas encoder models perform best in R-mode. We offer a hypothesis for this reversal of relationship, related to the choice of loss. 
The generative models (Llama) are trained with a sequence-to-sequence loss, whereas the encoder-only models (MBERT) are trained with custom losses (pairwise classification for P, mean-squared error for R). In other words, Llama training on the reward-based data is more similar to 10-way classification than actual score regression, whereas the MBERT training makes better use of the reward-based data. This leads the MBERT-R models to outperform MBERT-P models, whereas the reverse is true for the Llama models, as they are not able to properly take advantage of the R-based data." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 441, + 507, + 586 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 441, + 507, + 586 + ], + "spans": [ + { + "bbox": [ + 104, + 441, + 507, + 586 + ], + "type": "text", + "content": "Looking at performance on individual datasets, Synthetic Mirror is the easiest dataset, with eight models achieving near-perfect performance. Some models achieve " + }, + { + "bbox": [ + 104, + 441, + 507, + 586 + ], + "type": "inline_equation", + "content": "80\\%+" + }, + { + "bbox": [ + 104, + 441, + 507, + 586 + ], + "type": "text", + "content": " performance on Art or Artifice, indicating that long-context evaluation is challenging but achievable. Style Mimic and LM Arena are the most challenging in terms of accuracy. Style Mimic is likely challenging as it is the only dataset whose comparisons do not involve AI-generated text, but rather two relatively high-quality human-written candidates. LM Arena is challenging to all systems, with top performance at " + }, + { + "bbox": [ + 104, + 441, + 507, + 586 + ], + "type": "inline_equation", + "content": "57\\%" + }, + { + "bbox": [ + 104, + 441, + 507, + 586 + ], + "type": "text", + "content": " by Deepseek-R1. This low performance could be due to the crowd-sourced nature of LM Arena, with the dataset representing much broader and potentially noisier judgments. 
Though our trained WQRM models outperform baselines by almost " + }, + { + "bbox": [ + 104, + 441, + 507, + 586 + ], + "type": "inline_equation", + "content": "10\\%+" + }, + { + "bbox": [ + 104, + 441, + 507, + 586 + ], + "type": "text", + "content": " overall, there remains wide room for improvement: writing quality assessment remains an open challenge to the community. Additional analysis in upcoming Sections refers to the top-performing model - MBERT-WQRM-PR - simply as WQRM." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 593, + 354, + 608 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 593, + 354, + 608 + ], + "spans": [ + { + "bbox": [ + 105, + 593, + 354, + 608 + ], + "type": "text", + "content": "5 Editing Pipeline with Test-Time Compute" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 618, + 506, + 674 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 618, + 506, + 674 + ], + "spans": [ + { + "bbox": [ + 104, + 618, + 506, + 674 + ], + "type": "text", + "content": "To better understand the practical value of the WQRM model, we integrate it into a text-editing pipeline to produce LLM-generated candidates of higher-quality according to WQRM scores. We first introduce the editing pipeline and candidate generation procedure, and then describe the large-scale preference annotation we conducted with professional writers to validate WQRM as part of an editing pipeline." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 677, + 331, + 690 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 677, + 331, + 690 + ], + "spans": [ + { + "bbox": [ + 105, + 677, + 331, + 690 + ], + "type": "text", + "content": "5.1 Generating edits via Supervised Finetuning" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 698, + 507, + 734 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 698, + 507, + 734 + ], + "spans": [ + { + "bbox": [ + 104, + 698, + 507, + 734 + ], + "type": "text", + "content": "Prior work from Chakrabarty et al. (2024b) shows experimentally that LLMs' text idiosyncrasies (cliches, redundancy, lack of subtext, etc.) can be mitigated through self-editing in an in-context setup. Borrowing motivation from them we teach LLMs how to improve" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 301, + 750, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 750, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 301, + 750, + 309, + 760 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 506, + 228 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 506, + 228 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 506, + 228 + ], + "type": "text", + "content": "their response via edits. Figure 6 illustrates the three components of the editing pipeline. 
Given a first draft response to an instruction from any given LLM, the first step consists of identifying and listing idiosyncrasies: spans in the first draft that can be rephrased to improve overall writing quality. For each identified idiosyncrasy, a second stage consists in rewriting the idiosyncrasy. This is framed as an executable edit (Laban et al., 2023), where each edit consists of replacing an original string in a draft with an improved version. The third step simply executes all edits (by applying a series of string replace operations) to obtain the final edited draft. While Chakrabarty et al. (2024b) implemented this through prompt-chaining (Wu et al., 2022) with few-shot examples, we improved efficiency by supervised fine-tuning of GPT-4o and Llama3.1 70B based on the entire LAMP training set. The training input consists of the first draft alongside the entire edit interaction trace (detect, rewrite, execute) in a step-by-step chain of thought prompt, and the output is the edited paragraph. See Appendix A.7 for an example COT prompt." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 230, + 405, + 243 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 230, + 405, + 243 + ], + "spans": [ + { + "bbox": [ + 104, + 230, + 405, + 243 + ], + "type": "text", + "content": "5.2 Selecting edited response by leveraging Test-Time Compute" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 243, + 506, + 368 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 243, + 506, + 368 + ], + "spans": [ + { + "bbox": [ + 104, + 243, + 506, + 368 + ], + "type": "text", + "content": "Recent work from Snell et al. (2024) shows that test-time compute can be scaled optimally by using a reward model to search over the space of solutions. This approach typically involves generating multiple candidate responses and using a verifier to select an optimal response (Cobbe et al., 2021). 
The most popular technique to increase test-time compute is Best-of-N sampling also known as Rejection Sampling, in which N candidates are generated independently. The reward model is then used to score each candidate, and the top-scoring candidate is selected. While test-time scaling is effective for reasoning tasks, our work aims to measure whether it is a practical strategy to improve human-AI alignment in subjective tasks such as writing. Next we describe the validation study with experts to measure how well calibrated our WQRMs are to human judgment and whether additional test-time computation leads to meaningful improvements in AI writing quality." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 373, + 371, + 385 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 373, + 371, + 385 + ], + "spans": [ + { + "bbox": [ + 104, + 373, + 371, + 385 + ], + "type": "text", + "content": "6 How well calibrated are our reward models?" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 389, + 506, + 582 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 389, + 506, + 582 + ], + "spans": [ + { + "bbox": [ + 104, + 389, + 506, + 582 + ], + "type": "text", + "content": "We generated 100 draft responses (50 GPT4-o, 50 Llama3.1 70B) based on 90 writing instructions spanning 3 domains: literary fiction, non-fiction, and product marketing. For literary fiction and non-fiction we create the instructions through instruction back-translation (Li et al., 2023) conditioned on expert-written paragraphs in Anonymous (2025) and news articles in the data from Russell et al. (2025). Marketing writing instructions were based on products recommended in WireCutter articles across the Home, Kitchen and Tech sections. The right portion of Figure 1 summarizes the process we follow to leverage test-time compute. 
Specifically, we obtain a first draft from an LLM (GPT4o or Llama3.1 70B) followed by drawing " + }, + { + "bbox": [ + 104, + 389, + 506, + 582 + ], + "type": "inline_equation", + "content": "N = 20" + }, + { + "bbox": [ + 104, + 389, + 506, + 582 + ], + "type": "text", + "content": " candidate edited responses from the respective SFT model (Section 5.1)6, and score each candidate with the WQRM model. We filter out any candidate that scores lower than the first draft, and then form response triplets by selecting the first draft, a randomly-selected edited response (random edit), and the Best-of-N candidate response according to WQRM (Best Edit) (See example triplet in Table 9). We recruited 9 professional writers through mailing lists from top MFA programs in the US. They were asked to rank the three responses based on their overall quality (See Figure 8 for interface). Each response triplet was annotated by three experts, whose rankings we aggregated into a majority rank. Participants completed annotation in batches of 10 triplets at a time, and were paid $100 per batch." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 585, + 201, + 597 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 585, + 201, + 597 + ], + "spans": [ + { + "bbox": [ + 105, + 585, + 201, + 597 + ], + "type": "text", + "content": "6.1 Study Findings" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 605, + 506, + 673 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 605, + 506, + 673 + ], + "spans": [ + { + "bbox": [ + 104, + 605, + 506, + 673 + ], + "type": "text", + "content": "Figure 3 summarizes findings from the expert annotation. In Figure 3a, we plot the distribution of rankings across all triplets. Best Edit candidates were most preferred overall with an average rank of 1.58, followed by random edit (2.09) and first draft (2.26). The breakdown of rankings across domains (fiction, non-fiction, marketing) or LLM (GPT-4o vs. 
Llama 3.1) is presented in Appendix A.8. In short, Best Edit achieves the top rank in all conditions, confirming the generalization of WQRM scores across conditions." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 677, + 506, + 702 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 677, + 506, + 702 + ], + "spans": [ + { + "bbox": [ + 104, + 677, + 506, + 702 + ], + "type": "text", + "content": "If the reward model is well-calibrated, the WQRM score gap between responses should indicate their qualitative difference. For example, responses scoring 4 and 6 should have a larger" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 116, + 709, + 284, + 721 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 116, + 709, + 284, + 721 + ], + "spans": [ + { + "bbox": [ + 116, + 709, + 284, + 721 + ], + "type": "text", + "content": "5https://www.nytimes.com/wirecutter/" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 118, + 721, + 333, + 731 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 118, + 721, + 333, + 731 + ], + "spans": [ + { + "bbox": [ + 118, + 721, + 333, + 731 + ], + "type": "text", + "content": "6If first draft is from GPT4o we use GPT4o SFT model" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + 
"para_blocks": [ + { + "type": "image", + "bbox": [ + 111, + 83, + 271, + 190 + ], + "blocks": [ + { + "bbox": [ + 111, + 83, + 271, + 190 + ], + "lines": [ + { + "bbox": [ + 111, + 83, + 271, + 190 + ], + "spans": [ + { + "bbox": [ + 111, + 83, + 271, + 190 + ], + "type": "image", + "image_path": "1db88ddb7c3d6e64e6370de4840630adc086036f36b58562c5aefb632bacc535.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 127, + 197, + 253, + 208 + ], + "lines": [ + { + "bbox": [ + 127, + 197, + 253, + 208 + ], + "spans": [ + { + "bbox": [ + 127, + 197, + 253, + 208 + ], + "type": "text", + "content": "(a) Expert Ranking Distribution" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 278, + 82, + 387, + 190 + ], + "blocks": [ + { + "bbox": [ + 278, + 82, + 387, + 190 + ], + "lines": [ + { + "bbox": [ + 278, + 82, + 387, + 190 + ], + "spans": [ + { + "bbox": [ + 278, + 82, + 387, + 190 + ], + "type": "image", + "image_path": "70c90c2fb2fa11ab7926ba0d130319324863f723979571d891269e6977f87c7f.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 287, + 197, + 379, + 209 + ], + "lines": [ + { + "bbox": [ + 287, + 197, + 379, + 209 + ], + "spans": [ + { + "bbox": [ + 287, + 197, + 379, + 209 + ], + "type": "text", + "content": "(b) Gap vs. 
Agreement" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 394, + 83, + 502, + 190 + ], + "blocks": [ + { + "bbox": [ + 394, + 83, + 502, + 190 + ], + "lines": [ + { + "bbox": [ + 394, + 83, + 502, + 190 + ], + "spans": [ + { + "bbox": [ + 394, + 83, + 502, + 190 + ], + "type": "image", + "image_path": "1bfddae7f9254bd920a3d04ad7dfbe9b3a91c4130fc9a7f98fbab4c9ad9d0f20.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 402, + 197, + 493, + 209 + ], + "lines": [ + { + "bbox": [ + 402, + 197, + 493, + 209 + ], + "spans": [ + { + "bbox": [ + 402, + 197, + 493, + 209 + ], + "type": "text", + "content": "(c) Sensitivity Analysis" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 109, + 274, + 303, + 441 + ], + "blocks": [ + { + "bbox": [ + 104, + 217, + 504, + 251 + ], + "lines": [ + { + "bbox": [ + 104, + 217, + 504, + 251 + ], + "spans": [ + { + "bbox": [ + 104, + 217, + 504, + 251 + ], + "type": "text", + "content": "Figure 3: Results and analysis of WQRM based: (a) distribution of preference based on 300 expert triplet rankings, (b) calibration between gap in WQRM scores and matching expert preference, and (c) applying experts edits gradually to a draft leads to gradual reward gains." 
+ } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 109, + 274, + 303, + 441 + ], + "lines": [ + { + "bbox": [ + 109, + 274, + 303, + 441 + ], + "spans": [ + { + "bbox": [ + 109, + 274, + 303, + 441 + ], + "type": "image", + "image_path": "16c263323e5750f59850badc6d20ea1146d6c779a721a6434008265c6a6a5153.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 125, + 448, + 288, + 460 + ], + "lines": [ + { + "bbox": [ + 125, + 448, + 288, + 460 + ], + "spans": [ + { + "bbox": [ + 125, + 448, + 288, + 460 + ], + "type": "text", + "content": "(a) Less content detail in writing prompt" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 104, + 468, + 504, + 503 + ], + "lines": [ + { + "bbox": [ + 104, + 468, + 504, + 503 + ], + "spans": [ + { + "bbox": [ + 104, + 468, + 504, + 503 + ], + "type": "text", + "content": "Figure 4: Writing quality analysis of human-written and LLM-generated texts according to WQRM on (a) less and (b) more content detail in the writing prompt. Prompts with less content detail average 30 words, whereas prompts with more content detail average 180." 
+ } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 322, + 275, + 503, + 441 + ], + "blocks": [ + { + "bbox": [ + 322, + 275, + 503, + 441 + ], + "lines": [ + { + "bbox": [ + 322, + 275, + 503, + 441 + ], + "spans": [ + { + "bbox": [ + 322, + 275, + 503, + 441 + ], + "type": "image", + "image_path": "9479f61330a0b135d11098f13649f4ebd8c3ea141f85b130c54586eaff7c65f7.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 329, + 448, + 495, + 460 + ], + "lines": [ + { + "bbox": [ + 329, + 448, + 495, + 460 + ], + "spans": [ + { + "bbox": [ + 329, + 448, + 495, + 460 + ], + "type": "text", + "content": "(b) More content detail in writing prompt" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 517, + 506, + 618 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 517, + 506, + 618 + ], + "spans": [ + { + "bbox": [ + 104, + 517, + 506, + 618 + ], + "type": "text", + "content": "quality gap than those scoring 4 and 4.5. To inspect WQRM calibration, we computed the WQRM gap between all annotated response pairs and plotted it against expert annotation agreement. As shown in Figure 3b, WQRM gap positively correlates with expert agreement: when responses differ by " + }, + { + "bbox": [ + 104, + 517, + 506, + 618 + ], + "type": "inline_equation", + "content": "\\leq 0.5" + }, + { + "bbox": [ + 104, + 517, + 506, + 618 + ], + "type": "text", + "content": " points, individual experts prefer the higher-scoring response only " + }, + { + "bbox": [ + 104, + 517, + 506, + 618 + ], + "type": "inline_equation", + "content": "55\\%" + }, + { + "bbox": [ + 104, + 517, + 506, + 618 + ], + "type": "text", + "content": " of the time. 
When the gap exceeds 3.0, this increases to " + }, + { + "bbox": [ + 104, + 517, + 506, + 618 + ], + "type": "inline_equation", + "content": "80\\%" + }, + { + "bbox": [ + 104, + 517, + 506, + 618 + ], + "type": "text", + "content": ". Agreement with majority rank based on three expert annotations (green line) shows even stronger positive correlation. In short, we find evidence that WQRM is well-calibrated: a wider gap in scores between two responses is evidence that an expert (or group of experts) would be more likely to prefer the higher-scoring response over the lower-scoring response." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 622, + 506, + 742 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 622, + 506, + 742 + ], + "spans": [ + { + "bbox": [ + 104, + 622, + 506, + 742 + ], + "type": "text", + "content": "Besides calibration, we analyze the sensitivity of the WQRM model to minor edits and their impact on writing quality. The LAMP dataset consists of drafts that are edited by expert writers to improve writing, with samples comprising eight edits per passage on average. We implement a gradual version of the LAMP-test set, where each expert edit is reversed, and we execute them one at a time, computing the WQRM score at each intermediate step. Results from the gradual LAMP-test are summarized in Figure 3c: each time an additional edit is implemented, the median WQRM score increases by 0.2, even though WQRM was not trained on intermediate responses and only saw samples where no edit or all edits had been applied. In summary, we find evidence that minor edits to a response will lead to small but significant changes in WQRM scores, indicative of a fine sensitivity of the reward model." 
+ } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 750, + 308, + 759 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 750, + 308, + 759 + ], + "spans": [ + { + "bbox": [ + 302, + 750, + 308, + 759 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 80, + 350, + 95 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 80, + 350, + 95 + ], + "spans": [ + { + "bbox": [ + 104, + 80, + 350, + 95 + ], + "type": "text", + "content": "7 How does content affect writing quality?" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 105, + 506, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 105, + 506, + 248 + ], + "spans": [ + { + "bbox": [ + 104, + 105, + 506, + 248 + ], + "type": "text", + "content": "Effectively judging writing quality impacts both understanding and improving LLM writing. Writing quality is however closely tied to content. Its known that LLMs struggle with novel ideas (content planning), making their writing appear trite. Even with detailed original content, they struggle to maintain good writing standards (avoiding clichés, revealing subtext, and introducing purple prose). To understand how content affects writing quality, we analyzed writing from several LLMs with and without detailed content. 
We used 50 writing instructions from Style Mimic data, creating two variants: a 30-word prompt with less detail (e.g., \"A family Christmas unfolds through emotional reflections on a father's new family, a daughter's excuse to stay behind, and the complex dynamics of grief and blended identities.\") and a 150-200 word detailed prompt (Table 10 in Appendix). Style Mimic provides an original excerpt from an award-winning author and an MFA student's attempt to mimic that style for each prompt. Each sample includes the detailed content used for Figure 4b." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 254, + 506, + 376 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 254, + 506, + 376 + ], + "spans": [ + { + "bbox": [ + 104, + 254, + 506, + 376 + ], + "type": "text", + "content": "Since WQRM was only trained on samples from LAMP, which consists of AI-generated paragraphs edited by MFA students, we retrained a better-calibrated reward model with a few fully human-written, high-quality texts (See Appendix A.11 for more details). Figure 4a shows writing quality scores from the WQRM model when prompts lack detailed content. Award-winning authors achieve a median score of 8.9, while LLMs score 4.8-6.6 with much higher variance. Despite WQRM being trained only on AI-generated paragraphs edited by MFA students and relatively few human-written samples, it scored 50 author-written texts higher than all LLMs, demonstrating model generalization. GPT4.5, though considered the best writing LLM, showed no quality advantage. The significant gap between award-winning authors and LLMs shows that in the absence of original good-quality content, all LLMs are poor writers." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 380, + 506, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 380, + 506, + 525 + ], + "spans": [ + { + "bbox": [ + 104, + 380, + 506, + 525 + ], + "type": "text", + "content": "Figure 4b shows the writing quality of several LLMs leveraging the new WQRM model when detailed content is provided in the writing prompt. As a matter of fact the content detail is often " + }, + { + "bbox": [ + 104, + 380, + 506, + 525 + ], + "type": "inline_equation", + "content": "0.5\\mathrm{x}" + }, + { + "bbox": [ + 104, + 380, + 506, + 525 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 104, + 380, + 506, + 525 + ], + "type": "inline_equation", + "content": "0.75\\mathrm{x}" + }, + { + "bbox": [ + 104, + 380, + 506, + 525 + ], + "type": "text", + "content": " times the word count of the paragraph to be written/generated. Results with the detailed prompts provide additional insights. Though the variance remains high for all models, the more recent models (GPT-4.5, Claude 3.7-Sonnet, Gemini-2.5-pro) achieve improved writing quality given the more detailed prompts, achieving median scores of around 7.0. This should not be surprising as the amount of details provided in the writing prompt reduces the burden for originality and novelty from the LLM. What is particularly impressive here is paragraphs written by MFA students based on the same detailed content were rated significantly higher than all LLMs with a median of 8.6. The gap between award-winning authors and MFA students is narrow here, although the distribution from MFA students shows higher variance. Our results highlight that even when provided with very detailed original content, LLMs are far behind trained writers." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 529, + 507, + 575 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 529, + 507, + 575 + ], + "spans": [ + { + "bbox": [ + 104, + 529, + 507, + 575 + ], + "type": "text", + "content": "In summary, the analysis reveals that current LLMs are not yet capable of reliably generating high-quality creative writing at the level of an MFA student or award-winning author, especially when not spoon-fed original content. When provided with enough content detail in the prompt, the latest models show promise but still remain unreliable." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 590, + 189, + 601 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 590, + 189, + 601 + ], + "spans": [ + { + "bbox": [ + 104, + 590, + 189, + 601 + ], + "type": "text", + "content": "8 Conclusion" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 600, + 507, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 600, + 507, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 600, + 507, + 733 + ], + "type": "text", + "content": "In this work, we introduced the Writing Quality benchmark (WQ) and Writing Quality Reward Models (WQRM) to address the critical challenge of evaluating and improving the quality of AI-generated text. Our models trained on implicit preference via edits significantly outperform existing approaches, achieving " + }, + { + "bbox": [ + 104, + 600, + 507, + 733 + ], + "type": "inline_equation", + "content": "74\\%" + }, + { + "bbox": [ + 104, + 600, + 507, + 733 + ], + "type": "text", + "content": " accuracy on the WQ benchmark and demonstrating strong generalization across diverse writing contexts, as confirmed by a validation study involving 9 professional writers. 
Future work can explore alternative test-time computation, such as long chains-of-thought (CoTs) that enable strategies like backtracking and correction of idiosyncrasies to improve writing. While our approach improves AI-generated text by reducing idiosyncrasies, it is nowhere near expert-quality writing. However, we hope that our contributions can serve as a catalyst for further research in writing quality assessment and the development of AI writing systems that are more aligned with human preferences." + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 106, + 81, + 168, + 93 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 81, + 168, + 93 + ], + "spans": [ + { + "bbox": [ + 106, + 81, + 168, + 93 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 107, + 101, + 505, + 731 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 107, + 101, + 505, + 135 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 101, + 505, + 135 + ], + "spans": [ + { + "bbox": [ + 107, + 101, + 505, + 135 + ], + "type": "text", + "content": "Barrett R Anderson, Josh Hemant Shah, and Max Kreminski. 
Homogenization effects of large language models on human creative ideation. In Proceedings of the 16th Conference on Creativity & Cognition, pp. 413-425, 2024." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 144, + 504, + 166 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 144, + 504, + 166 + ], + "spans": [ + { + "bbox": [ + 107, + 144, + 504, + 166 + ], + "type": "text", + "content": "Anonymous. Literary voice reproduction study mfa writers vs. llms in authorial style. In Under Submission, 2025." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 107, + 175, + 505, + 209 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 175, + 505, + 209 + ], + "spans": [ + { + "bbox": [ + 107, + 175, + 505, + 209 + ], + "type": "text", + "content": "Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 107, + 217, + 505, + 239 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 217, + 505, + 239 + ], + "spans": [ + { + "bbox": [ + 107, + 217, + 505, + 239 + ], + "type": "text", + "content": "Deborah Brandt. The rise of writing: Redefining mass literacy. Cambridge University Press, 2014." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 107, + 249, + 505, + 293 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 249, + 505, + 293 + ], + "spans": [ + { + "bbox": [ + 107, + 249, + 505, + 293 + ], + "type": "text", + "content": "Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, et al. Open problems and fundamental limitations of reinforcement learning from human feedback. 
arXiv preprint arXiv:2307.15217, 2023." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 107, + 302, + 505, + 357 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 302, + 505, + 357 + ], + "spans": [ + { + "bbox": [ + 107, + 302, + 505, + 357 + ], + "type": "text", + "content": "Tuhin Chakrabarty, Philippe Laban, Divyansh Agarwal, Smaranda Muresan, and Chien-Sheng Wu. Art or artifice? large language models and the false promise of creativity. In Proceedings of the CHI Conference on Human Factors in Computing Systems, CHI '24, New York, NY, USA, 2024a. Association for Computing Machinery. ISBN 9798400703300. doi: 10.1145/3613904.3642731. URL https://doi.org/10.1145/3613904.3642731." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 366, + 505, + 400 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 366, + 505, + 400 + ], + "spans": [ + { + "bbox": [ + 107, + 366, + 505, + 400 + ], + "type": "text", + "content": "Tuhin Chakrabarty, Philippe Laban, and Chien-Sheng Wu. Can ai writing be salvaged? mitigating idiosyncrasies and improving human-ai alignment in the writing process through edits. arXiv preprint arXiv:2409.14509, 2024b." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 107, + 409, + 505, + 464 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 409, + 505, + 464 + ], + "spans": [ + { + "bbox": [ + 107, + 409, + 505, + 464 + ], + "type": "text", + "content": "Tuhin Chakrabarty, Vishakh Padmakumar, Faeze Brahman, and Smaranda Muresan. Creativity support in the age of large language models: An empirical study involving professional writers. In Proceedings of the 16th Conference on Creativity & Cognition, C & C '24, pp. 132-155, New York, NY, USA, 2024c. Association for Computing Machinery. ISBN 9798400704857. doi: 10.1145/3635636.3656201. URL https://doi.org/10.1145/3635636.3656201." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 107, + 473, + 505, + 517 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 473, + 505, + 517 + ], + "spans": [ + { + "bbox": [ + 107, + 473, + 505, + 517 + ], + "type": "text", + "content": "Yinlam Chow, Guy Tennenholtz, Izzeddin Gur, Vincent Zhuang, Bo Dai, Sridhar Thiagarajan, Craig Boutilier, Rishabh Agarwal, Aviral Kumar, and Aleksandra Faust. Inference-aware fine-tuning for best-of-n sampling in large language models. arXiv preprint arXiv:2412.15287, 2024." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 107, + 527, + 505, + 561 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 527, + 505, + 561 + ], + "spans": [ + { + "bbox": [ + 107, + 527, + 505, + 561 + ], + "type": "text", + "content": "Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 107, + 570, + 505, + 604 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 570, + 505, + 604 + ], + "spans": [ + { + "bbox": [ + 107, + 570, + 505, + 604 + ], + "type": "text", + "content": "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems, 2021. URL https://arxiv.org/abs/2110.14168, 9, 2021." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 107, + 612, + 505, + 645 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 612, + 505, + 645 + ], + "spans": [ + { + "bbox": [ + 107, + 612, + 505, + 645 + ], + "type": "text", + "content": "Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. 
Advances in neural information processing systems, 36:10088-10115, 2023." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 107, + 654, + 505, + 731 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 654, + 505, + 731 + ], + "spans": [ + { + "bbox": [ + 107, + 654, + 505, + 731 + ], + "type": "text", + "content": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423/." + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 506, + 731 + ], + "type": "list", + "angle": 0, + "index": 19, + "blocks": [ + { + "bbox": [ + 107, + 81, + 506, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 81, + 506, + 116 + ], + "spans": [ + { + "bbox": [ + 107, + 81, + 506, + 116 + ], + "type": "text", + 
"content": "Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 122, + 506, + 146 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 122, + 506, + 146 + ], + "spans": [ + { + "bbox": [ + 105, + 122, + 506, + 146 + ], + "type": "text", + "content": "Bradley Emi and Max Spero. Technical report on the pangram ai-generated text classifier. arXiv preprint arXiv:2402.14873, 2024." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 152, + 504, + 177 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 152, + 504, + 177 + ], + "spans": [ + { + "bbox": [ + 107, + 152, + 504, + 177 + ], + "type": "text", + "content": "Yang Gao, Dana Alon, and Donald Metzler. Impact of preference noise on the alignment performance of generative language models. arXiv preprint arXiv:2404.09824, 2024." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 106, + 182, + 504, + 217 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 182, + 504, + 217 + ], + "spans": [ + { + "bbox": [ + 106, + 182, + 504, + 217 + ], + "type": "text", + "content": "Katy Ilonka Gero, Vivian Liu, and Lydia Chilton. Sparks: Inspiration for science writing using language models. In Proceedings of the 2022 ACM Designing Interactive Systems Conference, pp. 1002-1019, 2022." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 107, + 223, + 504, + 246 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 223, + 504, + 246 + ], + "spans": [ + { + "bbox": [ + 107, + 223, + 504, + 246 + ], + "type": "text", + "content": "Sian Gooding, Lucia Lopez-Rivilla, and Edward Grefenstette. Writing as a testbed for open ended agents, 2025. URL https://arxiv.org/abs/2503.19711." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 107, + 253, + 505, + 287 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 253, + 505, + 287 + ], + "spans": [ + { + "bbox": [ + 107, + 253, + 505, + 287 + ], + "type": "text", + "content": "Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 107, + 293, + 506, + 327 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 293, + 506, + 327 + ], + "spans": [ + { + "bbox": [ + 107, + 293, + 506, + 327 + ], + "type": "text", + "content": "Kunal Handa, Alex Tamkin, Miles McCain, Saffron Huang, Esin Durmus, Sarah Heck, Jared Mueller, Jerry Hong, Stuart Ritchie, Tim Belonax, et al. Which economic tasks are performed with ai? evidence from millions of claude conversations." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 334, + 506, + 367 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 334, + 506, + 367 + ], + "spans": [ + { + "bbox": [ + 107, + 334, + 506, + 367 + ], + "type": "text", + "content": "Peter Hase and Mohit Bansal. When can models learn from explanations? a formal framework for understanding the roles of explanation data. arXiv preprint arXiv:2102.02201, 2021." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 374, + 504, + 399 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 374, + 504, + 399 + ], + "spans": [ + { + "bbox": [ + 105, + 374, + 504, + 399 + ], + "type": "text", + "content": "John R Hayes, Linda Flower, Karen A Schriver, James Stratman, Linda Carey, et al. Cognitive processes in revision. Advances in applied psycholinguistics, 2:176-240, 1987." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 107, + 405, + 505, + 428 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 405, + 505, + 428 + ], + "spans": [ + { + "bbox": [ + 107, + 405, + 505, + 428 + ], + "type": "text", + "content": "John Herrman. Is that ai? or does it just suck? New York Magazine, 2024a. URL https://nymag.com/intelligencer/article/is-that-ai-or-does-it-just-suck.html." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 107, + 435, + 505, + 468 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 435, + 505, + 468 + ], + "spans": [ + { + "bbox": [ + 107, + 435, + 505, + 468 + ], + "type": "text", + "content": "John Herrman. The internet's ai slop problem is only going to get worse. New York Magazine - Intelligencer, 2024b. URL https://nymag.com/intelligencer/article/ai-generated-content-online-slop-spam.html. Accessed: 2025-03-06." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 107, + 475, + 504, + 509 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 475, + 504, + 509 + ], + "spans": [ + { + "bbox": [ + 107, + 475, + 504, + 509 + ], + "type": "text", + "content": "Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron Courville, Alessandro Sordoni, and Rishabh Agarwal. V-star: Training verifiers for self-taught reasoners. arXiv preprint arXiv:2402.06457, 2024." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 107, + 516, + 504, + 549 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 516, + 504, + 549 + ], + "spans": [ + { + "bbox": [ + 107, + 516, + 504, + 549 + ], + "type": "text", + "content": "Daphne Ippolito, Ann Yuan, Andy Coenen, and Sehmon Burnam. Creative writing with an ai-powered writing assistant: Perspectives from professional writers. arXiv preprint arXiv:2211.05030, 2022." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 107, + 557, + 506, + 590 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 557, + 506, + 590 + ], + "spans": [ + { + "bbox": [ + 107, + 557, + 506, + 590 + ], + "type": "text", + "content": "Yixin Ji, Juntao Li, Hai Ye, Kaixin Wu, Jia Xu, Linjian Mo, and Min Zhang. Test-time computing: from system-1 thinking to system-2 thinking. arXiv preprint arXiv:2501.02497, 2025." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 107, + 597, + 506, + 620 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 597, + 506, + 620 + ], + "spans": [ + { + "bbox": [ + 107, + 597, + 506, + 620 + ], + "type": "text", + "content": "Kate Knibbs. Confessions of an ai clickbait kingpin. Wired, 2024a. URL https://www.wired.com/story/confessions-of-an-ai-clickbait-kingpin/. Accessed: 2025-03-07." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 107, + 628, + 506, + 660 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 628, + 506, + 660 + ], + "spans": [ + { + "bbox": [ + 107, + 628, + 506, + 660 + ], + "type": "text", + "content": "Kate Knibbs. Scammy ai-generated books are flooding amazon. Wired, 2024b. URL https://www.wired.com/story/scammy-ai-generated-books-flooding-amazon/. Accessed: 2025-03-07." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 107, + 668, + 504, + 691 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 668, + 504, + 691 + ], + "spans": [ + { + "bbox": [ + 107, + 668, + 504, + 691 + ], + "type": "text", + "content": "Kate Knibbs. Ai slop is flooding medium. Wired, 2024c. URL https://www.wired.com/story/ai-generated-medium-posts-content-moderation/. Accessed: 2025-03-06." 
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 107, + 698, + 506, + 731 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 698, + 506, + 731 + ], + "spans": [ + { + "bbox": [ + 107, + 698, + 506, + 731 + ], + "type": "text", + "content": "Kate Knibbs. Some of substack's biggest newsletters rely on ai writing tools. Wired, 2024d. URL https://www.wired.com/story/substacks-writers-use-ai-chatgpt/. Accessed: 2025-03-07." + } + ] + } + ], + "index": 18 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 507, + 731 + ], + "type": "list", + "angle": 0, + "index": 16, + "blocks": [ + { + "bbox": [ + 107, + 81, + 507, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 81, + 507, + 116 + ], + "spans": [ + { + "bbox": [ + 107, + 81, + 507, + 116 + ], + "type": "text", + "content": "Dmitry Kobak, Rita González-Márquez, Emőke-Ágnes Horvát, and Jan Lause. Delving into chatgpt usage in academic writing through excess vocabulary. arXiv preprint arXiv:2406.07016, 2024." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 122, + 506, + 158 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 122, + 506, + 158 + ], + "spans": [ + { + "bbox": [ + 105, + 122, + 506, + 158 + ], + "type": "text", + "content": "Philippe Laban, Jesse Vig, Marti A Hearst, Caiming Xiong, and Chien-Sheng Wu. Beyond the chat: Executable and verifiable text-editing with llms. arXiv preprint arXiv:2309.15337, 2023." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 165, + 504, + 190 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 165, + 504, + 190 + ], + "spans": [ + { + "bbox": [ + 105, + 165, + 504, + 190 + ], + "type": "text", + "content": "Nathan Lambert and Roberto Calandra. The alignment ceiling: Objective mismatch in reinforcement learning from human feedback. arXiv preprint arXiv:2311.00168, 2023." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 196, + 506, + 232 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 196, + 506, + 232 + ], + "spans": [ + { + "bbox": [ + 105, + 196, + 506, + 232 + ], + "type": "text", + "content": "Nathan Lambert, Valentina Pyatkin, Jacob Morrison, LJ Miranda, Bill Yuchen Lin, Khyathi Chandu, Nouha Dziri, Sachin Kumar, Tom Zick, Yejin Choi, et al. Rewardbench: Evaluating reward models for language modeling. arXiv preprint arXiv:2403.13787, 2024." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 237, + 506, + 259 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 237, + 506, + 259 + ], + "spans": [ + { + "bbox": [ + 105, + 237, + 506, + 259 + ], + "type": "text", + "content": "Timothy Laquintano and Annette Vee. Ai and the everyday writer. PMLA, 139(3):527-532, 2024." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 268, + 506, + 313 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 268, + 506, + 313 + ], + "spans": [ + { + "bbox": [ + 105, + 268, + 506, + 313 + ], + "type": "text", + "content": "Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, et al. Rlaif vs. rlhf: Scaling reinforcement learning from human feedback with ai feedback. arXiv preprint arXiv:2309.00267, 2023." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 320, + 504, + 355 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 320, + 504, + 355 + ], + "spans": [ + { + "bbox": [ + 105, + 320, + 504, + 355 + ], + "type": "text", + "content": "Jinsook Lee, A. J. Alvero, Thorsten Joachims, and René F. Kizilcec. Poor alignment and steerability of large language models: Evidence from college admission essays. 2025. URL https://api.semanticscholar.org/CorpusID:277321621." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 361, + 504, + 408 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 361, + 504, + 408 + ], + "spans": [ + { + "bbox": [ + 105, + 361, + 504, + 408 + ], + "type": "text", + "content": "Mina Lee, Katy Ilonka Gero, John Joon Young Chung, Simon Buckingham Shum, Vipul Raheja, Hua Shen, Subhashini Venugopalan, Thiemo Wambsganss, David Zhou, Emad A Alghamdi, et al. A design space for intelligent and interactive writing assistants. In Proceedings of the CHI Conference on Human Factors in Computing Systems, pp. 1-35, 2024." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 415, + 506, + 449 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 415, + 506, + 449 + ], + "spans": [ + { + "bbox": [ + 105, + 415, + 506, + 449 + ], + "type": "text", + "content": "Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Omer Levy, Luke Zettlemoyer, Jason Weston, and Mike Lewis. Self-alignment with instruction backtranslation. arXiv preprint arXiv:2308.06259, 2023." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 456, + 504, + 513 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 456, + 504, + 513 + ], + "spans": [ + { + "bbox": [ + 105, + 456, + 504, + 513 + ], + "type": "text", + "content": "Zhuoyan Li, Chen Liang, Jing Peng, and Ming Yin. The value, benefits, and concerns of generative ai-powered assistance in writing. In Proceedings of the CHI Conference on Human Factors in Computing Systems, CHI '24, New York, NY, USA, 2024. Association for Computing Machinery. ISBN 9798400703300. doi: 10.1145/3613904.3642625. URL https://doi.org/10.1145/3613904.3642625." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 520, + 506, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 520, + 506, + 555 + ], + "spans": [ + { + "bbox": [ + 105, + 520, + 506, + 555 + ], + "type": "text", + "content": "Weixin Liang, Yaohui Zhang, Zhengxuan Wu, Haley Lepp, Wenlong Ji, Xuandong Zhao, Hancheng Cao, Sheng Liu, Siyu He, Zhi Huang, et al. Mapping the increasing use of llms in scientific papers. arXiv preprint arXiv:2404.01268, 2024." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 562, + 506, + 597 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 562, + 506, + 597 + ], + "spans": [ + { + "bbox": [ + 105, + 562, + 506, + 597 + ], + "type": "text", + "content": "Weixin Liang, Yaohui Zhang, Mihai Codreanu, Jiayu Wang, Hancheng Cao, and James Zou. 
The widespread adoption of large language model-assisted writing across society. arXiv preprint arXiv:2502.09747, 2025." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 603, + 506, + 639 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 603, + 506, + 639 + ], + "spans": [ + { + "bbox": [ + 105, + 603, + 506, + 639 + ], + "type": "text", + "content": "Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. In The Twelfth International Conference on Learning Representations, 2023." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 644, + 506, + 680 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 644, + 506, + 680 + ], + "spans": [ + { + "bbox": [ + 105, + 644, + 506, + 680 + ], + "type": "text", + "content": "Chris Yuhao Liu, Liang Zeng, Jiacai Liu, Rui Yan, Jujie He, Chaojie Wang, Shuicheng Yan, Yang Liu, and Yahui Zhou. Skywork-reward: Bag of tricks for reward modeling in llms. arXiv preprint arXiv:2410.18451, 2024." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 105, + 687, + 506, + 731 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 687, + 506, + 731 + ], + "spans": [ + { + "bbox": [ + 105, + 687, + 506, + 731 + ], + "type": "text", + "content": "Zhexiong Liu, Diane Litman, Elaine Wang, Tianwen Li, Mason Gobat, Lindsay Clare Matsumura, and Richard Correnti. erevise+ rf: A writing evaluation system for assessing student essay revisions and providing formative feedback. arXiv preprint arXiv:2501.00715, 2025." 
+ } + ] + } + ], + "index": 15 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 507, + 733 + ], + "type": "list", + "angle": 0, + "index": 17, + "blocks": [ + { + "bbox": [ + 105, + 81, + 507, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 81, + 507, + 117 + ], + "spans": [ + { + "bbox": [ + 105, + 81, + 507, + 117 + ], + "type": "text", + "content": "Josh Magnus Ludan, Yixuan Meng, Tai Nguyen, Saurabh Shah, Qing Lyu, Marianna Apidianaki, and Chris Callison-Burch. Explanation-based finetuning makes models more robust to spurious cues. arXiv preprint arXiv:2305.04990, 2023." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 122, + 505, + 158 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 122, + 505, + 158 + ], + "spans": [ + { + "bbox": [ + 105, + 122, + 505, + 158 + ], + "type": "text", + "content": "Guillermo Marco, Julio Gonzalo, Ramón del Castillo, and María Teresa Mateo Girona. Pron vs prompt: Can large language models already challenge a world-class fiction author at creative text writing? arXiv preprint arXiv:2407.01119, 2024." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 163, + 507, + 220 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 163, + 507, + 220 + ], + "spans": [ + { + "bbox": [ + 105, + 163, + 507, + 220 + ], + "type": "text", + "content": "Piotr Mirowski, Kory W. Mathewson, Jaylen Pittman, and Richard Evans. Co-writing screenplays and theatre scripts with language models: Evaluation by industry professionals. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI '23, New York, NY, USA, 2023. Association for Computing Machinery. ISBN 9781450394215. doi: 10.1145/3544548.3581225. URL https://doi.org/10.1145/3544548.3581225." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 226, + 507, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 226, + 507, + 274 + ], + "spans": [ + { + "bbox": [ + 105, + 226, + 507, + 274 + ], + "type": "text", + "content": "Piotr Mirowski, Juliette Love, Kory Mathewson, and Shakir Mohamed. A robot walks into a bar: Can language models serve as creativity support tools for comedy? an evaluation of llms' humour alignment with comedians. In The 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 1622-1636, 2024." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 279, + 505, + 302 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 279, + 505, + 302 + ], + "spans": [ + { + "bbox": [ + 105, + 279, + 505, + 302 + ], + "type": "text", + "content": "OpenAI. Introducing openai o1 preview. https://openai.com/index/introducing-openai-o1-preview/, 2024. Accessed: 2025-03-20." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 308, + 507, + 354 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 308, + 507, + 354 + ], + "spans": [ + { + "bbox": [ + 105, + 308, + 507, + 354 + ], + "type": "text", + "content": "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744, 2022." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 361, + 505, + 384 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 361, + 505, + 384 + ], + "spans": [ + { + "bbox": [ + 105, + 361, + 505, + 384 + ], + "type": "text", + "content": "Vishakh Padmakumar and He He. Does writing with language models reduce content diversity? arXiv preprint arXiv:2309.05196, 2023." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 391, + 505, + 415 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 391, + 505, + 415 + ], + "spans": [ + { + "bbox": [ + 105, + 391, + 505, + 415 + ], + "type": "text", + "content": "Jane Pan, He He, Samuel R Bowman, and Shi Feng. Spontaneous reward hacking in iterative self-refinement. arXiv preprint arXiv:2407.04549, 2024." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 421, + 507, + 455 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 421, + 507, + 455 + ], + "spans": [ + { + "bbox": [ + 105, + 421, + 507, + 455 + ], + "type": "text", + "content": "Arjun Panickssery, Samuel Bowman, and Shi Feng. Llm evaluators recognize and favor their own generations. Advances in Neural Information Processing Systems, 37:68772-68802, 2024." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 462, + 505, + 496 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 462, + 505, + 496 + ], + "spans": [ + { + "bbox": [ + 105, + 462, + 505, + 496 + ], + "type": "text", + "content": "Jenna Russell, Marzena Karpinska, and Mohit Iyyer. People who frequently use chatgpt for writing tasks are accurate and robust detectors of ai-generated text. arXiv preprint arXiv:2501.15654, 2025." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 502, + 507, + 528 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 502, + 507, + 528 + ], + "spans": [ + { + "bbox": [ + 105, + 502, + 507, + 528 + ], + "type": "text", + "content": "Chantal Shaib, Yanai Elazar, Junyi Jessy Li, and Byron C Wallace. Detection and measurement of syntactic templates in generated text. arXiv preprint arXiv:2407.00211, 2024." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 533, + 507, + 567 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 533, + 507, + 567 + ], + "spans": [ + { + "bbox": [ + 105, + 533, + 507, + 567 + ], + "type": "text", + "content": "Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 574, + 507, + 609 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 574, + 507, + 609 + ], + "spans": [ + { + "bbox": [ + 105, + 574, + 507, + 609 + ], + "type": "text", + "content": "Tianchun Wang, Yanzhou Chen, Zichuan Liu, Zhanwen Chen, Haifeng Chen, Xiang Zhang, and Wei Cheng. Humanizing the machine: Proxy attacks to mislead llm detectors. arXiv preprint arXiv:2410.19230, 2024." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 615, + 507, + 662 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 615, + 507, + 662 + ], + "spans": [ + { + "bbox": [ + 105, + 615, + 507, + 662 + ], + "type": "text", + "content": "Benjamin Warner, Antoine Chaffin, Benjamin Clavié, Orion Weller, Oskar Hallström, Said Taghadouini, Alexis Gallagher, Raja Biswas, Faisal Ladhak, Tom Aarsen, et al. Smarter, better, faster, longer: A modern bidirectional encoder for fast, memory efficient, and long context finetuning and inference. arXiv preprint arXiv:2412.13663, 2024." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 105, + 668, + 505, + 691 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 668, + 505, + 691 + ], + "spans": [ + { + "bbox": [ + 105, + 668, + 505, + 691 + ], + "type": "text", + "content": "Sarah Wiegrefe, Ana Marasovic, and Noah A Smith. Measuring association between labels and free-text rationales. arXiv preprint arXiv:2010.12762, 2020." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 105, + 697, + 505, + 733 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 697, + 505, + 733 + ], + "spans": [ + { + "bbox": [ + 105, + 697, + 505, + 733 + ], + "type": "text", + "content": "Tongshuang Wu, Michael Terry, and Carrie Jun Cai. Ai chains: Transparent and controllable human-ai interaction by chaining large language model prompts. In Proceedings of the 2022 CHI conference on human factors in computing systems, pp. 1-22, 2022." 
+ } + ] + } + ], + "index": 16 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 505, + 239 + ], + "type": "list", + "angle": 0, + "index": 5, + "blocks": [ + { + "bbox": [ + 107, + 81, + 505, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 81, + 505, + 117 + ], + "spans": [ + { + "bbox": [ + 107, + 81, + 505, + 117 + ], + "type": "text", + "content": "Yangzhen Wu, Zhiqing Sun, Shanda Li, Sean Welleck, and Yiming Yang. Inference scaling laws: An empirical analysis of compute-optimal inference for problem-solving with language models. arXiv preprint arXiv:2408.00724, 2024." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 122, + 504, + 157 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 122, + 504, + 157 + ], + "spans": [ + { + "bbox": [ + 105, + 122, + 504, + 157 + ], + "type": "text", + "content": "Ann Yuan, Andy Coenen, Emily Reif, and Daphne Ippolito. Wordcraft: story writing with large language models. In Proceedings of the 27th International Conference on Intelligent User Interfaces, pp. 841-852, 2022." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 163, + 504, + 198 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 163, + 504, + 198 + ], + "spans": [ + { + "bbox": [ + 105, + 163, + 504, + 198 + ], + "type": "text", + "content": "Yanli Zhao, Andrew Gu, Rohan Varma, Liang Luo, Chien-Chin Huang, Min Xu, Less Wright, Hamid Shojanazeri, Myle Ott, Sam Shleifer, et al. Pytorch fsdp: experiences on scaling fully sharded data parallel. arXiv preprint arXiv:2304.11277, 2023." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 204, + 504, + 239 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 204, + 504, + 239 + ], + "spans": [ + { + "bbox": [ + 105, + 204, + 504, + 239 + ], + "type": "text", + "content": "Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Tianle Li, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zhuohan Li, Zi Lin, Eric P Xing, et al. Lmsys-chat-lm: A large-scale real-world llm conversation dataset. arXiv preprint arXiv:2309.11998, 2023." 
+ } + ] + } + ], + "index": 4 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 80, + 185, + 95 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 80, + 185, + 95 + ], + "spans": [ + { + "bbox": [ + 105, + 80, + 185, + 95 + ], + "type": "text", + "content": "A Appendix" + } + ] + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 113, + 119, + 496, + 328 + ], + "blocks": [ + { + "bbox": [ + 113, + 119, + 496, + 328 + ], + "lines": [ + { + "bbox": [ + 113, + 119, + 496, + 328 + ], + "spans": [ + { + "bbox": [ + 113, + 119, + 496, + 328 + ], + "type": "table", + "html": "
Which is better quality writing?
A: One evening – it was our last day – in Tours, we had dinner in a brightly-lit restaurant where the walls were lined with mirrors, frequented by a sophisticated clientele. [...] made with “pig slop,” white and tasteful. [...] expressing resentment at having been treated with contempt because we were not chic customers who ate “à la carte.”Paragraph B is of higher quality\nConciseness and Clarity: Paragraph B is more streamlined[...] tighter and more focused.\nEmotional Depth: Paragraph B captures the speaker's evolving realization [...] making the writing more impactful.\nImagery and Language: The use of phrases [...] adds nuance and emotional weight. In contrast, Paragraph A's descriptions is literal and lack [...]
B: We sat at the group table, my father and I, our reflections dim in the restaurant's mirror. The waitress barely glanced our way. I adjusted my glasses, [...] I realize now that what I witnessed that day was not just a meal but the precise moment when social class became visible to me, manifested in a pot of yogurt, in the waitress's averted eyes, [...]
Overall, Paragraph B delivers a more poignant [...]
", + "image_path": "53481bb60dc76404125107849db74655f4babc96c76f0e025a231c030dd3d169.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 388, + 242, + 402 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 388, + 242, + 402 + ], + "spans": [ + { + "bbox": [ + 105, + 388, + 242, + 402 + ], + "type": "text", + "content": "A.1 Example LAMP Sample" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 411, + 504, + 434 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 411, + 504, + 434 + ], + "spans": [ + { + "bbox": [ + 104, + 411, + 504, + 434 + ], + "type": "text", + "content": "Table 4 shows an example from LAMP data with writing prompt and response before and after edit." + } + ] + } + ], + "index": 5 + }, + { + "type": "table", + "bbox": [ + 106, + 447, + 513, + 634 + ], + "blocks": [ + { + "bbox": [ + 104, + 335, + 504, + 359 + ], + "lines": [ + { + "bbox": [ + 104, + 335, + 504, + 359 + ], + "spans": [ + { + "bbox": [ + 104, + 335, + 504, + 359 + ], + "type": "text", + "content": "Table 3: LLM preferring its own writing (Paragraph B) over Paragraph A written by Annie Ernaux, 2022 Nobel Prize in Literature." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 106, + 447, + 513, + 634 + ], + "lines": [ + { + "bbox": [ + 106, + 447, + 513, + 634 + ], + "spans": [ + { + "bbox": [ + 106, + 447, + 513, + 634 + ], + "type": "table", + "html": "
PromptWhat happened while you were sitting in your car, waiting for a call, amidst the torrential rain and chaotic scene in La Conchita?
Edited Resp.As I sat in my car, the rain drummed against the roof, a relentless beat that mirrored the anxiety thrumming through my veins. I was waiting for a call, any call, that would give me a reason to move, to act, to escape the suffocating stillness. La Conchita's streets were a mess. Cars hydroplaning on the flooded roads, People scurrying for cover like ants from a disturbed nest. I watched as a woman, her hair plastered to her face, struggled to free her stuck umbrella, Her eyes darting darted towards me with a fleeting plea for help. I looked away, feeling the weight of my inaction. The rain-soaked world outside seemed to be shrinking, the droplets on my windshield coalescing into tiny mirrors that reflected my own uncertainty. My phone, silent and unyielding, lay on the passenger seat, a-constant reminder of my powerlessness. I thought of all the what ifs, the maybe, the possibilities that hung in the balance. The rain intensified, drumming out a rhythm that seemed to match the beat of my heart. In the chaos, I felt a strange sense of calm, as if the storm was washing away my doubts, leaving only the stark reality of the present. And yet, I remained frozen, waiting for that call, that spark, that would set me in motion.
", + "image_path": "e1efa10af565f7bba9f7532bac21e8a2f89791fb4dcb049a6d4a833edbd6aa2f.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 641, + 506, + 665 + ], + "lines": [ + { + "bbox": [ + 104, + 641, + 506, + 665 + ], + "spans": [ + { + "bbox": [ + 104, + 641, + 506, + 665 + ], + "type": "text", + "content": "Table 4: An example from LAMP. Text in bold with strikethrough represents edited span. The normalized Initial and Final Writing Quality scores are 3.3 and 7.0 respectively" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 105, + 696, + 297, + 709 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 696, + 297, + 709 + ], + "spans": [ + { + "bbox": [ + 105, + 696, + 297, + 709 + ], + "type": "text", + "content": "A.2 Generative WRQM Prompt Formats" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 719, + 433, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 719, + 433, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 719, + 433, + 733 + ], + "type": "text", + "content": "Table 5 shows a P and R style training prompt thats used to train WQRMs" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "type": 
"table", + "bbox": [ + 106, + 79, + 519, + 308 + ], + "blocks": [ + { + "bbox": [ + 106, + 79, + 519, + 308 + ], + "lines": [ + { + "bbox": [ + 106, + 79, + 519, + 308 + ], + "spans": [ + { + "bbox": [ + 106, + 79, + 519, + 308 + ], + "type": "table", + "html": "
P{"content": "You are an AI assistant who has knowledge about creative writing.", "role": "system"}
{"content": "You are given two paragraphs of writing for a given instruction.\\nYour task is to determine which paragraph is overall better in terms of writing quality.\\nParagraph 1:\\nAfter her father's passing, Marina and her family [......]\\nParagraph 2:\\n[......] had cherished so deeply.\\n\\nYou must produce your answer in the following JSON format:\\n{"preference":"1-2"}\\nwhere 'preference' should be "1" if you think Paragraph 1 is better, "2" if you think Paragraph 2 is better.\\n", "role": "user"}
{"content": {""preference":"2"},{"role": "assistant"}
R{"content": "You are an AI assistant who has knowledge about creative writing.", "role": "system"}
{"content": "You are given a paragraph of creative writing. You must score it on a scale from 1 to 10, where 1 is the lowest quality and 10 is the highest quality.\\nParagraph:\\nThe rich history, and recent revitalisation of Santa Barbara are most clear [......]\\nYou must produce your answer in the following JSON format:\\n{"score": 1}\\nwhere 'score' is an integer between 1 and 10.", "role": "user"}
{"content": {""score":"7"},{"role": "assistant"}
", + "image_path": "be9fc66a9c8361b04fe2c324c844e669e41cd99490850abdb1afe6b0c50ace73.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 349, + 373, + 361 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 349, + 373, + 361 + ], + "spans": [ + { + "bbox": [ + 105, + 349, + 373, + 361 + ], + "type": "text", + "content": "A.3 Example Rationales For Generative WQRM Training" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 370, + 504, + 404 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 370, + 504, + 404 + ], + "spans": [ + { + "bbox": [ + 104, + 370, + 504, + 404 + ], + "type": "text", + "content": "Table 6 shows the silver natural language rationales for P and R mode. In P its more contrastive in nature while in R its an absolute critique. Both rationales cites excerpts from the original paragraph." + } + ] + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 106, + 416, + 526, + 613 + ], + "blocks": [ + { + "bbox": [ + 127, + 315, + 483, + 328 + ], + "lines": [ + { + "bbox": [ + 127, + 315, + 483, + 328 + ], + "spans": [ + { + "bbox": [ + 127, + 315, + 483, + 328 + ], + "type": "text", + "content": "Table 5: Training prompts in P and R setting for instruction tuning Llama3.1 70B" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 106, + 416, + 526, + 613 + ], + "lines": [ + { + "bbox": [ + 106, + 416, + 526, + 613 + ], + "spans": [ + { + "bbox": [ + 106, + 416, + 526, + 613 + ], + "type": "table", + "html": "
Paragraph 2 is more concise and direct in its storytelling, avoiding the overwrought metaphors and clichéd language found in Paragraph 1. For example, while Paragraph 1 uses flowery phrases like “a delicate bloom unfurling in the wake of a harsh winter” to describe the mother’s recovery, Paragraph 2 simply states “Marina’s mother slowly emerged from her all-consuming grief.” The second paragraph also maintains emotional authenticity without becoming melodramatic. The first paragraph’s phrases like “brick by brick, memory by memory” and “the resilience of the human spirit” feel forced and sentimental, whereas Paragraph 2 conveys the same emotional journey with more restraint and impact. The shorter length of Paragraph 2 also helps maintain the story’s momentum without getting bogged down in unnecessary elaboration.
The paragraph attempts to capture a poignant moment of parent-child separation but relies too heavily on telling rather than showing, with lines like “I felt a pang of guilt only a parent could know” and “I realized I was facing my own reluctance.” The emotional weight of the situation is spelled out rather than revealed through action or specific detail. While the core idea is relatable, the writing lacks distinctive imagery or memorable turns of phrase that would elevate it beyond the obvious. The final metaphor about “running up the charges to fill the space on my lighter bill” feels forced and doesn’t quite land effectively. The narrative maintains a consistent tone but remains in safe, conventional territory without taking any stylistic risks that might make it more compelling.
", + "image_path": "3459785c0ac01b03e2827f0c34f42c6897261eb4c0508044302218241bab4639.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 154, + 621, + 455, + 635 + ], + "lines": [ + { + "bbox": [ + 154, + 621, + 455, + 635 + ], + "spans": [ + { + "bbox": [ + 154, + 621, + 455, + 635 + ], + "type": "text", + "content": "Table 6: Natural language rationale for P and R modes respectively" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 105, + 655, + 173, + 667 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 655, + 173, + 667 + ], + "spans": [ + { + "bbox": [ + 105, + 655, + 173, + 667 + ], + "type": "text", + "content": "A.4 Datasets" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 676, + 506, + 734 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 676, + 506, + 734 + ], + "spans": [ + { + "bbox": [ + 104, + 676, + 506, + 734 + ], + "type": "text", + "content": "Art or Artifice In prior work Chakrabarty et al. (2024a) evaluate writing quality in flash fiction (1,500-2,500 words). The dataset includes 12 writing prompts based on New Yorker stories, each with four responses: the original story plus three LLM-generated versions from GPT-3.5, GPT-4 and Claude v1.3. Three expert annotators ranked all four stories for each prompt, with results aggregated into majority preferences for each story pair. 
From the 12" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 504, + 117 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 504, + 117 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 504, + 117 + ], + "type": "text", + "content": "prompts and all possible response pairs (4C2), the dataset contains 144 preference samples (including both AB and BA orderings). " + }, + { + "bbox": [ + 104, + 82, + 504, + 117 + ], + "type": "inline_equation", + "content": "25\\%" + }, + { + "bbox": [ + 104, + 82, + 504, + 117 + ], + "type": "text", + "content": " are Human-AI comparisons, while " + }, + { + "bbox": [ + 104, + 82, + 504, + 117 + ], + "type": "inline_equation", + "content": "75\\%" + }, + { + "bbox": [ + 104, + 82, + 504, + 117 + ], + "type": "text", + "content": " are AI-AI comparisons." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 121, + 506, + 210 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 121, + 506, + 210 + ], + "spans": [ + { + "bbox": [ + 104, + 121, + 506, + 210 + ], + "type": "text", + "content": "LAMP-test The LAMP corpus (Chakrabarty et al., 2024b) test set focuses on short-form creative writing (200-400 words), including fiction and non-fiction. 
It contains 201 triplets, each with a writing instruction and three responses: (1) AI-written, (2) AI-written+AI-edited, and (3) AI-written+AI-edited. Three professional writers ranked responses based on subjective preference, with results combined into a majority vote. For each instruction, all 3 possible response pairs were evaluated, creating 1206 total samples (by duplicating each pair in AB and BA order). Of these, " + }, + { + "bbox": [ + 104, + 121, + 506, + 210 + ], + "type": "inline_equation", + "content": "33\\%" + }, + { + "bbox": [ + 104, + 121, + 506, + 210 + ], + "type": "text", + "content": " are AI-HumanAI comparisons, and " + }, + { + "bbox": [ + 104, + 121, + 506, + 210 + ], + "type": "inline_equation", + "content": "66\\%" + }, + { + "bbox": [ + 104, + 121, + 506, + 210 + ], + "type": "text", + "content": " are AI-AI comparisons." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 214, + 506, + 316 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 214, + 506, + 316 + ], + "spans": [ + { + "bbox": [ + 104, + 214, + 506, + 316 + ], + "type": "text", + "content": "Style Mimic In recent work, Anonymous (2025) examined if MFA students could mimic award-winning authors' styles. Specifically, 28 MFA students were first given 20 samples written by an award-winning author (such as Haruki Murakami, Yoko Ogawa, Percival Everett, Zadie Smith, Joan Didion), along with their style verbalized in text. They were then provided with a writing instruction to recreate an original paragraph from the author (typically 200-400 words) while imitating the style of the author to the best of their ability. This data includes 150 sample pairs (student imitation vs. original author response), with the original author's work implicitly preferred. All Mirror Human samples are Human-Human comparisons. Table 7 shows an example." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 319, + 507, + 441 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 319, + 507, + 441 + ], + "spans": [ + { + "bbox": [ + 104, + 319, + 507, + 441 + ], + "type": "text", + "content": "Synthetic Mirror Prior work on AI-detection (Emi & Spero, 2024) introduced \"synthetic mirrors,\" a two-step approach to generate writing pairs with implicit preferences. First, an LLM creates a mirror prompt from a human-written sample, extracting a plot summary and structured features (tone, style, length). Second, this prompt produces a synthetic mirror: an AI-generated response resembling the original's content and features. We selected 280 paragraphs from New Yorker flash fiction by award-winning authors (such as Alice Munro, Jhumpa Lahiri, Annie Ernaux etc). After extracting the content and structured features we devised our mirror prompts: Write a n word paragraph in the style of author in v voice given the content below.\\n plot. We generated mirror responses using GPT-4o and Claude-3.5 Sonnet, creating 560 Human-AI pairs with implicit preference for author-written responses. The benchmark consists of 1120 total preference pairs (each duplicated in AB and BA order)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 445, + 507, + 556 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 445, + 507, + 556 + ], + "spans": [ + { + "bbox": [ + 104, + 445, + 507, + 556 + ], + "type": "text", + "content": "LMArena LM Arena Zheng et al. (2023) is an open platform for crowdsourced AI benchmarking. 
A recently released anonymized instructions with responses and preference judgments indicated that creative writing comprises " + }, + { + "bbox": [ + 104, + 445, + 507, + 556 + ], + "type": "inline_equation", + "content": "30\\%" + }, + { + "bbox": [ + 104, + 445, + 507, + 556 + ], + "type": "text", + "content": " of instructions, making it one of the three most common interaction types. From 100,000 creative writing samples, we filtered for (1) English content, (2) non-tied preferences, and (3) responses between 100-2,000 words. An initial inspection of the resulting 7,981 samples revealed that many didn't match strict creative writing definitions. We further filtered noisy samples using GPT-4o, resulting in 1,959 pairs. Due to LM Arena being larger in scale than other datasets in the benchmark, we do not include both order variants (AB/BA) in the dataset but ensure that the reference order is balanced within the dataset." + } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "18" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 170, + 81, + 434, + 152 + ], + "blocks": [ + { + "bbox": [ + 170, + 81, + 434, + 152 + ], + "lines": [ + { + "bbox": [ + 170, + 81, + 434, + 152 + ], + "spans": [ + { + "bbox": [ + 170, + 81, + 434, + 152 + ], + "type": "image", + "image_path": 
"609ed3b0176d87ba81dae21c3aa67d0d1c9ebf78b2a1dffa7bcc99380bacb78b.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 159, + 506, + 183 + ], + "lines": [ + { + "bbox": [ + 104, + 159, + 506, + 183 + ], + "spans": [ + { + "bbox": [ + 104, + 159, + 506, + 183 + ], + "type": "text", + "content": "Figure 6: Three-Step Editing Pipeline to improve the writing quality of a first draft by: identifying idiosyncrasies, generating rewrites, and implementing the edits." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 190, + 350, + 204 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 190, + 350, + 204 + ], + "spans": [ + { + "bbox": [ + 105, + 190, + 350, + 204 + ], + "type": "text", + "content": "A.5 Writing Quality Benchmark Difficulty Analysis" + } + ] + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 112, + 229, + 272, + 332 + ], + "blocks": [ + { + "bbox": [ + 112, + 229, + 272, + 332 + ], + "lines": [ + { + "bbox": [ + 112, + 229, + 272, + 332 + ], + "spans": [ + { + "bbox": [ + 112, + 229, + 272, + 332 + ], + "type": "image", + "image_path": "2f61bc948d1b45dfae9dac29478ebd3b171164fde30bf53bb931146b3c8c35bb.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 164, + 335, + 267, + 344 + ], + "lines": [ + { + "bbox": [ + 164, + 335, + 267, + 344 + ], + "spans": [ + { + "bbox": [ + 164, + 335, + 267, + 344 + ], + "type": "text", + "content": "Worse Writing Sample" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 169, + 346, + 266, + 356 + ], + "lines": [ + { + "bbox": [ + 169, + 346, + 266, + 356 + ], + "spans": [ + { + "bbox": [ + 169, + 346, + 266, + 356 + ], + "type": "text", + "content": "Better Writing Sample" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 105, + 383, + 
279, + 407 + ], + "lines": [ + { + "bbox": [ + 105, + 383, + 279, + 407 + ], + "spans": [ + { + "bbox": [ + 105, + 383, + 279, + 407 + ], + "type": "text", + "content": "Figure 5: Gap Analysis of WQ datasets leveraging the WQRM-PR model." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 284, + 211, + 506, + 312 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 211, + 506, + 312 + ], + "spans": [ + { + "bbox": [ + 284, + 211, + 506, + 312 + ], + "type": "text", + "content": "In order to understand the relative difficulty of the datasets within the WQ benchmark, we performed an analysis leveraging our trained WQRM model. For each sample (consisting of two writing samples with a known human preference), we computed the WQRM score for each sample, and compiled the result for each of the five datasets in WQRM. Figure 5 plots the average of the preferred vs. less-preferred scores on each dataset." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 284, + 316, + 507, + 426 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 316, + 507, + 426 + ], + "spans": [ + { + "bbox": [ + 284, + 316, + 507, + 426 + ], + "type": "text", + "content": "This analysis allows to make several observations. First, the average WQRM gap is directly proportional with model performance on the benchmark. The Synthetic Mirror dataset has the largest average gap according to WQRM-PR (2.4 on average), and we find that many models achieve very close to perfect performance " + }, + { + "bbox": [ + 284, + 316, + 507, + 426 + ], + "type": "inline_equation", + "content": "(98\\% +)" + }, + { + "bbox": [ + 284, + 316, + 507, + 426 + ], + "type": "text", + "content": " on this dataset. 
On the other hand, the gap (according to WQRM-PR) is very small on Style Mimic (0.12) and LMArena (0.02), which aligns with many models perform" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 426, + 506, + 526 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 426, + 506, + 526 + ], + "spans": [ + { + "bbox": [ + 104, + 426, + 506, + 526 + ], + "type": "text", + "content": "ing at or very slightly above chance on these datasets. Second, the absolute scores for the low and high samples are indicative of the origin of the samples. Style Mimic is the only dataset to include Human-Human comparisons (both written by professionals), and the scores of both the worse and better writing samples are high (7.57 and 7.69). LMArena has a similarly small gap, but achieved with lower pair scores (5.99 and 6.02). Third, we find that the WQ dataset includes a mix of high-gap (easy) and low-gap datasets. For low-gap samples, those can be with both having lower scores (two AI-generated samples), or two high-scoring samples (two human-written samples). This confirms the breadth of evaluation included in the WQ benchmark, which is a primary objective of the WQ benchmark." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 529, + 506, + 586 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 529, + 506, + 586 + ], + "spans": [ + { + "bbox": [ + 104, + 529, + 506, + 586 + ], + "type": "text", + "content": "We note that this analysis should be taken with a grain of salt: the WQRM-PR model is not a perfect score predictor, and is only a proxy for analysis, since true scores would require large-scale professional annotation (which is cost-prohibitive). But this analysis matches some expectations, and provides additional evidence of the proper calibration of the WQRM-PR model, and of the breadth of evaluation in the WQ benchmark." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 599, + 284, + 612 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 599, + 284, + 612 + ], + "spans": [ + { + "bbox": [ + 105, + 599, + 284, + 612 + ], + "type": "text", + "content": "A.6 Example Human Mimic Samples" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 620, + 504, + 643 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 620, + 504, + 643 + ], + "spans": [ + { + "bbox": [ + 104, + 620, + 504, + 643 + ], + "type": "text", + "content": "Table 7 shows an Expert-MFA contrast where both paragraphs are centered around the same semantic content and writing style" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 105, + 656, + 271, + 670 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 656, + 271, + 670 + ], + "spans": [ + { + "bbox": [ + 105, + 656, + 271, + 670 + ], + "type": "text", + "content": "A.7 Example COT Editing Prompt" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 104, + 677, + 506, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 677, + 506, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 677, + 506, + 723 + ], + "type": "text", + "content": "The prompt in Table 8 is generated automatically based on a sample from the LAMP dataset. An LLM is then finetuned on this prompt, effectively training it to function as a three-step editing pipeline that identifies problematic spans, rewrites the spans, and executes the edits into a final edited response." 
+ } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "19" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 18 + }, + { + "para_blocks": [ + { + "bbox": [ + 106, + 80, + 525, + 185 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 80, + 525, + 185 + ], + "spans": [ + { + "bbox": [ + 106, + 80, + 525, + 185 + ], + "type": "text", + "content": "I watched my mother. It was March, and outside, the sun glinted off the sidewalks and the icy edges of the snow. It was Saint Patrick's Day and the nurses brought my mother a square block of green Jell-O that sat quivering on the table beside her. It was the last full day of her life, and my mother did not sleep, she did not wake. She held her eyes still and open. They were the bluest thing in the room, perhaps in all of Duluth. Bluer than the lake. They were the color of the sky on the best day of your life. My mother died fast but not all of a sudden. A slow-burning fire when flames disappear to smoke and then smoke to air. She never once closed her eyes. First they were bitter and then they were bewildered and then they changed again to something else, to a state that I have had, finally, to see as heroic. Blue, blue eyes. Daggers of blue wanting and wanting. To stay, to stay." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 106, + 186, + 523, + 315 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 186, + 523, + 315 + ], + "spans": [ + { + "bbox": [ + 106, + 186, + 523, + 315 + ], + "type": "text", + "content": "My mother died on St. Patrick's Day. There was snow outside, and sunlight glanced off the ice and back into her hospital room. A cup of green Jell-O lingered untouched on its tray. My mother was unresponsive, but not quite asleep. Her eyes were open and vivid. There was an alertness in them still, but what she was looking at, I could not say. The bright blue of her irises was like a frozen lake, contrasting the dull wash of beige and gray shrouding the room. The sky lived inside my mother. It lived inside her then, too, even as we bided time together at the very end, knowing there was only one thing left for her to do. Her fading life was a fire flickering down to smoke—it would only last as long as there was something to burn. There was bitterness too in her eyes, then bewilderment, then a stoic, quiet heroism. A commitment to her own dignity. A promise to endure this final test, no matter how unfairly it had been imposed on her. Her eyes were so blue, my mother's eyes, a fierce blue, a frozen lake, a sheen of ice that refused to melt, even as the sun broke it apart." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 322, + 506, + 346 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 322, + 506, + 346 + ], + "spans": [ + { + "bbox": [ + 104, + 322, + 506, + 346 + ], + "type": "text", + "content": "Table 7: Imitation of Original Paragraph (Top Row) from Cheryl Strayed written by an MFA student" + } + ] + } + ], + "index": 3, + "type": "text" + }, + { + "bbox": [ + 105, + 366, + 304, + 379 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 366, + 304, + 379 + ], + "spans": [ + { + "bbox": [ + 105, + 366, + 304, + 379 + ], + "type": "text", + "content": "A.8 Expert Annotation Result Breakdown" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 387, + 506, + 411 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 387, + 506, + 411 + ], + "spans": [ + { + "bbox": [ + 104, + 387, + 506, + 411 + ], + "type": "text", + "content": "In Table 7, we present the results of the annotations from experts for each model (GPT-4o, Llama 3.1 70b) and writing domain (fiction, nonfiction, marketing)." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 415, + 506, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 415, + 506, + 525 + ], + "spans": [ + { + "bbox": [ + 104, + 415, + 506, + 525 + ], + "type": "text", + "content": "At a high level, the responses selected by the WQRM model (Best Edit) achieve the best average rank in all six conditions. However, the selection aligns more with expert preference (in other words, the preference is more pronounced) for the fiction domain (rather than nonfiction) and for GPT-4o responses (rather than Llama 3.1 70b). We posit that this is due to the distribution of training data for the WQRM model, which included a majority of fiction samples and did not include Llama-generated responses. 
However, the fact that preference is still observed on the other domains (including marketing, which differs widely from fiction writing) is encouraging. Improving the generalization of the WQRM further can be accomplished by collecting annotations in additional writing domains, which can be used to train an improved WQRM model." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 539, + 189, + 552 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 539, + 189, + 552 + ], + "spans": [ + { + "bbox": [ + 105, + 539, + 189, + 552 + ], + "type": "text", + "content": "A.9 Comparison" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 559, + 505, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 559, + 505, + 594 + ], + "spans": [ + { + "bbox": [ + 104, + 559, + 505, + 594 + ], + "type": "text", + "content": "Table 9 shows 3 different versions of the same paragraph: the First Draft along with two edited versions (Random Edit and Best Edit), with their respective rewards from WQRM. Experts rank this triplet as Best Edit > Random Edit > First Draft." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 607, + 265, + 619 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 607, + 265, + 619 + ], + "spans": [ + { + "bbox": [ + 105, + 607, + 265, + 619 + ], + "type": "text", + "content": "A.10 Expert Annotation Interface" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 628, + 504, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 628, + 504, + 651 + ], + "spans": [ + { + "bbox": [ + 104, + 628, + 504, + 651 + ], + "type": "text", + "content": "Figure 8 shows the annotation interface that is provided to experts. They read 3 responses and rank them based on overall quality."
+ } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "20" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 19 + }, + { + "para_blocks": [ + { + "bbox": [ + 114, + 132, + 496, + 178 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 132, + 496, + 178 + ], + "spans": [ + { + "bbox": [ + 114, + 132, + 496, + 178 + ], + "type": "text", + "content": "You are given a paragraph of creative writing. Your task is to improve the quality of the writing. You must identify specific spans that can be improved, then propose rewriting for each identified span, and finally return the entire paragraph with the proposed changes implemented." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 115, + 198, + 287, + 212 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 198, + 287, + 212 + ], + "spans": [ + { + "bbox": [ + 115, + 198, + 287, + 212 + ], + "type": "text", + "content": "Here is the paragraph you are editing:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 114, + 214, + 498, + 352 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 214, + 498, + 352 + ], + "spans": [ + { + "bbox": [ + 114, + 214, + 498, + 352 + ], + "type": "text", + "content": "The room was dimly lit, with the soft hum of machinery filling the silence. 
I sat beside Lila, squeezing her hand, as the technician swirled the wand over her belly. The screen flickered to life, a grainy black and white, like an ancient TV trying to find it's signal. Slowly, an image began to form; the unmistakable curve of a tiny head, the flutter of something I later learned was a heartbeat. I leaned forward, almost holding my breath, mesmerized by the play of light and shadows that composed this new life. Lila's grip tightened, and I glanced at her, finding her eyes glossy, a smile tugging at the corners of her mouth. I turned back to the screen, emotions swirling—awe, fear, hope. In that moment, the abstract idea of our baby transformed into something real, tangible. The room's dimness seemed to fold around us, making space for this silent, wordless connection. There, in the ghostly shapes on the monitor, I saw not just a child, but a future—a future that felt both incredibly close and impossibly distant." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 115, + 371, + 284, + 385 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 371, + 284, + 385 + ], + "spans": [ + { + "bbox": [ + 115, + 371, + 284, + 385 + ], + "type": "text", + "content": "Part 1: Identifying Problematic Spans" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 114, + 403, + 496, + 429 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 403, + 496, + 429 + ], + "spans": [ + { + "bbox": [ + 114, + 403, + 496, + 429 + ], + "type": "text", + "content": "Span 1: 'The room was dimly lit, with the soft hum of machinery filling the silence' (Category: 'Awkward Word Choice and Phrasing')" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 115, + 430, + 495, + 456 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 430, + 495, + 456 + ], + "spans": [ + { + "bbox": [ + 115, + 430, + 495, + 456 + ], + "type": "text", + "content": "Span 2: 'I sat beside Lila, squeezing her hand, as the
technician swirled the wand over her belly' (Category: 'Lack of Specificity and Detail')" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 115, + 475, + 342, + 488 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 475, + 342, + 488 + ], + "spans": [ + { + "bbox": [ + 115, + 475, + 342, + 488 + ], + "type": "text", + "content": "Part 2: Proposing Rewriting for Problematic Spans" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 114, + 508, + 494, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 508, + 494, + 533 + ], + "spans": [ + { + "bbox": [ + 114, + 508, + 494, + 533 + ], + "type": "text", + "content": "Span 1: 'The room was dimly lit, with the soft hum of machinery filling the silence' " + }, + { + "bbox": [ + 114, + 508, + 494, + 533 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 114, + 508, + 494, + 533 + ], + "type": "text", + "content": " 'The soft hum of machinery filled the room with an unfamiliar yet gentle melody.'" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 114, + 535, + 495, + 570 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 535, + 495, + 570 + ], + "spans": [ + { + "bbox": [ + 114, + 535, + 495, + 570 + ], + "type": "text", + "content": "Span 2: 'I sat beside Lila, squeezing her hand, as the technician swirled the wand over her belly' " + }, + { + "bbox": [ + 114, + 535, + 495, + 570 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 114, + 535, + 495, + 570 + ], + "type": "text", + "content": " 'I sat beside Lila, squeezing her hand as she lay on the table while the technician swirled the wand over her rounded belly.'" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 115, + 590, + 282, + 603 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 590, + 282, + 603 + ], + "spans": [ + { + "bbox": [ + 115, + 590, + 282, + 603 + ], + "type": "text", + 
"content": "Part 3: Implementing Proposed Edits" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 115, + 623, + 286, + 636 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 623, + 286, + 636 + ], + "spans": [ + { + "bbox": [ + 115, + 623, + 286, + 636 + ], + "type": "text", + "content": "((Written Response with Edits Executed))" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 646, + 506, + 681 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 646, + 506, + 681 + ], + "spans": [ + { + "bbox": [ + 104, + 646, + 506, + 681 + ], + "type": "text", + "content": "Table 8: Example COT editing instruction based on a sample from the LAMP dataset. The model is instructed to: (1) identify problematic spans, (2) rewrite the spans to improve writing, (3) execute all edits into the edited response."
0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 244, + 82, + 369, + 164 + ], + "blocks": [ + { + "bbox": [ + 244, + 82, + 369, + 164 + ], + "lines": [ + { + "bbox": [ + 244, + 82, + 369, + 164 + ], + "spans": [ + { + "bbox": [ + 244, + 82, + 369, + 164 + ], + "type": "image", + "image_path": "431850aee3dd501e63f57f3e2ba338586b06f6ef5fc21563d10e350381277019.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 237, + 564, + 373, + 576 + ], + "lines": [ + { + "bbox": [ + 237, + 564, + 373, + 576 + ], + "spans": [ + { + "bbox": [ + 237, + 564, + 373, + 576 + ], + "type": "text", + "content": "Figure 8: Annotation interface" + } + ] + } + ], + "index": 34, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 372, + 82, + 497, + 163 + ], + "blocks": [ + { + "bbox": [ + 372, + 82, + 497, + 163 + ], + "lines": [ + { + "bbox": [ + 372, + 82, + 497, + 163 + ], + "spans": [ + { + "bbox": [ + 372, + 82, + 497, + 163 + ], + "type": "image", + "image_path": "67e9bb817d878c15e6dda80357e8baa4e7b2fac21cb321a90be633ef7affed57.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 114, + 178, + 239, + 261 + ], + "blocks": [ + { + "bbox": [ + 114, + 178, + 239, + 261 + ], + "lines": [ + { + "bbox": [ + 114, + 178, + 239, + 261 + ], + "spans": [ + { + "bbox": [ + 114, + 178, + 239, + 261 + ], + "type": "image", + "image_path": "83564d9a51dbd08e07b8686abce714c359b3933b4ec07dbf8b7cc8e803428fe3.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 281, + 506, + 327 + ], + "lines": [ + { + "bbox": [ + 104, + 281, + 506, + 327 + ], + "spans": [ + { + "bbox": [ + 104, + 281, + 506, + 327 + ], + "type": "text", + "content": "Figure 7: Breakdown of results of the expert annotation we conducted for each of the three domains 
(fiction, nonfiction, marketing) and the two models (GPT-4o, LLama 3.1 70b). Overall, WQRM selection was most aligned with expert preference in the Fiction domain, and for GPT-4o generations." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 244, + 178, + 368, + 261 + ], + "blocks": [ + { + "bbox": [ + 244, + 178, + 368, + 261 + ], + "lines": [ + { + "bbox": [ + 244, + 178, + 368, + 261 + ], + "spans": [ + { + "bbox": [ + 244, + 178, + 368, + 261 + ], + "type": "image", + "image_path": "49e6ff59e1e7152046f36c227c36b4c8b74aeb4483e8bba762d084bc5492a284.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 373, + 178, + 497, + 261 + ], + "blocks": [ + { + "bbox": [ + 373, + 178, + 497, + 261 + ], + "lines": [ + { + "bbox": [ + 373, + 178, + 497, + 261 + ], + "spans": [ + { + "bbox": [ + 373, + 178, + 497, + 261 + ], + "type": "image", + "image_path": "0b9d340951461e10644d5f6940b5ec13fbd9689a7a99f4f2349125dd2c20d233.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "bbox": [ + 114, + 348, + 231, + 361 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 348, + 231, + 361 + ], + "spans": [ + { + "bbox": [ + 114, + 348, + 231, + 361 + ], + "type": "text", + "content": "Writing Sample Judgment" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 114, + 366, + 181, + 376 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 366, + 181, + 376 + ], + "spans": [ + { + "bbox": [ + 114, + 366, + 181, + 376 + ], + "type": "text", + "content": "Writing Instruction:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 114, + 378, + 444, + 385 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 378, + 444, + 385 + ], + "spans": [ + { + "bbox": [ + 114, + 378, + 444, + 385 + ], + "type": 
"text", + "content": "A daughter watches their mother during her final day of life that coincides with St Patricks Day in a Duluth hospital, observing her persistently open blue eyes as she transitions from life to death with a fierce desire to remain alive." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 118, + 407, + 282, + 416 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 118, + 407, + 282, + 416 + ], + "spans": [ + { + "bbox": [ + 118, + 407, + 282, + 416 + ], + "type": "text", + "content": "Please rank these writing samples from most to least preferred:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 117, + 418, + 134, + 423 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 117, + 418, + 134, + 423 + ], + "spans": [ + { + "bbox": [ + 117, + 418, + 134, + 423 + ], + "type": "text", + "content": "Sample 1" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 118, + 429, + 239, + 497 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 118, + 429, + 239, + 497 + ], + "spans": [ + { + "bbox": [ + 118, + 429, + 239, + 497 + ], + "type": "text", + "content": "In the hushed corridors of the Duluth hospital, the bustle of St. Patrick's Day celebrations outside felt worlds away. I sat by my mother's bedside, the synthetic hum of medical machines filling the space between us. Her blue eyes, usually so full of life and vibrancy, were steady and intense, flicking to me with a sharpness that defied her frail body. She wanted more—more of what? More moments? More breaths? Her desire was palpable, and knowing what to do with it felt like holding fragile glass shards. How to speak? What to say? Inside me, everything felt cramped, too many things living at once without space. From the window behind us, I saw the darkness become green with celebration in reflection, and yet I continued to look at her. Her eyes. Her eyes were everything to me now. 
In those final hours, her gaze cemented in me her love, resilience, and bravery. The world outside no longer mattered." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 118, + 514, + 139, + 518 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 118, + 514, + 139, + 518 + ], + "spans": [ + { + "bbox": [ + 118, + 514, + 139, + 518 + ], + "type": "text", + "content": "Rank Sample 1" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 118, + 520, + 169, + 536 + ], + "type": "list", + "angle": 0, + "index": 18, + "blocks": [ + { + "bbox": [ + 118, + 520, + 168, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 118, + 520, + 168, + 525 + ], + "spans": [ + { + "bbox": [ + 118, + 520, + 168, + 525 + ], + "type": "text", + "content": "Most Preferred Writing Option" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 118, + 526, + 169, + 531 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 118, + 526, + 169, + 531 + ], + "spans": [ + { + "bbox": [ + 118, + 526, + 169, + 531 + ], + "type": "text", + "content": "Second Favorite Writing Option" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 118, + 532, + 168, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 118, + 532, + 168, + 536 + ], + "spans": [ + { + "bbox": [ + 118, + 532, + 168, + 536 + ], + "type": "text", + "content": "Least Preferred Writing Option" + } + ] + } + ], + "index": 17 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 121, + 542, + 148, + 547 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 121, + 542, + 148, + 547 + ], + "spans": [ + { + "bbox": [ + 121, + 542, + 148, + 547 + ], + "type": "text", + "content": "Submit Rankings" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 244, + 418, + 259, + 423 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 244, + 418, + 259, + 423 + ], + "spans": [ + { + "bbox": [ + 244, + 418, + 259, + 423 + ], + "type": "text", + 
"content": "Sample 2" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 244, + 429, + 365, + 497 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 244, + 429, + 365, + 497 + ], + "spans": [ + { + "bbox": [ + 244, + 429, + 365, + 497 + ], + "type": "text", + "content": "In the hushed corridors of the Duluth hospital, the bustle of St. Patrick's Day celebrations outside felt worlds away. I sat by my mother's bedside, the synthetic hum of medical machines filling the space between us. Her blue eyes, usually so full of life and vibrancy, were steady and intense, flicking to me with a sharpness that defied her frail body. It was as if she was silently insisting on one more moment, one more breath. Her desire to stay with me was palpable, wrapping us both in a fragile embrace. I wanted to speak, to reassure her, but the words felt caught in the back of my throat, tangled with emotions I wasn't ready to unpack. The world outside turned shades of green in celebration, yet inside, my focus was drawn entirely to the fierce resolve in her gaze. In those final hours, her eyes told stories of love, resilience, and an unwavering fight to anchor herself in this world just a little longer." 
+ } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 244, + 514, + 264, + 518 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 244, + 514, + 264, + 518 + ], + "spans": [ + { + "bbox": [ + 244, + 514, + 264, + 518 + ], + "type": "text", + "content": "Rank Sample 2" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 244, + 520, + 295, + 536 + ], + "type": "list", + "angle": 0, + "index": 26, + "blocks": [ + { + "bbox": [ + 244, + 520, + 294, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 244, + 520, + 294, + 525 + ], + "spans": [ + { + "bbox": [ + 244, + 520, + 294, + 525 + ], + "type": "text", + "content": "Most Preferred Writing Option" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 244, + 526, + 295, + 531 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 244, + 526, + 295, + 531 + ], + "spans": [ + { + "bbox": [ + 244, + 526, + 295, + 531 + ], + "type": "text", + "content": "Second Favorite Writing Option" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 244, + 532, + 294, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 244, + 532, + 294, + 536 + ], + "spans": [ + { + "bbox": [ + 244, + 532, + 294, + 536 + ], + "type": "text", + "content": "Least Preferred Writing Option" + } + ] + } + ], + "index": 25 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 370, + 418, + 385, + 423 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 370, + 418, + 385, + 423 + ], + "spans": [ + { + "bbox": [ + 370, + 418, + 385, + 423 + ], + "type": "text", + "content": "Sample 3" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 370, + 430, + 490, + 486 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 370, + 430, + 490, + 486 + ], + "spans": [ + { + "bbox": [ + 370, + 430, + 490, + 486 + ], + "type": "text", + "content": "In the corridors of the Duluth hospital, it was St. 
Patrick's Bed, but all the bustle and noise outside felt worlds away. I sat by my mother's bedside. The hum of the machines filled the silence between us. Her blue eyes flicked to me with an intensity that defied her frail body. She was silently insisting on one more moment, one more breath. Her desire to stay with me was almost tangible. I wanted to speak, to reassure her, but the words felt caught in the back of my throat, tangled. The world outside turned in festive shades of green in celebration, yet inside, my focus was drawn entirely to the fierce resolve in her gaze. Those final hours, the love we shared, her resilience, and her fight to stay tethered to our world remain imprinted on my mind to this day." + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 370, + 515, + 390, + 519 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 370, + 515, + 390, + 519 + ], + "spans": [ + { + "bbox": [ + 370, + 515, + 390, + 519 + ], + "type": "text", + "content": "Rank Sample 3" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 370, + 520, + 420, + 536 + ], + "type": "list", + "angle": 0, + "index": 33, + "blocks": [ + { + "bbox": [ + 370, + 520, + 419, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 370, + 520, + 419, + 525 + ], + "spans": [ + { + "bbox": [ + 370, + 520, + 419, + 525 + ], + "type": "text", + "content": "Most Preferred Writing Option" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 370, + 526, + 420, + 531 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 370, + 526, + 420, + 531 + ], + "spans": [ + { + "bbox": [ + 370, + 526, + 420, + 531 + ], + "type": "text", + "content": "Second Favorite Writing Option" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 370, + 532, + 420, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 370, + 532, + 420, + 536 + ], + "spans": [ + { + "bbox": [ + 370, + 532, + 420, + 536 + ], + "type": "text", + "content": "Least 
Preferred Writing Option" + } + ] + } + ], + "index": 32 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 105, + 588, + 453, + 601 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 588, + 453, + 601 + ], + "spans": [ + { + "bbox": [ + 105, + 588, + 453, + 601 + ], + "type": "text", + "content": "A.11 Better Calibrated WQRM model for Content and Quality Experiment" + } + ] + } + ], + "index": 35 + }, + { + "bbox": [ + 104, + 609, + 506, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 609, + 506, + 712 + ], + "spans": [ + { + "bbox": [ + 104, + 609, + 506, + 712 + ], + "type": "text", + "content": "Since WQRM was only trained on samples from LAMP, which consists of AI-generated paragraphs edited by MFA students, it doesn't fully know how to reward higher-quality human writing. For this purpose, we added 100 paragraphs written by 5 award-winning authors (20 each) to our training data. We chose 5 authors who were part of the Style Mimic data. Each paragraph written by an award-winning author was assigned a score of 10.0. Even within writing from trained professionals, there is significant variability. To address this we source an additional 80 independent paragraphs written by MFA students published in prestigious literary magazines such as Electric Lit, Joyland, Paris Review and add to our training data. Each paragraph written by an MFA student was assigned a score of " + }, + { + "bbox": [ + 104, + 609, + 506, + 712 + ], + "type": "inline_equation", + "content": "7.5^{7}" + }, + { + "bbox": [ + 104, + 609, + 506, + 712 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 36 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 116, + 720, + 468, + 733 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 116, + 720, + 468, + 733 + ], + "spans": [ + { + "bbox": [ + 116, + 720, + 468, + 733 + ], + "type": "text", + "content": "This was a design decision where 5 is average and 10 is the best, and 7.5 is a mid-point." + } + ] + } + ], + "index": 37 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "22" + } + ] + } + ], + "index": 38 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 21 + }, + { + "para_blocks": [ + { + "bbox": [ + 106, + 81, + 548, + 269 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 81, + 548, + 269 + ], + "spans": [ + { + "bbox": [ + 106, + 81, + 548, + 269 + ], + "type": "text", + "content": "At 22, I handed my first completed novel to my father, not fully aware of the meaning it held for both of us. He took it, eyes softening as he turned the pages, lingering just enough to let me know he saw more than words. Then came the moment that stayed with me: an embrace that spoke volumes, a quiet prediction slipping past his lips, that one day I'd earn a place among the literary greats. Somehow, those words anchored themselves in my mind, reassuring in their sureness. Through the highs and lows of my writing career, his faith in me never wavered, echoing in my thoughts whenever doubt crept in. 
His death in 2002 marked an inevitable yet profound shift-suddenly, the world felt heavier. Sorting through his things, I found the suitcase he left me, an ordinary object now imbued with significance. Inside were scattered remnants of our shared past, but it was that embrace, and his words, that lingered strongest. He had seen something in me that I hadn't yet grasped myself, and for that, those moments serve as an enduring source of comfort and motivation. Years later, when I finally stood on stage accepting the literary prize he had foreseen, I wished he could have been there to witness it, though in a way he already had been all along. Dreams can be crowded with voices that spur us on or pull us back, but his was the one that guided me, quiet and steady. As I continue to write, the memory of my father's belief remains a compass, gently reminding me of where I've been and where I might yet go \"score\": 3.30" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 106, + 270, + 533, + 445 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 270, + 533, + 445 + ], + "spans": [ + { + "bbox": [ + 106, + 270, + 533, + 445 + ], + "type": "text", + "content": "At 22, I handed my first completed novel to my father, and in that moment I became aware of the significance it held for us both. He took it and began to read, lingering just enough to let me know he saw more than words. Afterward, he embraced me and said one day I'd earn a place among the literary greats. Before then, he had said little about my writing, and these words anchored themselves in my mind, reassuring in their sureness. He had never said anything like it before, but he continued to echo that faith through the highs and lows of my career. His death in 2002 marked an inevitable yet profound shift. Suddenly the world felt heavier. Sorting through his things, I found the suitcase he left me, an ordinary object now imbued with significance. 
Inside were scattered remnants of our shared past, but it was that embrace and his words that lingered strongest. He had seen something in me that I hadn't yet grasped myself, and those moments served as an enduring source of comfort and motivation. Years later, when I finally stood on stage accepting the literary prize—the only prize—he had foreseen, I wished he could have been there to witness it. Dreams can be hostile to our hopes, but his was the one that guided me; his quietness was steady. Now, the memory of my father's belief remains a compass; I wish I could send him an update. \"score\": 4.43" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 106, + 448, + 523, + 572 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 448, + 523, + 572 + ], + "spans": [ + { + "bbox": [ + 106, + 448, + 523, + 572 + ], + "type": "text", + "content": "At 22, I handed my first completed novel to my father, not fully prepared for what it might mean. He took it, eyes softening as he turned the pages, lingering long enough, I felt, to take in the feeling of things. Finally, we embraced, and he leaned back to say what I hadn't dared to hope—that one day I'd be among the literary greats. No matter how tough things got or how much death loomed over me, I was comforted by those words, almost sure of their truth. His death in 2002 brought with it an unwelcome heaviness. I found significance even in his old suitcase, which I kept, shuffling through it fondly. There were plenty of other mementos, too, but I'd always have the memory of that embrace, the words. Years later, when I finally stood on stage accepting the literary prize he'd foreseen, I wished he could have been there to witness it. Whatever noise came, whatever doubt, his voice led me quietly out of it. I swear I can still hear him now. 
\"score\": 6.84" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 106, + 582, + 504, + 605 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 582, + 504, + 605 + ], + "spans": [ + { + "bbox": [ + 106, + 582, + 504, + 605 + ], + "type": "text", + "content": "Table 9: (a) First Draft (b) Random Edit (c) Best Edit along with their rewards assigned by WQRM." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 106, + 625, + 504, + 660 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 625, + 504, + 660 + ], + "spans": [ + { + "bbox": [ + 106, + 625, + 504, + 660 + ], + "type": "text", + "content": "Publication at a venue already means these paragraphs have undergone scrutiny and are of decent quality. After adding these 180 samples to LAMP-PR training set, we retrained WQRM." + } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "23" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 22 + }, + { + "para_blocks": [ + { + "bbox": [ + 111, + 321, + 520, + 469 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 321, + 520, + 469 + ], + "spans": [ + { + "bbox": [ + 111, + 321, + 520, + 469 + ], + "type": "text", + "content": "This paragraph is written in the first person and revolves around a family Christmas gathering. 
The narrator reflects on how her father gave her a generous cash gift and invited her to Disney World with his new family. The narrator declined, fabricating an excuse about school, despite feeling the emotional distance growing between her, her father, and his new partner, Chitra. The narrators half-sisters, Rupa and Piu, were upset by this decision, not understanding why she doesn't want to join them. The narrator felt a sense of responsibility to uphold the memory of her late mother, just as Rupa and Piu symbolized their own father's legacy, while also sensing that both Chitra and her father are relieved by her decision to stay behind. The paragraph captures the emotional complexities of blended family dynamics, grief, and feelings of displacement during what should be a celebratory time." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 246, + 478, + 365, + 490 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 478, + 365, + 490 + ], + "spans": [ + { + "bbox": [ + 246, + 478, + 365, + 490 + ], + "type": "text", + "content": "Table 10: Detailed Content" + } + ] + } + ], + "index": 2, + "type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "24" + } + ] + } + ], + "index": 3 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 23 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07709/a6a116d9-c584-4299-91c7-a46bfdb58f50_content_list.json 
b/data/2025/2504_07xxx/2504.07709/a6a116d9-c584-4299-91c7-a46bfdb58f50_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..4a8815a25840c770a7f7b7565bf003c1acf2c212 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07709/a6a116d9-c584-4299-91c7-a46bfdb58f50_content_list.json @@ -0,0 +1,1475 @@ +[ + { + "type": "text", + "text": "Integrated Sensing and Communications for Pinching-Antenna Systems (PASS)", + "text_level": 1, + "bbox": [ + 140, + 69, + 854, + 119 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Zheng Zhang, Zhaolin Wang, Xidong Mu Bingtao He, Jian Chen, and Yuanwei Liu", + "bbox": [ + 171, + 127, + 797, + 143 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract—An integrated sensing and communication (ISAC) design for pinching antenna systems (PASS) is proposed, where the pinching antennas are deployed to establish reliable line-of-sight communication and sensing links. More particularly, a separated ISAC design is proposed for the two-waveguide PASS, where one waveguide is used to emit the information-bearing signals for ISAC transmission while the other waveguide is used to receive the reflected echo signals. Based on this framework, a penalty-based alternating optimization algorithm is proposed to maximize the illumination power as well as ensure the communication quality-of-service requirement. Numerical results demonstrate that the proposed PASS-ISAC scheme outperforms the conventional antenna scheme.", + "bbox": [ + 73, + 164, + 491, + 329 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Index Terms—Beamforming design, integrated sensing and communication, pinching antenna systems.", + "bbox": [ + 75, + 335, + 491, + 362 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "I. 
INTRODUCTION", + "text_level": 1, + "bbox": [ + 215, + 388, + 351, + 402 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Fuelled by the burgeoning demands for massive data transmission and pervasive network coverage, flexible antennas have emerged as a promising technique for sixth-generation (6G) cellular systems. Benefiting from their ability to reconfigure the wireless channel, flexible antennas can significantly enhance the throughput of wireless networks. However, traditional flexible antennas (e.g., movable antennas [1] and fluid antennas [2]) merely permit the adjustment of the antenna position within a range of orders of magnitude comparable to the carrier wavelength. Against this backdrop, the pinching antenna has emerged [3], which is a type of dielectric waveguide-based leaky wave antenna. By applying dielectric particles to a particular point on the dielectric waveguide, a pinching antenna can be activated to establish EM radiation fields and form a communication area [4]. Then, the EM signal inside the dielectric waveguide will be radiated from the pinching antenna to free space with a defined phase shift adjustment (referred to as the pinching beamformer). 
Notably, as the dielectric waveguide can be pinched at any position to radiate radio waves, the pinching antenna can flexibly move along the dielectric waveguide over a length of dozens of meters, thereby relocating to the closest position to the receiver and creating reliable LoS links.", + "bbox": [ + 73, + 410, + 491, + 758 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "To enable emerging applications, such as autonomous driving, extended reality, and the Metaverse, sensing functionality is recognized as an important indicator of future networks.", + "bbox": [ + 73, + 758, + 491, + 804 + ], + "page_idx": 0 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "Zheng Zhang, Bingtao He, and Jian Chen are with the School of Telecommunications Engineering, Xidian University, Xi'an 710071, China (e-mail: zhang_688@stu.xidian.edu.cn; bthe@xidian.edu.cn; jianchen@mail.xidian.edu.cn).", + "Zhaolin Wang is with the School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, U.K. (e-mail: zhaolin.wang@qmul.ac.uk).", + "Xidong Mu is with Queen's University Belfast, Belfast, BT3 9DT, U.K. (email: x.mu@qub.ac.uk)", + "Yuanwei Liu is with the Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong (e-mail: yuanwei@hku.hk)." + ], + "bbox": [ + 73, + 816, + 491, + 945 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/0b2360b47c060dad9e520bb285cea2d5b27573ca11ad770199a091aea89544a9.jpg", + "image_caption": [ + "Fig. 1. The separated ISAC design for PASS." 
+ ], + "image_footnote": [], + "bbox": [ + 514, + 164, + 913, + 319 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In pursuit of this vision, the integrated sensing and communication (ISAC) technology has drawn significant attention recently [5], which aims to leverage the cellular network hardware platforms and dedicated signal processing algorithms to achieve the incorporation of communication and sensing functionalities. Recently, it has been claimed that conducting ISAC transmission in the pinching antenna systems (PASS) can further upgrade the communication and sensing (C&S) performance of the network [6]. On the one hand, the pinching antenna can be flexibly repositioned to augment the echo signal energy. On the other hand, the wide-range mobility characteristic of pinching antennas results in an antenna aperture spanning dozens of meters. It inherently enables nearfield sensing, e.g., the possibility of simultaneous angular and distance information estimation and even target velocity sensing, thereby offering a more comprehensive and accurate sensing of the surrounding environment. Nevertheless, as of the present moment, research in the PASS-ISAC remains conspicuously absent.", + "bbox": [ + 501, + 358, + 921, + 646 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Motivated by the above, this paper proposes a separated ISAC design for PASS. To elaborate, the base station (BS) is connected with two dielectric waveguides, where one waveguide is used to transmit the downlink signals, while the other is employed to collect the reflected echo signals from the target. We aim to maximize the illumination power at the target while satisfying the quality-of-service (QoS) requirement of the communication user by optimizing the pinching beamforming offered by the mobility of pinching antennas. 
A penalty-based alternating optimization (AO) algorithm is proposed to handle the non-convex optimization problem, where the positions of pinching antennas are updated in an element-wise manner. Numerical results evaluate the superiority of the proposed scheme over the baseline schemes.", + "bbox": [ + 503, + 646, + 921, + 858 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "II. SYSTEM MODEL AND PROBLEM FORMULATION", + "text_level": 1, + "bbox": [ + 531, + 878, + 893, + 892 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "As shown in Fig. 1, we consider a PASS-ISAC system, where a dual-function BS conveys with a single-antenna communication user while sensing a point-like target. The", + "bbox": [ + 503, + 898, + 921, + 946 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "1", + "bbox": [ + 911, + 30, + 919, + 40 + ], + "page_idx": 0 + }, + { + "type": "aside_text", + "text": "arXiv:2504.07709v3 [cs.IT] 12 May 2025", + "bbox": [ + 22, + 239, + 58, + 680 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "BS is connected with two dielectric waveguides of length $L$ , each of which consists of $N$ pinching antennas. To achieve the simultaneous C&S transmission, a separated ISAC design is proposed. Specifically, the downlink information-bearing signals are emitted from one waveguide (referred to as transmitting antennas). Then, the reflected echoes from the target would be collected at the other waveguide (referred to as receiving antennas), which are further transmitted to the BS for parameter estimation.", + "bbox": [ + 73, + 68, + 491, + 204 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "A three-dimensional (3D) coordination system is considered, where two dielectric waveguides extended from the BS are assumed to be parallel to the x-axis with respect to the x-o-y plane at a height $d$ . 
The position of the $n$ -th pinching antenna distributed along the transmitting and receiving dielectric waveguides can be denoted as $\\psi_{n}^{\\mathrm{p}} = (x_{n}^{\\mathrm{p}},0,d)$ and $\\psi_{n}^{\\mathrm{q}} = (x_{n}^{\\mathrm{q}},y^{\\mathrm{q}},d)$ . The communication user and sensing target are located in the x-o-y plane. Let $r_{\\mathrm{c}}$ and $\\varphi_{\\mathrm{c}}$ denote the distance and the azimuth angle of the communication user relative to the origin of the coordinate system. Thus, the coordinates of communication user is given by $\\psi^{\\mathrm{c}} = (r_{\\mathrm{c}}\\cos \\varphi_{\\mathrm{c}},r_{\\mathrm{c}}\\sin \\varphi_{\\mathrm{c}},0)$ . Similarly, the target is located in $\\psi^{\\mathrm{s}} = (r_{\\mathrm{s}}\\cos \\varphi_{\\mathrm{s}},r_{\\mathrm{s}}\\sin \\varphi_{\\mathrm{s}},0)$ . Furthermore, we assume the target is a static node or moves at a low speed. Thus, the Doppler effect is neglected in this work.", + "bbox": [ + 75, + 205, + 493, + 433 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "A. Channel Model", + "text_level": 1, + "bbox": [ + 73, + 450, + 207, + 463 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In the considered network, the pinching antennas are non-uniformly disposed on the dielectric waveguide covering the entire range of the user's activity, which implies that the aperture of the pinching antennas may have the same order of magnitude as the signal transmission distance. Without loss of accuracy, we adopt the spherical-wave-based nearfield channel model, where only the LoS path is considered. 
Consequently, the distance from the $n$ -th pinching antenna to the target is given by", + "bbox": [ + 73, + 469, + 491, + 604 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} r _ {n} ^ {\\zeta} \\left(r _ {\\zeta}, \\varphi_ {\\zeta}\\right) = \\left\\| \\psi^ {\\zeta} - \\psi_ {n} ^ {\\mathrm {p}} \\right\\| \\\\ = \\sqrt {r _ {\\zeta} ^ {2} - 2 r _ {\\zeta} \\cos \\varphi_ {\\zeta} x _ {n} ^ {\\mathrm {p}} + \\left(x _ {n} ^ {\\mathrm {p}}\\right) ^ {2} + d ^ {2}}, \\quad \\zeta \\in \\{\\mathrm {s}, \\mathrm {c} \\}, \\tag {1} \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 76, + 611, + 488, + 670 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Thus, the free space channel vector from the transmitting antennas to the target and the communication user can be expressed as", + "bbox": [ + 73, + 676, + 491, + 722 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {h} _ {\\mathrm {s}} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) = \\left[ \\frac {\\eta^ {\\frac {1}{2}} e ^ {- \\mathcal {I} \\frac {2 \\pi}{\\lambda} r _ {1} ^ {\\mathrm {s}} \\left(r , \\varphi_ {\\mathrm {s}}\\right)}}{r _ {1} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}}, \\varphi_ {\\mathrm {s}}\\right)}, \\dots , \\frac {\\eta^ {\\frac {1}{2}} e ^ {- \\mathcal {I} \\frac {2 \\pi}{\\lambda} r _ {N} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}} , \\varphi_ {\\mathrm {s}}\\right)}}{r _ {N} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}}, \\varphi_ {\\mathrm {s}}\\right)} \\right] ^ {H}, \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 84, + 727, + 488, + 771 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {h} _ {\\mathrm {c}} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) = \\left[ \\frac {\\eta^ {\\frac {1}{2}} e ^ {- j \\frac {2 \\pi}{\\lambda} r _ {1} ^ {\\mathrm {c}} (r , \\varphi_ {\\mathrm {c}})}}{r _ {1} ^ {\\mathrm {c}} (r , \\varphi_ {\\mathrm {c}})}, \\dots , \\frac 
{\\eta^ {\\frac {1}{2}} e ^ {- j \\frac {2 \\pi}{\\lambda} r _ {N} ^ {\\mathrm {c}} (r , \\varphi_ {\\mathrm {c}})}}{r _ {N} ^ {\\mathrm {c}} (r , \\varphi_ {\\mathrm {c}})} \\right] ^ {H}, \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 84, + 784, + 488, + 829 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "where $\\mathbf{x}^{\\mathrm{p}} = [x_1^{\\mathrm{p}},\\dots ,x_N^{\\mathrm{p}}]$ denotes the coordinates of pinching antennas, $\\lambda = \\frac{c}{f_{\\mathrm{c}}}$ denotes the wavelength, $f_{\\mathrm{c}}$ is the frequency of the carrier wave, $\\eta = \\frac{c^2}{16\\pi^2f_c^2}$ , and $c$ denotes the speed of light.", + "bbox": [ + 73, + 834, + 491, + 897 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this paper, the BS aims to utilize the communication signal to achieve simultaneous communication and target sensing. Consider a coherent time block of length $T$ , the", + "bbox": [ + 73, + 898, + 491, + 946 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "communication channel condition and the sensing parameters are assumed to remain unchanged during one coherent time block. Thus, the emitted signal at the $t$ -th time slot is given by $s(t) \\in \\mathbb{C}$ , which is assumed to be normalized and independently distributed, i.e., $\\mathbb{E}\\{|s(t)|^2\\} = 1$ and $\\mathbb{E}\\{s(t)s^*(\\bar{t})\\} = 0$ . 
On receiving $s(t)$ , the dielectric waveguide radiates the signal $\\mathbf{x}(t) = \\sqrt{P_{\\mathrm{T}}} \\mathbf{g}(\\mathbf{x}^{\\mathrm{p}}) s(t)$ , where $\\mathbf{g}(\\mathbf{x}^{\\mathrm{p}})$ denotes the in-waveguide channel and can be expressed as", + "bbox": [ + 501, + 68, + 921, + 189 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {g} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) = \\left[ \\sqrt {\\alpha_ {1}} e ^ {- \\jmath \\theta_ {1}}, \\dots , \\sqrt {\\alpha_ {N}} e ^ {- \\jmath \\theta_ {N}} \\right] ^ {T}, \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 571, + 198, + 921, + 220 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "where $\\theta_{n}$ denotes the radiation phase shift at the $n$ -th pinching antenna, and $P_{\\mathrm{T}}$ denotes the transmit power at the BS. $\\alpha_{n}$ denotes the power allocation coefficients at the $n$ -th pinching antenna, which can be modeled as the equal power allocation model $\\sqrt{\\alpha_n} = \\sqrt{\\frac{\\alpha_s}{N}}$ [4] or the proportional power allocation model $\\sqrt{\\alpha_n} = \\delta (\\sqrt{1 - \\delta^2})^{n - 1}$ [7]. $\\delta = \\sqrt{1 - (1 - \\alpha_s)^{\\frac{1}{N}}}$ represents the proportional coefficient, and $\\alpha_{s} = \\sum_{n = 1}^{N}\\alpha_{n}$ denotes the radiation coefficient of pinching antennas. For ease of implementation, the equal power allocation model is considered in this paper. $\\theta_{n}$ is defined by $2\\pi \\eta_{\\mathrm{eff}}\\frac{\\|\\psi_0^{\\mathrm{p}} - \\psi_n^{\\mathrm{p}}\\|}{\\lambda}$ , where $\\psi_0^{\\mathrm{p}}$ denotes the location of the feed point, and $\\eta_{\\mathrm{eff}}$ denotes the effective refractive index of the dielectric waveguide.", + "bbox": [ + 503, + 229, + 921, + 422 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "B. 
Signal Model", + "text_level": 1, + "bbox": [ + 504, + 446, + 622, + 462 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "With the above channel model, it is readily observed that the positions of pinching antennas have a significant impact on both the free space channel $\\{\\mathbf{h}_{\\mathrm{s}}(\\mathbf{x}^{\\mathrm{p}}), \\mathbf{h}_{\\mathrm{c}}(\\mathbf{x}^{\\mathrm{p}})\\}$ and the in-waveguide channel $\\mathbf{g}(\\mathbf{x}^{\\mathrm{p}})$ . As a result, it becomes possible to establish favorable wireless propagation while manipulating the radiated characteristics of signals by altering the positions of pinching antennas in the PASS. To characterize the two aspects of the signal reconfiguration capabilities of pinching antennas, we refer to it as pinching beamforming in this paper. Let $\\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})$ and $\\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})$ denote the pinching beamforming for the communication user and the sensing target, which are also the functions of $\\mathbf{x}^{\\mathrm{p}}$ . 
$\\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})$ and $\\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})$ are given by", + "bbox": [ + 501, + 468, + 921, + 651 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {w} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) = \\left[ \\frac {e ^ {- j \\left(\\frac {2 \\pi}{\\lambda} r _ {1} ^ {\\mathrm {c}} \\left(r _ {\\mathrm {c}} , \\varphi_ {\\mathrm {c}}\\right) + \\theta_ {1}\\right)}}{\\frac {1}{\\sqrt {\\alpha_ {1}}} r _ {1} ^ {\\mathrm {c}} \\left(r _ {\\mathrm {c}} , \\varphi_ {\\mathrm {c}}\\right)}, \\dots , \\frac {e ^ {- j \\left(\\frac {2 \\pi}{\\lambda} r _ {N} ^ {\\mathrm {c}} \\left(r _ {\\mathrm {c}} , \\varphi_ {\\mathrm {c}}\\right) + \\theta_ {N}\\right)}}{\\frac {1}{\\sqrt {\\alpha_ {N}}} r _ {N} ^ {\\mathrm {c}} \\left(r _ {\\mathrm {c}} , \\varphi_ {\\mathrm {c}}\\right)} \\right] ^ {T}, \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 519, + 657, + 919, + 704 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {v} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) = \\left[ \\frac {e ^ {- \\jmath \\left(\\frac {2 \\pi}{\\lambda} r _ {1} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}} , \\varphi_ {\\mathrm {s}}\\right) + \\theta_ {1}\\right)}}{\\frac {1}{\\sqrt {\\alpha_ {1}}} r _ {1} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}} , \\varphi_ {\\mathrm {s}}\\right)}, \\dots , \\frac {e ^ {- \\jmath \\left(\\frac {2 \\pi}{\\lambda} r _ {N} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}} , \\varphi_ {\\mathrm {s}}\\right) + \\theta_ {N}\\right)}}{\\frac {1}{\\sqrt {\\alpha_ {N}}} r _ {N} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}} , \\varphi_ {\\mathrm {s}}\\right)} \\right] ^ {T}. \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 522, + 722, + 919, + 767 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this paper, we consider an ideal activation model of the pinching antenna, i.e., continuous activation. 
It indicates that the pinching antennas can be activated at any position of the dielectric waveguide. Thus, the positions of pinching antennas satisfy", + "bbox": [ + 503, + 773, + 921, + 849 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {x} ^ {\\mathrm {p}} \\in \\mathcal {X} = \\left\\{\\left| x _ {n} ^ {\\mathrm {p}} - x _ {m} ^ {\\mathrm {p}} \\right| \\geq \\Delta x (n \\neq m), x _ {n} ^ {\\mathrm {p}} \\in \\left[ - \\frac {L}{2}, \\frac {L}{2} \\right] \\right\\}, \\tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 509, + 857, + 919, + 905 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "where $\\Delta x$ represents the minimum antenna space between two adjacent pinching antennas.", + "bbox": [ + 503, + 914, + 921, + 945 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 911, + 30, + 919, + 40 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "1) Communication Performance Metric: With the aforementioned signal model, the received signals at the communication user are given by", + "bbox": [ + 73, + 69, + 491, + 114 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} y (t) = \\sqrt {P _ {\\mathrm {T}}} \\mathbf {h} _ {\\mathrm {c}} ^ {H} (\\mathbf {x} ^ {\\mathrm {p}}) \\mathbf {g} (\\mathbf {x} ^ {\\mathrm {p}}) s (t) + n (t) \\\\ = \\sqrt {P _ {\\mathrm {T}}} \\boldsymbol {\\eta} ^ {H} \\mathbf {w} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) s (t) + n (t), \\tag {8} \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 153, + 119, + 488, + 160 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\pmb {\\eta} = [\\eta^{\\frac{1}{2}},\\dots ,\\eta^{\\frac{1}{2}}]^{T}\\in \\mathbb{C}^{N\\times 1}$ is a constant vector, and $n(t)\\sim \\mathcal{CN}(0,\\sigma^2)$ denotes the additive white Gaussian noise (AWGN) at the communication user. 
Hence, the achievable rate of the communication user is given by", + "bbox": [ + 73, + 166, + 491, + 229 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\nR = \\log_ {2} \\left(1 + \\frac {P _ {\\mathrm {T}} \\left| \\boldsymbol {\\eta} ^ {H} \\mathbf {w} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) \\right| ^ {2}}{\\sigma^ {2}}\\right). \\tag {9}\n$$\n", + "text_format": "latex", + "bbox": [ + 165, + 233, + 491, + 268 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2) Sensing Performance Metric: For target sensing, we adopt the illumination power as the performance metric, which characterizes the received sensing signal power at the target [8]. Thus, the illumination power with respect to azimuth angle $\\varphi_{\\mathrm{s}}$ and distance $r_{\\mathrm{s}}$ is given by", + "bbox": [ + 73, + 272, + 491, + 348 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} P _ {\\mathrm {s}} = \\mathbb {E} \\left\\{\\left| \\sqrt {P _ {\\mathrm {T}}} \\mathbf {h} _ {\\mathrm {s}} ^ {H} (\\mathbf {x} ^ {\\mathrm {p}}) \\mathbf {g} (\\mathbf {x} ^ {\\mathrm {p}}) s (t) \\right| ^ {2} \\right\\} \\\\ = P _ {\\mathrm {T}} \\boldsymbol {\\eta} ^ {H} \\mathbf {v} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) \\mathbf {v} ^ {H} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) \\boldsymbol {\\eta}. \\tag {10} \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 156, + 353, + 488, + 406 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "C. 
Problem Formulation", + "text_level": 1, + "bbox": [ + 75, + 422, + 246, + 436 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In this paper, we aim to maximize the illumination power $P(\\theta_{\\mathrm{s}}, r_{\\mathrm{s}})$ by designing the pinching beamformer, under the transmit power budget and communication QoS requirement, which is given by", + "bbox": [ + 73, + 441, + 491, + 502 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\left(\\mathrm {P} 1\\right) \\quad \\max _ {\\mathbf {x} ^ {\\mathrm {p}}} P _ {\\mathrm {s}} \\tag {11a}\n$$\n", + "text_format": "latex", + "bbox": [ + 222, + 508, + 488, + 530 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\text {s . t .} \\quad R \\geq R _ {\\mathrm {Q o S}}, \\tag {11b}\n$$\n", + "text_format": "latex", + "bbox": [ + 233, + 532, + 488, + 549 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {x} ^ {\\mathrm {p}} \\in \\mathcal {X}, \\tag {11c}\n$$\n", + "text_format": "latex", + "bbox": [ + 267, + 551, + 488, + 566 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $R_{\\mathrm{QoS}}$ denotes the QoS requirement of the communication user. The problem (P1) is challenging to solve due to the quadratic objective function and the coupled variables.", + "bbox": [ + 73, + 575, + 491, + 622 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "III. PINCHING BEAMFORMING OPTIMIZATION", + "text_level": 1, + "bbox": [ + 119, + 637, + 447, + 651 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In this section, we focus on the C&S transmission design by optimizing the pinching beamforming. 
To deal with the coupled optimization variables, a penalty-based AO algorithm is proposed, where $\\{\\mathbf{x}^{\\mathrm{p}}\\}$ is optimized in an element-wise manner.", + "bbox": [ + 73, + 656, + 491, + 729 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To facilitate the optimization, we can rewrite the problem (P1) as", + "bbox": [ + 73, + 731, + 491, + 761 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\left(\\mathrm {P} 2\\right) \\max _ {\\mathbf {x} ^ {\\mathrm {p}}} | \\boldsymbol {\\eta} ^ {H} \\mathbf {v} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) | ^ {2} \\tag {12a}\n$$\n", + "text_format": "latex", + "bbox": [ + 160, + 767, + 488, + 790 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\text {s . t .} \\quad | \\boldsymbol {\\eta} ^ {H} \\mathbf {w} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) | ^ {2} \\geq \\gamma_ {\\mathrm {Q o S}} \\sigma^ {2}, \\tag {12b}\n$$\n", + "text_format": "latex", + "bbox": [ + 210, + 791, + 488, + 810 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n(1 1 c), \\tag {12c}\n$$\n", + "text_format": "latex", + "bbox": [ + 246, + 811, + 488, + 828 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\gamma_{\\mathrm{QoS}} = \\frac{2^{R_{\\mathrm{QoS}} - 1}}{P_{\\mathrm{T}}}$", + "bbox": [ + 73, + 835, + 230, + 854 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In order to deal with the intractable objective and constraints, we consider a penalty-based two-layer framework. To elaborate, we introduce auxiliary variables $\\tilde{\\mathbf{w}}$ and $\\tilde{\\mathbf{v}}$ to replace $\\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})$ and $\\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})$ , respectively. Thus, we have the equality constraints $\\tilde{\\mathbf{w}} = \\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})$ and $\\tilde{\\mathbf{v}} = \\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})$ . 
By relocating the equality constraint to the objective function and serving as a", + "bbox": [ + 73, + 854, + 491, + 946 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "penalty term, the problem (P2) can be equivalently rewritten as", + "bbox": [ + 503, + 69, + 919, + 97 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\left(\\mathrm {P} 3\\right) \\max _ {\\mathbf {x} ^ {\\mathrm {p}}, \\tilde {\\mathbf {w}}, \\tilde {\\mathbf {v}}} | \\boldsymbol {\\eta} ^ {H} \\tilde {\\mathbf {v}} | ^ {2} - \\frac {1}{2 \\varrho} \\chi_ {1} \\left(\\mathbf {x} ^ {\\mathrm {p}}, \\tilde {\\mathbf {w}}, \\tilde {\\mathbf {v}}\\right) \\tag {13a}\n$$\n", + "text_format": "latex", + "bbox": [ + 576, + 106, + 919, + 136 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\text {s . t .} \\quad | \\boldsymbol {\\eta} ^ {H} \\tilde {\\mathbf {w}} | ^ {2} \\geq \\gamma_ {\\mathrm {Q o S}} \\sigma^ {2}, \\tag {13b}\n$$\n", + "text_format": "latex", + "bbox": [ + 635, + 137, + 919, + 156 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\left| \\tilde {\\mathbf {w}} _ {[ n ]} \\right| ^ {2} \\leq \\frac {1}{N r _ {\\operatorname* {m i n} , \\mathrm {c}} ^ {2}}, \\tag {13c}\n$$\n", + "text_format": "latex", + "bbox": [ + 671, + 157, + 919, + 190 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\left| \\tilde {\\mathbf {v}} _ {[ n ]} \\right| ^ {2} \\leq \\frac {1}{N r _ {\\operatorname* {m i n} , \\mathrm {s}} ^ {2}}, \\tag {13d}\n$$\n", + "text_format": "latex", + "bbox": [ + 671, + 191, + 919, + 223 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n(1 1 \\mathrm {c}), \\tag {13e}\n$$\n", + "text_format": "latex", + "bbox": [ + 669, + 227, + 919, + 242 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\chi_{1}(\\mathbf{x}^{\\mathrm{p}},\\tilde{\\mathbf{w}},\\tilde{\\mathbf{v}}) = \\| \\tilde{\\mathbf{w}} -\\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})\\| +\\| 
\\tilde{\\mathbf{v}} -\\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})\\|$ and $\\varrho$ denotes the scaling factor of the penalty terms. Note that to avoid the infinite objective value, we introduce constraints (13c) and (13d), where $r_{\\min ,\\mathrm{c}} = \\sqrt{(r_{\\mathrm{c}}\\sin\\varphi_{\\mathrm{c}})^2 + d^2}$ and $r_{\\min ,\\mathrm{s}} = \\sqrt{(r_{\\mathrm{s}}\\sin\\varphi_{\\mathrm{s}})^2 + d^2}$ denote the lower bounds of the distances from an arbitrary pinching antenna to the communication user and target, respectively. The problem (P3) is equivalent to the problem (P1) as constraints (13c) and (13d) can be obtained from (11c), which restricts the pinching beamforming $\\{\\tilde{\\mathbf{w}},\\tilde{\\mathbf{v}}\\}$ to the feasible region.", + "bbox": [ + 501, + 253, + 921, + 405 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To address the quadratic objective and constraints, we apply the SDR technique to rewrite the problem (P3) as follows.", + "bbox": [ + 503, + 406, + 919, + 436 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\left(\\mathrm {P} 4\\right) \\max _ {\\mathbf {x} ^ {\\mathrm {p}}, \\tilde {\\mathbf {W}}, \\tilde {\\mathbf {V}}} \\operatorname {T r} \\left(\\boldsymbol {\\eta} \\boldsymbol {\\eta} ^ {H} \\tilde {\\mathbf {V}}\\right) - \\frac {1}{2 \\varrho} \\chi_ {2} \\left(\\mathbf {x} ^ {\\mathrm {p}}, \\tilde {\\mathbf {W}}, \\tilde {\\mathbf {V}}\\right) \\tag {14a}\n$$\n", + "text_format": "latex", + "bbox": [ + 539, + 446, + 919, + 477 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\text {s . 
t .} \\quad \\tilde {\\mathbf {W}} _ {[ n, n ]} \\leq \\frac {1}{N r _ {\\min , c} ^ {2}}, \\tag {14b}\n$$\n", + "text_format": "latex", + "bbox": [ + 549, + 479, + 919, + 513 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\tilde {\\mathbf {V}} _ {[ n, n ]} \\leq \\frac {1}{N r _ {\\min , s} ^ {2}}, \\tag {14c}\n$$\n", + "text_format": "latex", + "bbox": [ + 584, + 513, + 919, + 546 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\operatorname {T r} \\left(\\boldsymbol {\\eta} \\boldsymbol {\\eta} ^ {H} \\tilde {\\mathbf {W}}\\right) \\geq \\gamma_ {\\mathrm {Q o S}} \\sigma^ {2}, \\tag {14d}\n$$\n", + "text_format": "latex", + "bbox": [ + 584, + 547, + 919, + 566 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\operatorname {r a n k} (\\tilde {\\mathbf {W}}) = 1, \\operatorname {r a n k} (\\tilde {\\mathbf {V}}) = 1, \\tag {14e}\n$$\n", + "text_format": "latex", + "bbox": [ + 584, + 569, + 919, + 585 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\tilde {\\mathbf {W}} \\succeq \\mathbf {0}, \\tilde {\\mathbf {V}} \\succeq \\mathbf {0}, \\tag {14f}\n$$\n", + "text_format": "latex", + "bbox": [ + 584, + 588, + 919, + 606 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n(1 1 \\mathrm {c}), \\tag {14g}\n$$\n", + "text_format": "latex", + "bbox": [ + 584, + 609, + 919, + 626 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\mathbf{W}(\\mathbf{x}^{\\mathrm{p}}) = \\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})\\mathbf{w}^{H}(\\mathbf{x}^{\\mathrm{p}})$ , $\\tilde{\\mathbf{W}} = \\tilde{\\mathbf{w}}\\tilde{\\mathbf{w}}^{H}$ , $\\mathbf{V}(\\mathbf{x}^{\\mathrm{p}}) = \\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})\\mathbf{v}^{H}(\\mathbf{x}^{\\mathrm{p}})$ , $\\tilde{\\mathbf{V}} = \\tilde{\\mathbf{v}}\\tilde{\\mathbf{v}}^{H}$ , and $\\chi_{2}(\\mathbf{x}^{\\mathrm{p}},\\tilde{\\mathbf{W}},\\tilde{\\mathbf{V}}) = \\| \\tilde{\\mathbf{W}} - 
\\mathbf{W}(\\mathbf{x}^{\\mathrm{p}})\\|_{F} + \\| \\tilde{\\mathbf{V}} - \\mathbf{V}(\\mathbf{x}^{\\mathrm{p}})\\|_{F}$ . To solve the problem (P4), we propose a penalty-based AO algorithm, which alternately optimizes $\\{\\tilde{\\mathbf{W}},\\tilde{\\mathbf{V}}\\}$ and $\\{\\mathbf{x}^{\\mathrm{p}}\\}$ in the inner layer and updates $\\varrho$ in the outer layer.", + "bbox": [ + 503, + 637, + 921, + 728 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "1) Inner layer iteration—subproblem with respect to $\\{\\tilde{\\mathbf{W}},\\tilde{\\mathbf{V}}\\}$ : With the fixed $\\{\\mathbf{x}^{\\mathrm{p}}\\}$ , the problem (P4) is reduced to", + "bbox": [ + 504, + 731, + 919, + 773 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\left(\\mathrm {P} 5\\right) \\max _ {\\tilde {\\mathbf {W}}, \\tilde {\\mathbf {V}}} \\operatorname {T r} \\left(\\boldsymbol {\\eta} \\boldsymbol {\\eta} ^ {H} \\tilde {\\mathbf {V}}\\right) - \\frac {1}{2 \\varrho} \\chi_ {2} \\left(\\mathbf {x} ^ {\\mathrm {p}}, \\tilde {\\mathbf {W}}, \\tilde {\\mathbf {V}}\\right) \\tag {15a}\n$$\n", + "text_format": "latex", + "bbox": [ + 550, + 782, + 919, + 815 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\text {s . t .} \\quad (1 4 b) - (1 4 f). 
\\tag {15b}\n$$\n", + "text_format": "latex", + "bbox": [ + 589, + 818, + 919, + 834 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To handle the rank-one constraints, we introduce non-negative auxiliary variables $\\{\\varpi_1,\\varpi_2\\}$ and employ the difference-of-convex (DC) relaxation method [9] to rewrite (14e) as", + "bbox": [ + 503, + 845, + 921, + 892 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\left\\{ \\begin{array}{l} \\Re (\\operatorname {T r} (\\tilde {\\mathbf {W}} ^ {H} (\\mathbf {I} - \\tilde {\\mathbf {w}} _ {\\max } \\tilde {\\mathbf {w}} _ {\\max } ^ {H}))) \\leq \\varpi_ {1}, \\\\ \\Re (\\operatorname {T r} (\\tilde {\\mathbf {V}} ^ {H} (\\mathbf {I} - \\tilde {\\mathbf {v}} _ {\\max } \\tilde {\\mathbf {v}} _ {\\max } ^ {H}))) \\leq \\varpi_ {2}, \\end{array} \\quad i \\in \\{1, 2 \\}, \\right. \\tag {16}\n$$\n", + "text_format": "latex", + "bbox": [ + 522, + 901, + 919, + 941 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 911, + 30, + 919, + 40 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Algorithm 1 Iterative algorithm for rank-one solution.", + "text_level": 1, + "bbox": [ + 76, + 66, + 444, + 80 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "1: Initialize $\\tilde{\\mathbf{v}}_{\\mathrm{max}}$ and $\\tilde{\\mathbf{w}}_{\\mathrm{max}}$ . 
Set a convergence accuracy $\\epsilon_{1}$.", + "bbox": [ + 84, + 83, + 488, + 99 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "2: repeat", + "text_level": 1, + "bbox": [ + 84, + 114, + 151, + 127 + ], + "page_idx": 3 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "3: update $\\{\\tilde{\\mathbf{W}},\\tilde{\\mathbf{V}},\\varpi_i\\}$ by solving the problem (P6).", + "4: update the eigenvectors $\\{\\tilde{\\mathbf{w}}_{\\mathrm{max}},\\tilde{\\mathbf{v}}_{\\mathrm{max}}\\}$.", + "5: update $\\varrho_{i} = \\varrho_{i}\\bar{c}_{1}$ $(0 < \\bar{c}_1 < 1)$.", + "6: until $\\sum_{i=1}^{2} \\varpi_i$ falls below a threshold of $\\epsilon_1$ ." + ], + "bbox": [ + 84, + 127, + 455, + 189 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\\tilde{\\mathbf{w}}_{\\mathrm{max}}$ and $\\tilde{\\mathbf{v}}_{\\mathrm{max}}$ represent the eigenvectors corresponding to the maximum eigenvalues of $\\tilde{\\mathbf{W}}$ and $\\tilde{\\mathbf{V}}$ , respectively. As a result, the problem (P5) can be transformed into", + "bbox": [ + 73, + 213, + 491, + 258 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\left(\\mathrm {P} 6\\right) \\max _ {\\tilde {\\mathbf {W}}, \\tilde {\\mathbf {V}}, \\varpi_ {i}} \\operatorname {T r} \\left(\\boldsymbol {\\eta} \\boldsymbol {\\eta} ^ {H} \\tilde {\\mathbf {V}}\\right) - \\frac {1}{2 \\varrho} \\chi_ {2} \\left(\\mathbf {x} ^ {\\mathrm {p}}, \\tilde {\\mathbf {W}}, \\tilde {\\mathbf {V}}\\right) - \\sum_ {i = 1} ^ {2} \\frac {1}{2 \\varrho_ {i}} \\varpi_ {i} \\tag {17a}\n$$\n", + "text_format": "latex", + "bbox": [ + 78, + 265, + 488, + 318 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\ns. t. 
\\quad \\varpi_ {i} \\geq 0, i \\in \\{1, 2 \\}, \\tag {17b}\n$$\n", + "text_format": "latex", + "bbox": [ + 147, + 321, + 488, + 339 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n(1 4 b) - (1 4 f), (1 6), \\tag {17c}\n$$\n", + "text_format": "latex", + "bbox": [ + 183, + 342, + 488, + 357 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\\varrho_{i}$ denotes the scaling factor of $\\varpi_{i}$ . The problem (P6) is a convex problem and can be directly solved. Thus, the rank-one solution $\\{\\tilde{\\mathbf{W}},\\tilde{\\mathbf{V}}\\}$ can be obtained by carrying out Algorithm 1.", + "bbox": [ + 73, + 366, + 491, + 425 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "2) Inner layer iteration—subproblem with respect to $\\{\\mathbf{x}^p\\}$ : Note that the equality constraints $\\tilde{\\mathbf{W}} = \\mathbf{W}(\\mathbf{x}^{\\mathrm{p}})$ and $\\tilde{\\mathbf{V}} = \\mathbf{V}(\\mathbf{x}^{\\mathrm{p}})$ are equivalent to $\\tilde{\\mathbf{w}} = \\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})$ and $\\tilde{\\mathbf{v}} = \\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})$ . As a result, the problem (P6) can be transformed into", + "bbox": [ + 73, + 426, + 491, + 486 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\left(\\mathrm {P} 7\\right) \\min _ {\\mathbf {x} ^ {\\mathrm {p}}} \\| \\tilde {\\mathbf {w}} - \\mathbf {w} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) \\| + \\| \\tilde {\\mathbf {v}} - \\mathbf {v} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) \\| \\tag {18a}\n$$\n", + "text_format": "latex", + "bbox": [ + 122, + 494, + 488, + 516 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} \\text {s . t .} \\quad (1 1 c). 
\\end{array} \\tag {18b}\n$$\n", + "text_format": "latex", + "bbox": [ + 171, + 518, + 488, + 532 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "It is easy to notice that $x_{n}^{\\mathrm{p}}$ and $x_{m}^{\\mathrm{p}}$ ( $n \\neq m$ ) are separated in the objective function but coupled in the constraint (11c), which motivates us to adopt the element-wise optimization framework. Therefore, with the fixed $\\{x_{1}^{\\mathrm{p}}, \\dots, x_{n-1}^{\\mathrm{p}}, x_{n+1}^{\\mathrm{p}}, \\dots, x_{N}^{\\mathrm{p}}\\}$ , the subproblem with respect to $x_{n}^{\\mathrm{p}}$ is given by", + "bbox": [ + 73, + 541, + 491, + 633 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} \\left(\\mathrm {P} 8\\right) \\min _ {x _ {n} ^ {\\mathrm {p}}} \\left| \\tilde {\\mathbf {w}} _ {[ n ]} - \\frac {e ^ {- \\mathcal {I} \\left(\\frac {2 \\pi}{\\lambda} r _ {n} ^ {\\mathrm {c}} \\left(r _ {\\mathrm {c}} , \\varphi_ {\\mathrm {c}}\\right) + \\theta_ {n}\\right)}}{\\sqrt {N} r _ {n} ^ {\\mathrm {c}} \\left(r _ {\\mathrm {c}}, \\varphi_ {\\mathrm {c}}\\right)} \\right| \\\\ + \\left| \\tilde {\\mathbf {v}} _ {[ n ]} - \\frac {e ^ {- \\mathcal {I} \\left(\\frac {2 \\pi}{\\lambda} r _ {n} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}} , \\varphi_ {\\mathrm {s}}\\right) + \\theta_ {n}\\right)}}{\\sqrt {N} r _ {n} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}}, \\varphi_ {\\mathrm {s}}\\right)} \\right| \\tag {19a} \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 112, + 638, + 488, + 720 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\ns. t. 
\\quad x _ {n - 1} ^ {\\mathrm {p}} + \\Delta x \\leq x _ {n} ^ {\\mathrm {p}} \\leq x _ {n + 1} ^ {\\mathrm {p}} - \\Delta x, \\tag {19b}\n$$\n", + "text_format": "latex", + "bbox": [ + 158, + 722, + 488, + 739 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\frac {- L}{2} \\leq x _ {n} ^ {\\mathrm {p}} \\leq \\frac {L}{2}. \\tag {19c}\n$$\n", + "text_format": "latex", + "bbox": [ + 196, + 739, + 488, + 768 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Then, the optimal $x_{n}^{\\mathrm{p}}$ can be obtained by the low-complexity one-dimensional search.", + "bbox": [ + 73, + 775, + 488, + 804 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3) Outer layer iteration: In the outer layer, we initialize a large $\\varrho$ and update $\\varrho$ at each outer iteration by", + "bbox": [ + 73, + 806, + 491, + 837 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\varrho = \\varrho \\bar {c} _ {2}, \\tag {20}\n$$\n", + "text_format": "latex", + "bbox": [ + 251, + 845, + 488, + 861 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $0 < \\bar{c}_2 < 1$ is the iteration coefficient of the penalty terms. 
", + "bbox": [ + 73, + 869, + 488, + 912 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The proposed penalty-based AO algorithm is summarized in Algorithm 2, which is assured to converge at least to a", + "bbox": [ + 73, + 914, + 491, + 945 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Algorithm 2 Penalty-based AO algorithm.", + "text_level": 1, + "bbox": [ + 506, + 66, + 795, + 80 + ], + "page_idx": 3 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1: Parameter Initialization. Set the convergence accuracy $\\epsilon_{2}$ and $\\epsilon_{3}$.", + "2: repeat", + "3: repeat", + "4: update $\\{\\tilde{\\mathbf{w}},\\tilde{\\mathbf{v}}\\}$ by carrying out Algorithm 1.", + "5: update $\\mathbf{x}^{\\mathrm{p}}$ via the element-wise optimization.", + "6: until the objective value converges with an accuracy of $\\epsilon_{2}$ .", + "7: update $\\varrho = \\varrho \\bar{c}_2$ $(0 < \\bar{c}_2 < 1)$.", + "8: until $\\| \\tilde{\\mathbf{W}} -\\mathbf{W}(\\mathbf{x}^{\\mathrm{p}})\\|_{F} + \\| \\tilde{\\mathbf{V}} -\\mathbf{V}(\\mathbf{x}^{\\mathrm{p}})\\|_{F}\\leq \\epsilon_{3}$." + ], + "bbox": [ + 516, + 83, + 919, + 234 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/61356ba48d70041d60a1887554e812cf3b19b73e0c0702746c81209c9ff71552.jpg", + "image_caption": [ + "Fig. 2. The illumination power versus the transmit power at the BS." + ], + "image_footnote": [], + "bbox": [ + 568, + 267, + 844, + 439 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "stationary point solution. The computational complexity of Algorithm 2 mainly depends on solving the SDP problem (P6) and the one-dimensional exhaustive search. 
It is given by $\\mathcal{O}\\Big(\\log (\\frac{1}{\\epsilon_3})\\log (\\frac{1}{\\epsilon_2})\\big[\\log (\\frac{1}{\\epsilon_1})N^{3.5} + N\\bar{Q}\\big]\\Big)$ [10], where $\\bar{Q}$ represents the number of quantization bits during the one-dimensional exhaustive search.", + "bbox": [ + 503, + 479, + 921, + 573 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "IV. NUMERICAL RESULTS", + "text_level": 1, + "bbox": [ + 617, + 599, + 805, + 612 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "This section evaluates the performance of the proposed PASS-ISAC framework. A 3D topological network setup is considered, where the dielectric waveguide is located in the x-o-z plane with a height of $d$ and a length of $50\\mathrm{m}$ . The communication user and the sensing target are located in a square region centered at the origin in the x-o-y plane. Unless otherwise specified, the default simulation parameters are set as: $\\sigma^2 = -105$ dBm, $f = 28$ GHz, $d = 10\\mathrm{m}$ , $r_{\\mathrm{s}} = 30\\mathrm{m}$ , $\\varphi_{\\mathrm{s}} = \\frac{\\pi}{3}$ , $r_{\\mathrm{c}} = 15\\sqrt{2}\\mathrm{m}$ , $\\varphi_{\\mathrm{c}} = \\frac{5\\pi}{4}$ , $N = 16$ , $\\eta_{\\mathrm{eff}} = 1.4$ , $R_{\\mathrm{QoS}} = 10$ bps/Hz, $\\epsilon_1 = \\epsilon_2 = \\epsilon_3 = \\epsilon_4 = 10^{-3}$ , and $\\alpha_{\\mathrm{s}} = 1$ . The other network parameters are shown in the captions of the figures.", + "bbox": [ + 503, + 622, + 921, + 801 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To validate the performance of the proposed scheme, the following baseline schemes are considered in this paper:", + "bbox": [ + 503, + 804, + 919, + 835 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "- Conventional antenna: In this scheme, we deploy a conventional uniform linear array (ULA) of $N$ antennas at the BS as the transmitting antennas with an antenna spacing of $\\frac{\\lambda}{2}$ . 
For a fair comparison, the transmitting antennas are connected to one RF chain and each antenna is associated with an analog phase shifter, which can be varied from 0 to $2\\pi$ .", + "bbox": [ + 519, + 839, + 921, + 943 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 911, + 30, + 919, + 40 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/4eb38f1e6d1bd663e4b738b0db05d681352bb5024fd01a86624fecd3fbd0f774.jpg", + "image_caption": [ + "Fig. 3. The illumination power versus the number of pinching antennas, where $P_{\\mathrm{T}} = 70$ dBm." + ], + "image_footnote": [], + "bbox": [ + 138, + 79, + 413, + 250 + ], + "page_idx": 4 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Fixed pinching antenna: In this scheme, $N$ pinching antennas are uniformly spread along the dielectric waveguide, where the in-waveguide and free-space channels are determined by the fixed positions of the pinching antennas.", + "- Semi-continuous activation: In the semi-continuous activation scheme, we assume there are $N$ pinching antennas uniformly distributed along the dielectric waveguide, which are predetermined and cannot be changed. However, the pinching antennas are allowed to be adjusted in a small-scale range to alter the phase-shift response of the pinching beamforming, which has a negligible impact on the large-scale path loss." + ], + "bbox": [ + 91, + 292, + 491, + 488 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In Fig. 2, we can observe that the pinching antenna achieves the highest illumination power compared to the other baseline schemes. This result can be expected because, compared with the baseline schemes, pinching antennas can be flexibly repositioned to reduce the large-scale path loss between the pinching antennas and the receiving ends. Thus, more spatial degrees-of-freedom (DoFs) are provided to favor the communication and sensing performance. 
On the other hand, although the semi-continuous activation scheme cannot reduce the path loss by adjusting the antenna position over a wide range, it exhibits superior performance to the conventional antenna scheme because pinching antennas are spread over the entire communication/sensing area and are, on average, closer to the receiving ends.", + "bbox": [ + 73, + 491, + 490, + 702 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Fig. 3 depicts the relationship between the illumination power and the number of activated pinching antennas, with a comparison of the proportional power allocation model. For a fair comparison, $\\alpha_{\\mathrm{s}} = 0.9$ is set for both power allocation models. As can be observed, the illumination power increases as the number of pinching antennas increases, which is because an increasing number of pinching antennas can improve the beam resolution and reduce the power leakage in irrelevant regions, thereby raising the illumination power at the target. It is also observed that the proportional power allocation is only slightly inferior to the equal power allocation model, which verifies that pinching antennas remain effective in reconfiguring signal propagation under the proportional power allocation model.", + "bbox": [ + 73, + 703, + 491, + 912 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Fig. 4 investigates the impact of the rotation angle of the dielectric waveguide on the illumination power at the target.", + "bbox": [ + 73, + 914, + 491, + 946 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/2b04b8b5996b5c98797a156b763cfc8aee51bdb00db00b59b8d2b415c188a3e0.jpg", + "image_caption": [ + "Fig. 4. The illumination power versus the rotation angle of the dielectric waveguide, where $P_{\\mathrm{T}} = 70$ dBm." 
+ ], + "image_footnote": [], + "bbox": [ + 568, + 79, + 846, + 251 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Here, we assume the dielectric waveguide can be rotated in a clockwise direction parallel to the x-o-y plane, where the rotation angle is defined as the angle between the dielectric waveguide and the x-axis. From Fig. 4, it is shown that the illumination power first increases and then decreases as the rotation angle grows. This is due to the fact that when the rotation angle is $60^{\\circ}$ , the target is located underneath the dielectric waveguide, and it receives the maximal illumination power. As the rotation angle further rises, the distance between the target and the pinching antenna becomes larger, so the illumination power gradually decreases. In addition, raising the height of the dielectric waveguide increases the average distance from the pinching antennas to the user and target; thus, the illumination power decreases as $d$ increases.", + "bbox": [ + 501, + 292, + 921, + 503 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "V. CONCLUSION", + "text_level": 1, + "bbox": [ + 651, + 525, + 772, + 539 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "A novel PASS-ISAC framework has been proposed, where the pinching beamforming was exploited to realize the simultaneous C&S transmission. A separated ISAC design was proposed for the two-waveguide PASS. A penalty-based AO algorithm was proposed to maximize the illumination power at the target while guaranteeing the QoS requirement of the communication user. Simulation results were provided to verify the superiority of the proposed PASS-ISAC framework over the other baseline schemes.", + "bbox": [ + 503, + 546, + 921, + 681 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "REFERENCES", + "text_level": 1, + "bbox": [ + 666, + 691, + 764, + 705 + ], + "page_idx": 4 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] L. Zhu, W. 
Ma, and R. Zhang, \"Movable antennas for wireless communication: Opportunities and challenges,\" IEEE Commun. Mag., vol. 62, no. 6, pp. 114-120, Jun. 2024.", + "[2] W. K. New, K.-K. Wong et al., \"A tutorial on fluid antenna system for 6G networks: Encompassing communication theory, optimization methods and hardware designs,\" IEEE Commun. Surv. Tut., pp. 1-1, 2024.", + "[3] A. Fukuda, H. Yamamoto, H. Okazaki, Y. Suzuki, and K. Kawai, \"Pinching antenna: Using a dielectric waveguide as an antenna,\" NTT DOCOMO Technical J., vol. 23, no. 3, pp. 5-12, Jan. 2022.", + "[4] Z. Ding, R. Schober, and H. V. Poor, \"Flexible-antenna systems: A pinching-antenna perspective,\" IEEE Trans. Commun., pp. 1-1, 2025.", + "[5] F. Liu, Y. Cui et al., \"Integrated sensing and communications: Toward dual-functional wireless networks for 6G and beyond,\" IEEE J. Sel. Areas Commun., vol. 40, no. 6, pp. 1728-1767, Jun. 2022.", + "[6] Y. Liu, Z. Wang, X. Mu, C. Ouyang, X. Xu, and Z. Ding, \"Pinching antenna systems (PASS): Architecture designs, opportunities, and outlook,\" arXiv preprint arXiv:2501.18409, 2025.", + "[7] Z. Wang, C. Ouyang, X. Mu, Y. Liu, and Z. Ding, \"Modeling and beamforming optimization for pinching-antenna systems,\" arXiv preprint arXiv:2502.05917, 2025." + ], + "bbox": [ + 513, + 715, + 921, + 943 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 911, + 30, + 919, + 40 + ], + "page_idx": 4 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[8] W. Hao, H. Shi et al., \"Joint beamforming design for active RIS-aided THz ISAC systems with delay alignment modulation,\" IEEE Wireless Commun. Lett., vol. 12, no. 10, pp. 1816-1820, Oct. 2023.", + "[9] T. Jiang and Y. Shi, \"Over-the-air computation via intelligent reflecting surfaces,\" in Proc. IEEE Global Commun. Conf. (GLOBECOM), Waikoloa, HI, USA, Dec. 2019, pp. 1-6.", + "[10] Z.-Q. Luo, W.-K. Ma, A. M.-C. So, Y. Ye, and S. 
Zhang, \"Semidefinite relaxation of quadratic optimization problems,\" IEEE Signal Process. Mag., vol. 27, no. 3, pp. 20-34, May 2010." + ], + "bbox": [ + 78, + 71, + 490, + 174 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 911, + 31, + 919, + 39 + ], + "page_idx": 5 + } +] \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07709/a6a116d9-c584-4299-91c7-a46bfdb58f50_model.json b/data/2025/2504_07xxx/2504.07709/a6a116d9-c584-4299-91c7-a46bfdb58f50_model.json new file mode 100644 index 0000000000000000000000000000000000000000..71f182775286376e4e3bbaf0a12cf18ed947bcf3 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07709/a6a116d9-c584-4299-91c7-a46bfdb58f50_model.json @@ -0,0 +1,1708 @@ +[ + [ + { + "type": "page_number", + "bbox": [ + 0.912, + 0.031, + 0.921, + 0.041 + ], + "angle": 0, + "content": "1" + }, + { + "type": "title", + "bbox": [ + 0.142, + 0.07, + 0.856, + 0.12 + ], + "angle": 0, + "content": "Integrated Sensing and Communications for Pinching-Antenna Systems (PASS)" + }, + { + "type": "text", + "bbox": [ + 0.173, + 0.128, + 0.798, + 0.145 + ], + "angle": 0, + "content": "Zheng Zhang, Zhaolin Wang, Xidong Mu, Bingtao He, Jian Chen, and Yuanwei Liu" + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.165, + 0.493, + 0.33 + ], + "angle": 0, + "content": "Abstract—An integrated sensing and communication (ISAC) design for pinching antenna systems (PASS) is proposed, where the pinching antennas are deployed to establish reliable line-of-sight communication and sensing links. More particularly, a separated ISAC design is proposed for the two-waveguide PASS, where one waveguide is used to emit the information-bearing signals for ISAC transmission while the other waveguide is used to receive the reflected echo signals. Based on this framework, a penalty-based alternating optimization algorithm is proposed to maximize the illumination power as well as ensure the communication quality-of-service requirement. 
Numerical results demonstrate that the proposed PASS-ISAC scheme outperforms the conventional antenna scheme." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.336, + 0.492, + 0.363 + ], + "angle": 0, + "content": "Index Terms—Beamforming design, integrated sensing and communication, pinching antenna systems." + }, + { + "type": "title", + "bbox": [ + 0.217, + 0.389, + 0.352, + 0.403 + ], + "angle": 0, + "content": "I. INTRODUCTION" + }, + { + "type": "text", + "bbox": [ + 0.074, + 0.411, + 0.493, + 0.759 + ], + "angle": 0, + "content": "Fuelled by the burgeoning demands for massive data transmission and pervasive network coverage, flexible antennas have emerged as a promising technique for sixth-generation (6G) cellular systems. Benefiting from their ability to reconfigure the wireless channel, flexible antennas can significantly enhance the throughput of wireless networks. However, traditional flexible antennas (e.g., movable antennas [1] and fluid antennas [2]) merely permit the adjustment of the antenna position within a range whose order of magnitude is comparable to the carrier wavelength. Against this backdrop, the pinching antenna has emerged [3], which is a type of dielectric waveguide-based leaky wave antenna. By applying dielectric particles to a particular point on the dielectric waveguide, a pinching antenna can be activated to establish electromagnetic (EM) radiation fields and form a communication area [4]. Then, the EM signal inside the dielectric waveguide will be radiated from the pinching antenna to free space with a defined phase shift adjustment (referred to as the pinching beamformer). Notably, as the dielectric waveguide can be pinched at any position to radiate radio waves, the pinching antenna can flexibly move along the dielectric waveguide over a length of dozens of meters, thereby relocating to the closest position to the receiver and creating reliable line-of-sight (LoS) links." 
+ }, + { + "type": "text", + "bbox": [ + 0.074, + 0.759, + 0.492, + 0.805 + ], + "angle": 0, + "content": "To enable emerging applications, such as autonomous driving, extended reality, and the Metaverse, sensing functionality is recognized as an important indicator of future networks." + }, + { + "type": "text", + "bbox": [ + 0.074, + 0.818, + 0.493, + 0.864 + ], + "angle": 0, + "content": "Zheng Zhang, Bingtao He, and Jian Chen are with the School of Telecommunications Engineering, Xidian University, Xi'an 710071, China (e-mail: zhang_688@stu.xidian.edu.cn; bthe@xidian.edu.cn; jianchen@mail.xidian.edu.cn)." + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.864, + 0.493, + 0.898 + ], + "angle": 0, + "content": "Zhaolin Wang is with the School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, U.K. (e-mail: zhaolin.wang@qmul.ac.uk)." + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.898, + 0.493, + 0.921 + ], + "angle": 0, + "content": "Xidong Mu is with Queen's University Belfast, Belfast, BT3 9DT, U.K. (email: x.mu@qub.ac.uk)" + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.921, + 0.493, + 0.946 + ], + "angle": 0, + "content": "Yuanwei Liu is with the Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong (e-mail: yuanwei@hku.hk)." + }, + { + "type": "list", + "bbox": [ + 0.074, + 0.818, + 0.493, + 0.946 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.516, + 0.165, + 0.915, + 0.32 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.505, + 0.329, + 0.759, + 0.343 + ], + "angle": 0, + "content": "Fig. 1. The separated ISAC design for PASS." 
+ }, + { + "type": "text", + "bbox": [ + 0.503, + 0.359, + 0.922, + 0.647 + ], + "angle": 0, + "content": "In pursuit of this vision, the integrated sensing and communication (ISAC) technology has drawn significant attention recently [5], which aims to leverage the cellular network hardware platforms and dedicated signal processing algorithms to achieve the incorporation of communication and sensing functionalities. More recently, it has been claimed that conducting ISAC transmission in the pinching antenna systems (PASS) can further enhance the communication and sensing (C&S) performance of the network [6]. On the one hand, the pinching antenna can be flexibly repositioned to augment the echo signal energy. On the other hand, the wide-range mobility characteristic of pinching antennas results in an antenna aperture spanning dozens of meters. It inherently enables nearfield sensing, e.g., the possibility of simultaneous angular and distance information estimation and even target velocity sensing, thereby offering a more comprehensive and accurate sensing of the surrounding environment. Nevertheless, research on PASS-ISAC remains absent to date." + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.647, + 0.922, + 0.859 + ], + "angle": 0, + "content": "Motivated by the above, this paper proposes a separated ISAC design for PASS. To elaborate, the base station (BS) is connected with two dielectric waveguides, where one waveguide is used to transmit the downlink signals, while the other is employed to collect the reflected echo signals from the target. We aim to maximize the illumination power at the target while satisfying the quality-of-service (QoS) requirement of the communication user by optimizing the pinching beamforming offered by the mobility of pinching antennas. 
A penalty-based alternating optimization (AO) algorithm is proposed to handle the non-convex optimization problem, where the positions of pinching antennas are updated in an element-wise manner. Numerical results verify the superiority of the proposed scheme over the baseline schemes." + }, + { + "type": "title", + "bbox": [ + 0.532, + 0.879, + 0.895, + 0.893 + ], + "angle": 0, + "content": "II. SYSTEM MODEL AND PROBLEM FORMULATION" + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.899, + 0.922, + 0.947 + ], + "angle": 0, + "content": "As shown in Fig. 1, we consider a PASS-ISAC system, where a dual-function BS communicates with a single-antenna communication user while sensing a point-like target. The" + }, + { + "type": "aside_text", + "bbox": [ + 0.023, + 0.241, + 0.059, + 0.681 + ], + "angle": 270, + "content": "arXiv:2504.07709v3 [cs.IT] 12 May 2025" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.912, + 0.031, + 0.921, + 0.041 + ], + "angle": 0, + "content": "2" + }, + { + "type": "text", + "bbox": [ + 0.074, + 0.069, + 0.493, + 0.205 + ], + "angle": 0, + "content": "BS is connected with two dielectric waveguides of length \\( L \\), each of which consists of \\( N \\) pinching antennas. To achieve the simultaneous C&S transmission, a separated ISAC design is proposed. Specifically, the downlink information-bearing signals are emitted from one waveguide (referred to as transmitting antennas). Then, the reflected echoes from the target are collected at the other waveguide (referred to as receiving antennas), which are further transmitted to the BS for parameter estimation." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.206, + 0.495, + 0.434 + ], + "angle": 0, + "content": "A three-dimensional (3D) coordinate system is considered, where two dielectric waveguides extending from the BS are assumed to be parallel to the x-axis with respect to the x-o-y plane at a height \\(d\\). 
The position of the \\(n\\)-th pinching antenna distributed along the transmitting and receiving dielectric waveguides can be denoted as \\(\\psi_{n}^{\\mathrm{p}} = (x_{n}^{\\mathrm{p}},0,d)\\) and \\(\\psi_{n}^{\\mathrm{q}} = (x_{n}^{\\mathrm{q}},y^{\\mathrm{q}},d)\\), respectively. The communication user and sensing target are located in the x-o-y plane. Let \\(r_{\\mathrm{c}}\\) and \\(\\varphi_{\\mathrm{c}}\\) denote the distance and the azimuth angle of the communication user relative to the origin of the coordinate system. Thus, the coordinates of the communication user are given by \\(\\psi^{\\mathrm{c}} = (r_{\\mathrm{c}}\\cos \\varphi_{\\mathrm{c}},r_{\\mathrm{c}}\\sin \\varphi_{\\mathrm{c}},0)\\). Similarly, the target is located at \\(\\psi^{\\mathrm{s}} = (r_{\\mathrm{s}}\\cos \\varphi_{\\mathrm{s}},r_{\\mathrm{s}}\\sin \\varphi_{\\mathrm{s}},0)\\). Furthermore, we assume the target is a static node or moves at a low speed. Thus, the Doppler effect is neglected in this work." + }, + { + "type": "title", + "bbox": [ + 0.075, + 0.451, + 0.208, + 0.464 + ], + "angle": 0, + "content": "A. Channel Model" + }, + { + "type": "text", + "bbox": [ + 0.074, + 0.47, + 0.493, + 0.606 + ], + "angle": 0, + "content": "In the considered network, the pinching antennas are non-uniformly disposed on the dielectric waveguide covering the entire range of the user's activity, which implies that the aperture of the pinching antennas may have the same order of magnitude as the signal transmission distance. Without loss of accuracy, we adopt the spherical-wave-based near-field channel model, where only the LoS path is considered. 
Consequently, the distances from the \\(n\\)-th pinching antenna to the target and the communication user are given by" + }, + { + "type": "equation", + "bbox": [ + 0.077, + 0.612, + 0.49, + 0.671 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} r _ {n} ^ {\\zeta} \\left(r _ {\\zeta}, \\varphi_ {\\zeta}\\right) = \\left\\| \\psi^ {\\zeta} - \\psi_ {n} ^ {\\mathrm {p}} \\right\\| \\\\ = \\sqrt {r _ {\\zeta} ^ {2} - 2 r _ {\\zeta} \\cos \\varphi_ {\\zeta} x _ {n} ^ {\\mathrm {p}} + \\left(x _ {n} ^ {\\mathrm {p}}\\right) ^ {2} + d ^ {2}}, \\quad \\zeta \\in \\{\\mathrm {s}, \\mathrm {c} \\}, \\tag {1} \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.074, + 0.678, + 0.493, + 0.723 + ], + "angle": 0, + "content": "Thus, the free space channel vector from the transmitting antennas to the target and the communication user can be expressed as" + }, + { + "type": "equation", + "bbox": [ + 0.085, + 0.728, + 0.49, + 0.772 + ], + "angle": 0, + "content": "\\[\n\\mathbf {h} _ {\\mathrm {s}} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) = \\left[ \\frac {\\eta^ {\\frac {1}{2}} e ^ {- j \\frac {2 \\pi}{\\lambda} r _ {1} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}} , \\varphi_ {\\mathrm {s}}\\right)}}{r _ {1} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}}, \\varphi_ {\\mathrm {s}}\\right)}, \\dots , \\frac {\\eta^ {\\frac {1}{2}} e ^ {- j \\frac {2 \\pi}{\\lambda} r _ {N} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}} , \\varphi_ {\\mathrm {s}}\\right)}}{r _ {N} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}}, \\varphi_ {\\mathrm {s}}\\right)} \\right] ^ {H}, \\tag {2}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.086, + 0.785, + 0.49, + 0.83 + ], + "angle": 0, + "content": "\\[\n\\mathbf {h} _ {\\mathrm {c}} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) = \\left[ \\frac {\\eta^ {\\frac {1}{2}} e ^ {- j \\frac {2 \\pi}{\\lambda} r _ {1} ^ {\\mathrm {c}} (r _ {\\mathrm {c}} , \\varphi_ {\\mathrm {c}})}}{r _ {1} ^ {\\mathrm {c}} (r _ {\\mathrm {c}} , \\varphi_ {\\mathrm {c}})}, \\dots , \\frac {\\eta^ {\\frac {1}{2}} e ^ 
{- j \\frac {2 \\pi}{\\lambda} r _ {N} ^ {\\mathrm {c}} (r _ {\\mathrm {c}} , \\varphi_ {\\mathrm {c}})}}{r _ {N} ^ {\\mathrm {c}} (r _ {\\mathrm {c}} , \\varphi_ {\\mathrm {c}})} \\right] ^ {H}, \\tag {3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.835, + 0.493, + 0.898 + ], + "angle": 0, + "content": "where \\(\\mathbf{x}^{\\mathrm{p}} = [x_1^{\\mathrm{p}},\\dots ,x_N^{\\mathrm{p}}]\\) denotes the coordinates of pinching antennas, \\(\\lambda = \\frac{c}{f_{\\mathrm{c}}}\\) denotes the wavelength, \\(f_{\\mathrm{c}}\\) is the frequency of the carrier wave, \\(\\eta = \\frac{c^2}{16\\pi^2f_{\\mathrm{c}}^2}\\), and \\(c\\) denotes the speed of light." + }, + { + "type": "text", + "bbox": [ + 0.074, + 0.899, + 0.492, + 0.947 + ], + "angle": 0, + "content": "In this paper, the BS aims to utilize the communication signal to achieve simultaneous communication and target sensing. Considering a coherent time block of length \\( T \\), the" + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.069, + 0.923, + 0.19 + ], + "angle": 0, + "content": "communication channel condition and the sensing parameters are assumed to remain unchanged during one coherent time block. Thus, the emitted signal at the \\(t\\)-th time slot is given by \\(s(t) \\in \\mathbb{C}\\), which is assumed to be normalized and independently distributed, i.e., \\(\\mathbb{E}\\{|s(t)|^2\\} = 1\\) and \\(\\mathbb{E}\\{s(t)s^*(\\bar{t})\\} = 0\\) for \\(t \\neq \\bar{t}\\). 
On receiving \\(s(t)\\), the dielectric waveguide radiates the signal \\(\\mathbf{x}(t) = \\sqrt{P_{\\mathrm{T}}} \\mathbf{g}(\\mathbf{x}^{\\mathrm{p}}) s(t)\\), where \\(\\mathbf{g}(\\mathbf{x}^{\\mathrm{p}})\\) denotes the in-waveguide channel and can be expressed as" + }, + { + "type": "equation", + "bbox": [ + 0.572, + 0.199, + 0.922, + 0.221 + ], + "angle": 0, + "content": "\\[\n\\mathbf {g} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) = \\left[ \\sqrt {\\alpha_ {1}} e ^ {- \\jmath \\theta_ {1}}, \\dots , \\sqrt {\\alpha_ {N}} e ^ {- \\jmath \\theta_ {N}} \\right] ^ {T}, \\tag {4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.23, + 0.923, + 0.424 + ], + "angle": 0, + "content": "where \\(\\theta_{n}\\) denotes the radiation phase shift at the \\(n\\)-th pinching antenna, and \\(P_{\\mathrm{T}}\\) denotes the transmit power at the BS. \\(\\alpha_{n}\\) denotes the power allocation coefficient of the \\(n\\)-th pinching antenna, which can be modeled by the equal power allocation model \\(\\sqrt{\\alpha_n} = \\sqrt{\\frac{\\alpha_s}{N}}\\) [4] or the proportional power allocation model \\(\\sqrt{\\alpha_n} = \\delta (\\sqrt{1 - \\delta^2})^{n - 1}\\) [7]. \\(\\delta = \\sqrt{1 - (1 - \\alpha_s)^{\\frac{1}{N}}}\\) represents the proportional coefficient, and \\(\\alpha_{s} = \\sum_{n = 1}^{N}\\alpha_{n}\\) denotes the radiation coefficient of pinching antennas. For ease of implementation, the equal power allocation model is considered in this paper. \\(\\theta_{n}\\) is defined as \\(2\\pi \\eta_{\\mathrm{eff}}\\frac{\\|\\psi_0^{\\mathrm{p}} - \\psi_n^{\\mathrm{p}}\\|}{\\lambda}\\), where \\(\\psi_0^{\\mathrm{p}}\\) denotes the location of the feed point, and \\(\\eta_{\\mathrm{eff}}\\) denotes the effective refractive index of the dielectric waveguide." + }, + { + "type": "title", + "bbox": [ + 0.505, + 0.448, + 0.624, + 0.463 + ], + "angle": 0, + "content": "B. 
Signal Model" + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.469, + 0.923, + 0.652 + ], + "angle": 0, + "content": "With the above channel model, it is readily observed that the positions of pinching antennas have a significant impact on both the free space channel \\(\\{\\mathbf{h}_{\\mathrm{s}}(\\mathbf{x}^{\\mathrm{p}}), \\mathbf{h}_{\\mathrm{c}}(\\mathbf{x}^{\\mathrm{p}})\\}\\) and the in-waveguide channel \\(\\mathbf{g}(\\mathbf{x}^{\\mathrm{p}})\\). As a result, it becomes possible to establish favorable wireless propagation while manipulating the radiated characteristics of signals by altering the positions of pinching antennas in the PASS. To characterize these two aspects of the signal reconfiguration capabilities of pinching antennas, we refer to them collectively as pinching beamforming in this paper. Let \\(\\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})\\) and \\(\\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})\\) denote the pinching beamforming for the communication user and the sensing target, respectively, which are also functions of \\(\\mathbf{x}^{\\mathrm{p}}\\). 
\\(\\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})\\) and \\(\\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})\\) are given by" + }, + { + "type": "equation", + "bbox": [ + 0.52, + 0.659, + 0.921, + 0.705 + ], + "angle": 0, + "content": "\\[\n\\mathbf {w} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) = \\left[ \\frac {e ^ {- j \\left(\\frac {2 \\pi}{\\lambda} r _ {1} ^ {\\mathrm {c}} \\left(r _ {\\mathrm {c}} , \\varphi_ {\\mathrm {c}}\\right) + \\theta_ {1}\\right)}}{\\frac {1}{\\sqrt {\\alpha_ {1}}} r _ {1} ^ {\\mathrm {c}} \\left(r _ {\\mathrm {c}} , \\varphi_ {\\mathrm {c}}\\right)}, \\dots , \\frac {e ^ {- j \\left(\\frac {2 \\pi}{\\lambda} r _ {N} ^ {\\mathrm {c}} \\left(r _ {\\mathrm {c}} , \\varphi_ {\\mathrm {c}}\\right) + \\theta_ {N}\\right)}}{\\frac {1}{\\sqrt {\\alpha_ {N}}} r _ {N} ^ {\\mathrm {c}} \\left(r _ {\\mathrm {c}} , \\varphi_ {\\mathrm {c}}\\right)} \\right] ^ {T}, \\tag {5}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.523, + 0.723, + 0.921, + 0.768 + ], + "angle": 0, + "content": "\\[\n\\mathbf {v} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) = \\left[ \\frac {e ^ {- \\jmath \\left(\\frac {2 \\pi}{\\lambda} r _ {1} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}} , \\varphi_ {\\mathrm {s}}\\right) + \\theta_ {1}\\right)}}{\\frac {1}{\\sqrt {\\alpha_ {1}}} r _ {1} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}} , \\varphi_ {\\mathrm {s}}\\right)}, \\dots , \\frac {e ^ {- \\jmath \\left(\\frac {2 \\pi}{\\lambda} r _ {N} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}} , \\varphi_ {\\mathrm {s}}\\right) + \\theta_ {N}\\right)}}{\\frac {1}{\\sqrt {\\alpha_ {N}}} r _ {N} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}} , \\varphi_ {\\mathrm {s}}\\right)} \\right] ^ {T}. \\tag {6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.775, + 0.922, + 0.851 + ], + "angle": 0, + "content": "In this paper, we consider an ideal activation model of the pinching antenna, i.e., continuous activation. 
That is, the pinching antennas can be activated at any position along the dielectric waveguide. Thus, the positions of pinching antennas satisfy" + }, + { + "type": "equation", + "bbox": [ + 0.51, + 0.858, + 0.921, + 0.906 + ], + "angle": 0, + "content": "\\[\n\\mathbf {x} ^ {\\mathrm {p}} \\in \\mathcal {X} = \\left\\{\\left| x _ {n} ^ {\\mathrm {p}} - x _ {m} ^ {\\mathrm {p}} \\right| \\geq \\Delta x (n \\neq m), x _ {n} ^ {\\mathrm {p}} \\in \\left[ - \\frac {L}{2}, \\frac {L}{2} \\right] \\right\\}, \\tag {7}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.915, + 0.922, + 0.946 + ], + "angle": 0, + "content": "where \\(\\Delta x\\) represents the minimum antenna spacing between two adjacent pinching antennas." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.912, + 0.031, + 0.921, + 0.041 + ], + "angle": 0, + "content": "3" + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.07, + 0.493, + 0.116 + ], + "angle": 0, + "content": "1) Communication Performance Metric: With the aforementioned signal model, the received signals at the communication user are given by" + }, + { + "type": "equation", + "bbox": [ + 0.155, + 0.12, + 0.49, + 0.161 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} y (t) = \\sqrt {P _ {\\mathrm {T}}} \\mathbf {h} _ {\\mathrm {c}} ^ {H} (\\mathbf {x} ^ {\\mathrm {p}}) \\mathbf {g} (\\mathbf {x} ^ {\\mathrm {p}}) s (t) + n (t) \\\\ = \\sqrt {P _ {\\mathrm {T}}} \\boldsymbol {\\eta} ^ {H} \\mathbf {w} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) s (t) + n (t), \\tag {8} \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.167, + 0.492, + 0.23 + ], + "angle": 0, + "content": "where \\(\\pmb {\\eta} = [\\eta^{\\frac{1}{2}},\\dots ,\\eta^{\\frac{1}{2}}]^{T}\\in \\mathbb{C}^{N\\times 1}\\) is a constant vector, and \\(n(t)\\sim \\mathcal{CN}(0,\\sigma^2)\\) denotes the additive white Gaussian noise (AWGN) at the communication user. 
Hence, the achievable rate of the communication user is given by" + }, + { + "type": "equation", + "bbox": [ + 0.166, + 0.234, + 0.492, + 0.269 + ], + "angle": 0, + "content": "\\[\nR = \\log_ {2} \\left(1 + \\frac {P _ {\\mathrm {T}} \\left| \\boldsymbol {\\eta} ^ {H} \\mathbf {w} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) \\right| ^ {2}}{\\sigma^ {2}}\\right). \\tag {9}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.273, + 0.492, + 0.349 + ], + "angle": 0, + "content": "2) Sensing Performance Metric: For target sensing, we adopt the illumination power as the performance metric, which characterizes the received sensing signal power at the target [8]. Thus, the illumination power with respect to azimuth angle \\(\\varphi_{\\mathrm{s}}\\) and distance \\(r_{\\mathrm{s}}\\) is given by" + }, + { + "type": "equation", + "bbox": [ + 0.158, + 0.354, + 0.49, + 0.407 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} P _ {\\mathrm {s}} = \\mathbb {E} \\left\\{\\left| \\sqrt {P _ {\\mathrm {T}}} \\mathbf {h} _ {\\mathrm {s}} ^ {H} (\\mathbf {x} ^ {\\mathrm {p}}) \\mathbf {g} (\\mathbf {x} ^ {\\mathrm {p}}) s (t) \\right| ^ {2} \\right\\} \\\\ = P _ {\\mathrm {T}} \\boldsymbol {\\eta} ^ {H} \\mathbf {v} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) \\mathbf {v} ^ {H} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) \\boldsymbol {\\eta}. \\tag {10} \\\\ \\end{array}\n\\]" + }, + { + "type": "title", + "bbox": [ + 0.076, + 0.424, + 0.247, + 0.437 + ], + "angle": 0, + "content": "C. 
Problem Formulation" + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.443, + 0.492, + 0.503 + ], + "angle": 0, + "content": "In this paper, we aim to maximize the illumination power \\( P_{\\mathrm{s}} \\) by designing the pinching beamformer, under the transmit power budget and communication QoS requirement, which is formulated as" + }, + { + "type": "equation", + "bbox": [ + 0.223, + 0.51, + 0.49, + 0.531 + ], + "angle": 0, + "content": "\\[\n\\left(\\mathrm {P} 1\\right) \\quad \\max _ {\\mathbf {x} ^ {\\mathrm {p}}} P _ {\\mathrm {s}} \\tag {11a}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.234, + 0.534, + 0.49, + 0.55 + ], + "angle": 0, + "content": "\\[\n\\text {s . t .} \\quad R \\geq R _ {\\mathrm {Q o S}}, \\tag {11b}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.268, + 0.552, + 0.49, + 0.568 + ], + "angle": 0, + "content": "\\[\n\\mathbf {x} ^ {\\mathrm {p}} \\in \\mathcal {X}, \\tag {11c}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.577, + 0.492, + 0.623 + ], + "angle": 0, + "content": "where \\( R_{\\mathrm{QoS}} \\) denotes the QoS requirement of the communication user. The problem (P1) is challenging to solve due to the quadratic objective function and the coupled variables." + }, + { + "type": "title", + "bbox": [ + 0.12, + 0.638, + 0.449, + 0.652 + ], + "angle": 0, + "content": "III. PINCHING BEAMFORMING OPTIMIZATION" + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.657, + 0.492, + 0.731 + ], + "angle": 0, + "content": "In this section, we focus on the C&S transmission design by optimizing the pinching beamforming. To deal with the coupled optimization variables, a penalty-based AO algorithm is proposed, where \\(\\{\\mathbf{x}^{\\mathrm{p}}\\}\\) is optimized in an element-wise manner." 
+ }, + { + "type": "text", + "bbox": [ + 0.075, + 0.732, + 0.492, + 0.762 + ], + "angle": 0, + "content": "To facilitate the optimization, we can rewrite the problem (P1) as" + }, + { + "type": "equation", + "bbox": [ + 0.161, + 0.768, + 0.49, + 0.791 + ], + "angle": 0, + "content": "\\[\n\\left(\\mathrm {P} 2\\right) \\max _ {\\mathbf {x} ^ {\\mathrm {p}}} | \\boldsymbol {\\eta} ^ {H} \\mathbf {v} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) | ^ {2} \\tag {12a}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.211, + 0.792, + 0.49, + 0.811 + ], + "angle": 0, + "content": "\\[\n\\text {s . t .} \\quad | \\boldsymbol {\\eta} ^ {H} \\mathbf {w} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) | ^ {2} \\geq \\gamma_ {\\mathrm {Q o S}} \\sigma^ {2}, \\tag {12b}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.247, + 0.813, + 0.49, + 0.829 + ], + "angle": 0, + "content": "\\[\n(1 1 c), \\tag {12c}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.836, + 0.232, + 0.856 + ], + "angle": 0, + "content": "where \\(\\gamma_{\\mathrm{QoS}} = \\frac{2^{R_{\\mathrm{QoS}}} - 1}{P_{\\mathrm{T}}}\\)." + }, + { + "type": "text", + "bbox": [ + 0.074, + 0.855, + 0.492, + 0.947 + ], + "angle": 0, + "content": "In order to deal with the intractable objective and constraints, we consider a penalty-based two-layer framework. To elaborate, we introduce auxiliary variables \\(\\tilde{\\mathbf{w}}\\) and \\(\\tilde{\\mathbf{v}}\\) to replace \\(\\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})\\) and \\(\\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})\\), respectively. Thus, we have the equality constraints \\(\\tilde{\\mathbf{w}} = \\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})\\) and \\(\\tilde{\\mathbf{v}} = \\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})\\). 
By relocating the equality constraints to the objective function as a" + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.07, + 0.921, + 0.098 + ], + "angle": 0, + "content": "penalty term, the problem (P2) can be equivalently rewritten as" + }, + { + "type": "equation", + "bbox": [ + 0.577, + 0.107, + 0.921, + 0.137 + ], + "angle": 0, + "content": "\\[\n\\left(\\mathrm {P} 3\\right) \\max _ {\\mathbf {x} ^ {\\mathrm {p}}, \\tilde {\\mathbf {w}}, \\tilde {\\mathbf {v}}} | \\boldsymbol {\\eta} ^ {H} \\tilde {\\mathbf {v}} | ^ {2} - \\frac {1}{2 \\varrho} \\chi_ {1} \\left(\\mathbf {x} ^ {\\mathrm {p}}, \\tilde {\\mathbf {w}}, \\tilde {\\mathbf {v}}\\right) \\tag {13a}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.636, + 0.138, + 0.921, + 0.157 + ], + "angle": 0, + "content": "\\[\n\\text {s . t .} \\quad | \\boldsymbol {\\eta} ^ {H} \\tilde {\\mathbf {w}} | ^ {2} \\geq \\gamma_ {\\mathrm {Q o S}} \\sigma^ {2}, \\tag {13b}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.672, + 0.158, + 0.921, + 0.191 + ], + "angle": 0, + "content": "\\[\n\\left| \\tilde {\\mathbf {w}} _ {[ n ]} \\right| ^ {2} \\leq \\frac {1}{N r _ {\\operatorname* {m i n} , \\mathrm {c}} ^ {2}}, \\tag {13c}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.672, + 0.192, + 0.921, + 0.224 + ], + "angle": 0, + "content": "\\[\n\\left| \\tilde {\\mathbf {v}} _ {[ n ]} \\right| ^ {2} \\leq \\frac {1}{N r _ {\\operatorname* {m i n} , \\mathrm {s}} ^ {2}}, \\tag {13d}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.671, + 0.228, + 0.921, + 0.243 + ], + "angle": 0, + "content": "\\[\n(1 1 \\mathrm {c}), \\tag {13e}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.254, + 0.922, + 0.406 + ], + "angle": 0, + "content": "where \\(\\chi_{1}(\\mathbf{x}^{\\mathrm{p}},\\tilde{\\mathbf{w}},\\tilde{\\mathbf{v}}) = \\| \\tilde{\\mathbf{w}} -\\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})\\| +\\| \\tilde{\\mathbf{v}} -\\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})\\|\\) and \\(\\varrho\\) 
denotes the scaling factor of the penalty terms. Note that to avoid an infinite objective value, we introduce constraints (13c) and (13d), where \\(r_{\\min ,\\mathrm{c}} = \\sqrt{(r_{\\mathrm{c}}\\sin\\varphi_{\\mathrm{c}})^2 + d^2}\\) and \\(r_{\\min ,\\mathrm{s}} = \\sqrt{(r_{\\mathrm{s}}\\sin\\varphi_{\\mathrm{s}})^2 + d^2}\\) denote the lower bounds of the distances from an arbitrary pinching antenna to the communication user and target. The problem (P3) is equivalent to the problem (P1) as constraints (13c) and (13d) can be obtained from (11c), which restricts the pinching beamforming \\(\\{\\tilde{\\mathbf{w}},\\tilde{\\mathbf{v}}\\}\\) to the feasible region." + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.407, + 0.921, + 0.438 + ], + "angle": 0, + "content": "To address the quadratic objective and constraints, we apply the semidefinite relaxation (SDR) technique to rewrite the problem (P3) as follows." + }, + { + "type": "equation", + "bbox": [ + 0.54, + 0.447, + 0.921, + 0.478 + ], + "angle": 0, + "content": "\\[\n\\left(\\mathrm {P} 4\\right) \\max _ {\\mathbf {x} ^ {\\mathrm {p}}, \\tilde {\\mathbf {W}}, \\tilde {\\mathbf {V}}} \\operatorname {T r} \\left(\\boldsymbol {\\eta} \\boldsymbol {\\eta} ^ {H} \\tilde {\\mathbf {V}}\\right) - \\frac {1}{2 \\varrho} \\chi_ {2} \\left(\\mathbf {x} ^ {\\mathrm {p}}, \\tilde {\\mathbf {W}}, \\tilde {\\mathbf {V}}\\right) \\tag {14a}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.55, + 0.481, + 0.921, + 0.514 + ], + "angle": 0, + "content": "\\[\n\\text {s . 
t .} \\quad \\tilde {\\mathbf {W}} _ {[ n, n ]} \\leq \\frac {1}{N r _ {\\min , c} ^ {2}}, \\tag {14b}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.586, + 0.515, + 0.921, + 0.547 + ], + "angle": 0, + "content": "\\[\n\\tilde {\\mathbf {V}} _ {[ n, n ]} \\leq \\frac {1}{N r _ {\\min , s} ^ {2}}, \\tag {14c}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.586, + 0.549, + 0.921, + 0.568 + ], + "angle": 0, + "content": "\\[\n\\operatorname {T r} \\left(\\boldsymbol {\\eta} \\boldsymbol {\\eta} ^ {H} \\tilde {\\mathbf {W}}\\right) \\geq \\gamma_ {\\mathrm {Q o S}} \\sigma^ {2}, \\tag {14d}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.586, + 0.57, + 0.921, + 0.587 + ], + "angle": 0, + "content": "\\[\n\\operatorname {r a n k} (\\tilde {\\mathbf {W}}) = 1, \\operatorname {r a n k} (\\tilde {\\mathbf {V}}) = 1, \\tag {14e}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.586, + 0.589, + 0.921, + 0.607 + ], + "angle": 0, + "content": "\\[\n\\tilde {\\mathbf {W}} \\succeq \\mathbf {0}, \\tilde {\\mathbf {V}} \\succeq \\mathbf {0}, \\tag {14f}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.586, + 0.611, + 0.921, + 0.627 + ], + "angle": 0, + "content": "\\[\n(1 1 \\mathrm {c}), \\tag {14g}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.638, + 0.922, + 0.729 + ], + "angle": 0, + "content": "where \\(\\mathbf{W}(\\mathbf{x}^{\\mathrm{p}}) = \\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})\\mathbf{w}^{H}(\\mathbf{x}^{\\mathrm{p}})\\), \\(\\tilde{\\mathbf{W}} = \\tilde{\\mathbf{w}}\\tilde{\\mathbf{w}}^{H}\\), \\(\\mathbf{V}(\\mathbf{x}^{\\mathrm{p}}) = \\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})\\mathbf{v}^{H}(\\mathbf{x}^{\\mathrm{p}})\\), \\(\\tilde{\\mathbf{V}} = \\tilde{\\mathbf{v}}\\tilde{\\mathbf{v}}^{H}\\), and \\(\\chi_{2}(\\mathbf{x}^{\\mathrm{p}},\\tilde{\\mathbf{W}},\\tilde{\\mathbf{V}}) = \\| \\tilde{\\mathbf{W}} - \\mathbf{W}(\\mathbf{x}^{\\mathrm{p}})\\|_{F} + \\| \\tilde{\\mathbf{V}} - \\mathbf{V}(\\mathbf{x}^{\\mathrm{p}})\\|_{F}\\). 
To solve the problem (P4), we propose a penalty-based AO algorithm, which alternately optimizes \\(\\{\\tilde{\\mathbf{W}},\\tilde{\\mathbf{V}}\\}\\) and \\(\\{\\mathbf{x}^{\\mathrm{p}}\\}\\) in the inner layer and updates \\(\\varrho\\) in the outer layer." + }, + { + "type": "text", + "bbox": [ + 0.505, + 0.732, + 0.921, + 0.775 + ], + "angle": 0, + "content": "1) Inner layer iteration—subproblem with respect to \\(\\{\\tilde{\\mathbf{W}},\\tilde{\\mathbf{V}}\\}\\): With the fixed \\(\\{\\mathbf{x}^{\\mathrm{p}}\\}\\), the problem (P4) is reduced to" + }, + { + "type": "equation", + "bbox": [ + 0.551, + 0.783, + 0.921, + 0.816 + ], + "angle": 0, + "content": "\\[\n\\left(\\mathrm {P} 5\\right) \\max _ {\\tilde {\\mathbf {W}}, \\tilde {\\mathbf {V}}} \\operatorname {T r} \\left(\\boldsymbol {\\eta} \\boldsymbol {\\eta} ^ {H} \\tilde {\\mathbf {V}}\\right) - \\frac {1}{2 \\varrho} \\chi_ {2} \\left(\\mathbf {x} ^ {\\mathrm {p}}, \\tilde {\\mathbf {W}}, \\tilde {\\mathbf {V}}\\right) \\tag {15a}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.591, + 0.819, + 0.921, + 0.835 + ], + "angle": 0, + "content": "\\[\n\\text {s . t .} \\quad (1 4 b) - (1 4 f). 
\\tag {15b}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.847, + 0.922, + 0.893 + ], + "angle": 0, + "content": "To handle the rank-one constraint, we introduce non-negative auxiliary variables \\(\\{\\varpi_1,\\varpi_2\\}\\) and employ the difference-of-convex (DC) relaxation method [9] to rewrite (14e) as" + }, + { + "type": "equation", + "bbox": [ + 0.524, + 0.902, + 0.921, + 0.942 + ], + "angle": 0, + "content": "\\[\n\\left\\{ \\begin{array}{l} \\Re (\\operatorname {T r} (\\tilde {\\mathbf {W}} ^ {H} (\\mathbf {I} - \\tilde {\\mathbf {w}} _ {\\max } \\tilde {\\mathbf {w}} _ {\\max } ^ {H}))) \\leq \\varpi_ {1}, \\\\ \\Re (\\operatorname {T r} (\\tilde {\\mathbf {V}} ^ {H} (\\mathbf {I} - \\tilde {\\mathbf {v}} _ {\\max } \\tilde {\\mathbf {v}} _ {\\max } ^ {H}))) \\leq \\varpi_ {2}, \\end{array} \\quad i \\in \\{1, 2 \\}, \\right. \\tag {16}\n\\]" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.912, + 0.031, + 0.921, + 0.041 + ], + "angle": 0, + "content": "4" + }, + { + "type": "title", + "bbox": [ + 0.077, + 0.067, + 0.446, + 0.082 + ], + "angle": 0, + "content": "Algorithm 1 Iterative algorithm for rank-one solution." + }, + { + "type": "text", + "bbox": [ + 0.086, + 0.084, + 0.49, + 0.1 + ], + "angle": 0, + "content": "1: Initialize \\(\\tilde{\\mathbf{v}}_{\\mathrm{max}}\\) and \\(\\tilde{\\mathbf{w}}_{\\mathrm{max}}\\). Set a convergence accuracy \\(\\epsilon_{1}\\)." + }, + { + "type": "title", + "bbox": [ + 0.086, + 0.115, + 0.153, + 0.128 + ], + "angle": 0, + "content": "2: repeat" + }, + { + "type": "text", + "bbox": [ + 0.086, + 0.128, + 0.456, + 0.145 + ], + "angle": 0, + "content": "3: update \\(\\{\\tilde{\\mathbf{W}},\\tilde{\\mathbf{V}},\\varpi_i\\}\\) by solving the problem (P6)." 
+ }, + { + "type": "text", + "bbox": [ + 0.087, + 0.145, + 0.378, + 0.16 + ], + "angle": 0, + "content": "4: update the eigenvectors \\(\\{\\tilde{\\mathbf{w}}_{\\mathrm{max}},\\tilde{\\mathbf{v}}_{\\mathrm{max}}\\}\\)." + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.16, + 0.332, + 0.175 + ], + "angle": 0, + "content": "5: update \\(\\varrho_{i} = \\varrho_{i}\\bar{c}_{1}\\) \\((0 < \\bar{c}_1 < 1)\\)." + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.174, + 0.413, + 0.19 + ], + "angle": 0, + "content": "6: until \\(\\sum_{i=1}^{2} \\varpi_i\\) falls below a threshold of \\(\\epsilon_1\\)." + }, + { + "type": "list", + "bbox": [ + 0.086, + 0.128, + 0.456, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.214, + 0.493, + 0.259 + ], + "angle": 0, + "content": "where \\(\\tilde{\\mathbf{w}}_{\\mathrm{max}}\\) and \\(\\tilde{\\mathbf{v}}_{\\mathrm{max}}\\) represent the eigenvectors corresponding to the maximum eigenvalues of \\(\\tilde{\\mathbf{W}}\\) and \\(\\tilde{\\mathbf{V}}\\), respectively. As a result, the problem (P5) can be transformed into" + }, + { + "type": "equation", + "bbox": [ + 0.08, + 0.266, + 0.49, + 0.319 + ], + "angle": 0, + "content": "\\[\n\\left. \\max _ {\\tilde {\\mathbf {W}}, \\tilde {\\mathbf {V}}, \\varpi_ {i}} \\operatorname {T r} \\left(\\boldsymbol {\\eta} \\boldsymbol {\\eta} ^ {H} \\tilde {\\mathbf {V}}\\right) - \\frac {1}{2 \\varrho} \\chi_ {2} \\left(\\mathbf {x} ^ {\\mathrm {p}}, \\tilde {\\mathbf {W}}, \\tilde {\\mathbf {V}}\\right) - \\sum_ {i = 1} ^ {2} \\frac {1}{2 \\varrho_ {i}} \\varpi_ {i} \\right. \\tag {17a}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.148, + 0.323, + 0.49, + 0.34 + ], + "angle": 0, + "content": "\\[\ns. t. 
\\quad \\varpi_ {i} \\geq 0, i \\in \\{1, 2 \\}, \\tag {17b}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.184, + 0.343, + 0.49, + 0.358 + ], + "angle": 0, + "content": "\\[\n(1 4 b) - (1 4 f), (1 6), \\tag {17c}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.367, + 0.492, + 0.426 + ], + "angle": 0, + "content": "where \\(\\varrho_{i}\\) denotes the scaling factor of \\(\\varpi_{i}\\). The problem (P6) is a convex problem and can be directly solved. Thus, the rank-one solution \\(\\{\\tilde{\\mathbf{W}},\\tilde{\\mathbf{V}}\\}\\) can be obtained by carrying out Algorithm 1." + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.427, + 0.492, + 0.487 + ], + "angle": 0, + "content": "2) Inner layer iteration—subproblem with respect to \\(\\{\\mathbf{x}^p\\}\\): Note that the equality constraints \\(\\tilde{\\mathbf{W}} = \\mathbf{W}(\\mathbf{x}^{\\mathrm{p}})\\) and \\(\\tilde{\\mathbf{V}} = \\mathbf{V}(\\mathbf{x}^{\\mathrm{p}})\\) are equivalent to \\(\\tilde{\\mathbf{w}} = \\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})\\) and \\(\\tilde{\\mathbf{v}} = \\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})\\). As a result, the problem (P6) can be transformed into" + }, + { + "type": "equation", + "bbox": [ + 0.124, + 0.495, + 0.49, + 0.517 + ], + "angle": 0, + "content": "\\[\n\\left(\\mathrm {P} 7\\right) \\min _ {\\mathbf {x} ^ {\\mathrm {p}}} \\| \\tilde {\\mathbf {w}} - \\mathbf {w} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) \\| + \\| \\tilde {\\mathbf {v}} - \\mathbf {v} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) \\| \\tag {18a}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.172, + 0.52, + 0.49, + 0.534 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\text {s . t .} \\quad (1 1 c). 
\\end{array} \\tag {18b}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.542, + 0.493, + 0.634 + ], + "angle": 0, + "content": "It is easy to notice that \\( x_{n}^{\\mathrm{p}} \\) and \\( x_{m}^{\\mathrm{p}} \\) (\\( n \\neq m \\)) are separated in the objective function but coupled in the constraint (11c), which motivates us to adopt the element-wise optimization framework. Therefore, with the fixed \\( \\{x_{1}^{\\mathrm{p}}, \\dots, x_{n-1}^{\\mathrm{p}}, x_{n+1}^{\\mathrm{p}}, \\dots, x_{N}^{\\mathrm{p}}\\} \\), the subproblem with respect to \\( x_{n}^{\\mathrm{p}} \\) is given by" + }, + { + "type": "equation", + "bbox": [ + 0.114, + 0.64, + 0.49, + 0.722 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\left(\\mathrm {P} 8\\right) \\min _ {x _ {n} ^ {\\mathrm {p}}} \\left| \\tilde {\\mathbf {w}} _ {[ n ]} - \\frac {e ^ {- \\jmath \\left(\\frac {2 \\pi}{\\lambda} r _ {n} ^ {\\mathrm {c}} \\left(r _ {\\mathrm {c}} , \\varphi_ {\\mathrm {c}}\\right) + \\theta_ {n}\\right)}}{\\sqrt {N} r _ {n} ^ {\\mathrm {c}} \\left(r _ {\\mathrm {c}}, \\varphi_ {\\mathrm {c}}\\right)} \\right| \\\\ + \\left| \\tilde {\\mathbf {v}} _ {[ n ]} - \\frac {e ^ {- \\jmath \\left(\\frac {2 \\pi}{\\lambda} r _ {n} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}} , \\varphi_ {\\mathrm {s}}\\right) + \\theta_ {n}\\right)}}{\\sqrt {N} r _ {n} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}}, \\varphi_ {\\mathrm {s}}\\right)} \\right| \\tag {19a} \\\\ \\end{array}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.16, + 0.723, + 0.49, + 0.74 + ], + "angle": 0, + "content": "\\[\ns. t. 
\\quad x _ {n - 1} ^ {\\mathrm {p}} + \\Delta x \\leq x _ {n} ^ {\\mathrm {p}} \\leq x _ {n + 1} ^ {\\mathrm {p}} - \\Delta x, \\tag {19b}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.197, + 0.741, + 0.49, + 0.77 + ], + "angle": 0, + "content": "\\[\n\\frac {- L}{2} \\leq x _ {n} ^ {\\mathrm {p}} \\leq \\frac {L}{2}. \\tag {19c}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.776, + 0.49, + 0.805 + ], + "angle": 0, + "content": "Then, the optimal \\( x_{n}^{\\mathrm{p}} \\) can be obtained by the low-complexity one-dimensional search." + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.807, + 0.492, + 0.838 + ], + "angle": 0, + "content": "3) Outer layer iteration: In the outer layer, we initialize a large \\(\\varrho\\) and update \\(\\varrho\\) at each outer iteration by" + }, + { + "type": "equation", + "bbox": [ + 0.252, + 0.847, + 0.49, + 0.862 + ], + "angle": 0, + "content": "\\[\n\\varrho = \\varrho \\bar {c} _ {2}, \\tag {20}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.87, + 0.49, + 0.914 + ], + "angle": 0, + "content": "where \\(0 < \\bar{c}_2 < 1\\) is the iteration coefficient of the penalty terms." + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.915, + 0.492, + 0.946 + ], + "angle": 0, + "content": "The proposed penalty-based AO algorithm is summarized in Algorithm 2, which is guaranteed to converge to at least a" + }, + { + "type": "title", + "bbox": [ + 0.507, + 0.067, + 0.796, + 0.082 + ], + "angle": 0, + "content": "Algorithm 2 Penalty-based AO algorithm." + }, + { + "type": "text", + "bbox": [ + 0.516, + 0.084, + 0.921, + 0.114 + ], + "angle": 0, + "content": "1: Parameter Initialization. Set the convergence accuracy \\(\\epsilon_{2}\\) and \\(\\epsilon_{3}\\)." 
+ }, + { + "type": "title", + "bbox": [ + 0.517, + 0.115, + 0.583, + 0.128 + ], + "angle": 0, + "content": "2: repeat" + }, + { + "type": "title", + "bbox": [ + 0.517, + 0.129, + 0.599, + 0.143 + ], + "angle": 0, + "content": "3: repeat" + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.144, + 0.872, + 0.159 + ], + "angle": 0, + "content": "4: update \\(\\{\\tilde{\\mathbf{w}},\\tilde{\\mathbf{v}}\\}\\) by carrying out Algorithm 1." + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.16, + 0.872, + 0.175 + ], + "angle": 0, + "content": "5: update \\(\\mathbf{x}_{\\mathrm{p}}\\) via the element-wise optimization." + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.175, + 0.921, + 0.204 + ], + "angle": 0, + "content": "6: until the objective value converges with an accuracy of \\(\\epsilon_{2}\\)." + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.205, + 0.751, + 0.219 + ], + "angle": 0, + "content": "7: update \\(\\varrho = \\varrho \\bar{c}_2\\) \\((0 < \\bar{c}_2 < 1)\\)" + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.219, + 0.855, + 0.235 + ], + "angle": 0, + "content": "8: until \\(\\| \\tilde{\\mathbf{W}} -\\mathbf{W}(\\mathbf{x}^{\\mathrm{p}})\\|_{F} + \\| \\tilde{\\mathbf{V}} -\\mathbf{V}(\\mathbf{x}^{\\mathrm{p}})\\|_{F}\\leq \\epsilon_{3}\\)" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.084, + 0.921, + 0.235 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.57, + 0.268, + 0.845, + 0.44 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.505, + 0.451, + 0.88, + 0.464 + ], + "angle": 0, + "content": "Fig. 2. The illumination power versus the transmit power at the BS." + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.481, + 0.922, + 0.574 + ], + "angle": 0, + "content": "stationary point solution. The computational complexity of Algorithm 2 mainly depends on solving the SDP problems (P6) and the one-dimensional exhaustive search. 
It is given by \\(\\mathcal{O}\\Big(\\log (\\frac{1}{\\epsilon_3})\\log (\\frac{1}{\\epsilon_2})\\big[\\log (\\frac{1}{\\epsilon_1})N^{3.5} + N\\bar{Q}\\big]\\Big)\\) [10], where \\(\\bar{Q}\\) represents the number of the quantization bits during the one-dimensional exhaustive search." + }, + { + "type": "title", + "bbox": [ + 0.619, + 0.6, + 0.807, + 0.613 + ], + "angle": 0, + "content": "IV. NUMERICAL RESULTS" + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.623, + 0.922, + 0.803 + ], + "angle": 0, + "content": "This section evaluates the performance of the proposed PASS-ISAC framework. A 3D topological network setup is considered, where the dielectric waveguide is located in the x-o-z plane with a height of \\(d\\) and a length of \\(50\\mathrm{m}\\). The communicating user and the sensing target are located in a square region centered at the origin in the x-o-y plane. Unless otherwise specified, the default simulation parameters are set as: \\(\\sigma^2 = -105\\) dBm, \\(f = 28\\) GHz, \\(d = 10\\mathrm{m}\\), \\(r_{\\mathrm{s}} = 30\\mathrm{m}\\), \\(\\varphi_{\\mathrm{s}} = \\frac{\\pi}{3}\\), \\(r_{\\mathrm{c}} = 15\\sqrt{2}\\mathrm{m}\\), \\(\\varphi_{\\mathrm{c}} = \\frac{5\\pi}{4}\\), \\(N = 16\\), \\(\\eta_{\\mathrm{eff}} = 1.4\\), \\(R_{\\mathrm{QoS}} = 10\\) bps/Hz, \\(\\epsilon_1 = \\epsilon_2 = \\epsilon_3 = \\epsilon_4 = 10^{-3}\\), and \\(\\alpha_{\\mathrm{s}} = 1\\). The other network parameters are shown in the captions of the figures." 
+ }, + { + "type": "text", + "bbox": [ + 0.504, + 0.805, + 0.921, + 0.836 + ], + "angle": 0, + "content": "To validate the performance of the proposed scheme, the following baseline schemes are considered in this paper:" + }, + { + "type": "text", + "bbox": [ + 0.521, + 0.84, + 0.922, + 0.944 + ], + "angle": 0, + "content": "- Conventional antenna: In this scheme, we deploy \\(N\\) conventional uniform linear array (ULA) at the BS as the transmitting antenna with an antenna spacing of \\(\\frac{\\lambda}{2}\\). For fairness comparison, the transmitting antennas are connected to one RF chain and each antenna is associated with an analog phase shifter, which can be varied from 0 to \\(2\\pi\\)." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.912, + 0.031, + 0.921, + 0.041 + ], + "angle": 0, + "content": "5" + }, + { + "type": "image", + "bbox": [ + 0.14, + 0.08, + 0.414, + 0.25 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.075, + 0.262, + 0.49, + 0.287 + ], + "angle": 0, + "content": "Fig. 3. The illumination power versus the rotation angle of the dielectric waveguide, where \\( P_{\\mathrm{T}} = 70 \\) dBm." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.293, + 0.492, + 0.367 + ], + "angle": 0, + "content": "- Fixed pinching antenna: In this scheme, \\( N \\) pinching antennas are uniformly spread along the dielectric waveguide, where the in-waveguide and free-space channels are determined by the fixed positions of the pinching antennas." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.369, + 0.492, + 0.489 + ], + "angle": 0, + "content": "- Semi-continuous activation: In the semi-continuous activation scheme, we assume there are \\(N\\) pinching antennas uniformly distributed along the dielectric waveguide, which are predetermined and cannot be changed. 
However, the pinching antennas are allowed to be adjusted in a small-scale range to alter the phase-shift response of the pinching beamforming, which has a negligible impact on the large-scale path loss." + }, + { + "type": "list", + "bbox": [ + 0.092, + 0.293, + 0.492, + 0.489 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.074, + 0.492, + 0.491, + 0.703 + ], + "angle": 0, + "content": "In Fig. 2, we can observe that the pinching antenna achieves the highest illumination power compared to the other baseline schemes. This result can be expected because, compared with the baseline schemes, pinching antennas can be flexibly repositioned to attenuate the large-scale path loss between the pinching antennas and the receiving ends. Thus, more spatial degrees-of-freedom (DoFs) are provided to favor the communication and sensing performance. On the other hand, although the semi-continuous activation scheme cannot reduce the path loss by adjusting the antenna position over a wide range, it exhibits superior performance to the conventional antenna scheme because pinching antennas are spread over the entire communication/sensing area, which averagely closer to the receiving ends." + }, + { + "type": "text", + "bbox": [ + 0.074, + 0.704, + 0.492, + 0.914 + ], + "angle": 0, + "content": "Fig. 3 depicts the relationship between the illumination power and the number of activated pinching antennas, with a comparison of the proportional power allocation model. For fairness comparison, \\(\\alpha_{\\mathrm{s}} = 0.9\\) for two power allocation models. As can be observed, the illumination power increases as the number of pinching antennas increases, which is because an increasing number of pinching antennas can improve the beam resolution and reduce the power leakage in irrelevant regions, thereby raising the illumination power at the target. 
It is also observed that the proportional power allocation is slightly inferior to the equal power allocation model, which verifies the effectiveness of the pinching antennas based on proportional power allocation model in reconfiguring signal propagation." + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.915, + 0.492, + 0.947 + ], + "angle": 0, + "content": "Fig. 4 investigates the impact of the rotation angle of the dielectric waveguide on illumination power at the target." + }, + { + "type": "image", + "bbox": [ + 0.57, + 0.08, + 0.847, + 0.252 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.504, + 0.262, + 0.921, + 0.287 + ], + "angle": 0, + "content": "Fig. 4. The illumination power versus the rotation angle of the dielectric waveguide, where \\( P_{\\mathrm{T}} = 70 \\) dBm." + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.294, + 0.922, + 0.504 + ], + "angle": 0, + "content": "Here, we assume the dielectric waveguide can be rotated in a clockwise direction parallel to the x-o-y plane, where the rotation angle is defined as the angle entwined by the dielectric waveguide and the x-axis. From Fig. 4, it is shown that the illumination power first increases and then decreases as the rotation angle grows. This is due to the fact that when the rotation angle is \\(60^{\\circ}\\), the target is located underneath the dielectric waveguide, and it receives the maximal illumination power. As the rotation angle further rises, the distance between the target and the pinching antenna becomes large, so the illumination power gradually decreases. In addition, raising the height of the dielectric waveguide increases the average distance from the pinching antennas to the user and target, thus, the illumination power decreases as \\(d\\) increases." + }, + { + "type": "title", + "bbox": [ + 0.652, + 0.526, + 0.774, + 0.54 + ], + "angle": 0, + "content": "V. 
CONCLUSION" + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.547, + 0.922, + 0.682 + ], + "angle": 0, + "content": "A novel PASS-ISAC framework has been proposed, where the pinching beamforming was exploited to realize the simultaneous C&S transmission. A separated ISAC design was proposed for the two-waveguide PASS. A penalty-based AO algorithm was proposed to maximize the illumination power at the target while guaranteeing the QoS requirement of the communication user. Simulation results were provided to verify the superiority of the proposed PASS-ISAC framework over the other baseline schemes." + }, + { + "type": "title", + "bbox": [ + 0.668, + 0.693, + 0.765, + 0.706 + ], + "angle": 0, + "content": "REFERENCES" + }, + { + "type": "ref_text", + "bbox": [ + 0.515, + 0.716, + 0.922, + 0.751 + ], + "angle": 0, + "content": "[1] L. Zhu, W. Ma, and R. Zhang, \"Movable antennas for wireless communication: Opportunities and challenges,\" IEEE Commun. Mag., vol. 62, no. 6, pp. 114-120, Jun. 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.515, + 0.751, + 0.922, + 0.784 + ], + "angle": 0, + "content": "[2] W. K. New, K.-K. Wong et al., \"A tutorial on fluid antenna system for 6G networks: Encompassing communication theory, optimization methods and hardware designs,\" IEEE Commun. Surv. Tut., pp. 1-1, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.514, + 0.785, + 0.922, + 0.818 + ], + "angle": 0, + "content": "[3] A. Fukuda, H. Yamamoto, H. Okazaki, Y. Suzuki, and K. Kawai, \"Pinching antenna: Using a dielectric waveguide as an antenna,\" NTT DOCOMO Technical J., vol. 23, no. 3, pp. 5-12, Jan. 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.514, + 0.818, + 0.921, + 0.842 + ], + "angle": 0, + "content": "[4] Z. Ding, R. Schober, and H. Vincent Poor, \"Flexible-antenna systems: A pinching-antenna perspective,\" IEEE Trans. Commun., pp. 1-1, 2025." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.514, + 0.842, + 0.921, + 0.875 + ], + "angle": 0, + "content": "[5] F. Liu, Y. Cui et al., \"Integrated sensing and communications: Toward dual-functional wireless networks for 6G and beyond,\" IEEE J. Sel. Areas Commun., vol. 40, no. 6, pp. 1728-1767, Jun. 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.514, + 0.875, + 0.921, + 0.91 + ], + "angle": 0, + "content": "[6] Y. Liu, Z. Wang, X. Mu, C. Ouyang, X. Xu, and Z. Ding, “Pinching antenna systems (PASS): Architecture designs, opportunities, and outlook,” arXiv preprint arXiv:2501.18409, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.514, + 0.91, + 0.921, + 0.944 + ], + "angle": 0, + "content": "[7] Z. Wang, C. Ouyang, X. Mu, Y. Liu, and Z. Ding, \"Modeling and beamforming optimization for pinching-antenna systems,\" arXiv preprint arXiv:2502.05917, 2025." + }, + { + "type": "list", + "bbox": [ + 0.514, + 0.716, + 0.922, + 0.944 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.912, + 0.032, + 0.921, + 0.04 + ], + "angle": 0, + "content": "6" + }, + { + "type": "ref_text", + "bbox": [ + 0.083, + 0.072, + 0.49, + 0.106 + ], + "angle": 0, + "content": "[8] W. Hao, H. Shi et al., \"Joint beamforming design for active RIS-aided THz ISAC systems with delay alignment modulation,\" IEEE Wireless Communications Letters, vol. 12, no. 10, pp. 1816-1820, Oct. 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.083, + 0.107, + 0.491, + 0.141 + ], + "angle": 0, + "content": "[9] T. Jiang and Y. Shi, \"Over-the-air computation via intelligent reflecting surfaces,\" in Proc. IEEE Global Commun. Conf. (GLOBECOM), Waikoloa, HI, USA. Dec. 2019, pp. 1-6." + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.141, + 0.49, + 0.175 + ], + "angle": 0, + "content": "[10] Z.-Q. Luo, W.-K. Ma, A. M.-C. So, Y. Ye, and S. Zhang, “Semidefinite relaxation of quadratic optimization problems,” IEEE Signal Process. Mag., vol. 27, no. 
3, pp. 20-34, May. 2010." + }, + { + "type": "list", + "bbox": [ + 0.08, + 0.072, + 0.491, + 0.175 + ], + "angle": 0, + "content": null + } + ] +] \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07709/a6a116d9-c584-4299-91c7-a46bfdb58f50_origin.pdf b/data/2025/2504_07xxx/2504.07709/a6a116d9-c584-4299-91c7-a46bfdb58f50_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3c9e859c2eb835435ac358171517d59821aad873 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07709/a6a116d9-c584-4299-91c7-a46bfdb58f50_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:07dbacc2e48b84f54894fc69edaa1f08b870a1de66dd64dc3b113dad942ba4fc +size 372809 diff --git a/data/2025/2504_07xxx/2504.07709/full.md b/data/2025/2504_07xxx/2504.07709/full.md new file mode 100644 index 0000000000000000000000000000000000000000..3ca90d848f52ff7df43a961b4659d5310a0812bf --- /dev/null +++ b/data/2025/2504_07xxx/2504.07709/full.md @@ -0,0 +1,337 @@ +# Integrated Sensing and Communications for Pinching-Antenna Systems (PASS) + +Zheng Zhang, Zhaolin Wang, Xidong Mu Bingtao He, Jian Chen, and Yuanwei Liu + +Abstract—An integrated sensing and communication (ISAC) design for pinching antenna systems (PASS) is proposed, where the pinching antennas are deployed to establish reliable line-of-sight communication and sensing links. More particularly, a separated ISAC design is proposed for the two-waveguide PASS, where one waveguide is used to emit the information-bearing signals for ISAC transmission while the other waveguide is used to receive the reflected echo signals. Based on this framework, a penalty-based alternating optimization algorithm is proposed to maximize the illumination power as well as ensure the communication quality-of-service requirement. Numerical results demonstrate that the proposed PASS-ISAC scheme outperforms the conventional antenna scheme. 
+ +Index Terms—Beamforming design, integrated sensing and communication, pinching antenna systems. + +# I. INTRODUCTION + +Fuelled by the burgeoning demands for massive data transmission and pervasive network coverage, flexible antennas have emerged as a promising technique for sixth-generation (6G) cellular systems. Benefiting from their ability to reconfigure the wireless channel, flexible antennas can significantly enhance the throughput of wireless networks. However, traditional flexible antennas (e.g., movable antennas [1] and fluid antennas [2]) merely permit the adjustment of the antenna position within a range of orders of magnitude comparable to the carrier wavelength. Against this backdrop, the pinching antenna has emerged [3], which is a type of dielectric waveguide-based leaky wave antenna. By applying dielectric particles to a particular point on the dielectric waveguide, a pinching antenna can be activated to establish EM radiation fields and form a communication area [4]. Then, the EM signal inside the dielectric waveguide will be radiated from the pinching antenna to free space with a defined phase shift adjustment (referred to as the pinching beamformer). Notably, as the dielectric waveguide can be pinched at any position to radiate radio waves, the pinching antenna can flexibly move along the dielectric waveguide over a length of dozens of meters, thereby relocating to the closest position to the receiver and creating reliable LoS links. + +To enable emerging applications, such as autonomous driving, extended reality, and the Metaverse, sensing functionality is recognized as an important indicator of future networks. + +Zheng Zhang, Bingtao He, and Jian Chen are with the School of Telecommunications Engineering, Xidian University, Xi'an 710071, China (e-mail: zhang_688@stu.xidian.edu.cn; bthe@xidian.edu.cn; jianchen@mail.xidian.edu.cn). 
+Zhaolin Wang is with the School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, U.K. (e-mail: zhaolin.wang@qmul.ac.uk). +Xidong Mu is with Queen's University Belfast, Belfast, BT3 9DT, U.K. (email: x.mu@qub.ac.uk) +Yuanwei Liu is with the Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong (e-mail: yuanwei@hku.hk). + +![](images/0b2360b47c060dad9e520bb285cea2d5b27573ca11ad770199a091aea89544a9.jpg) +Fig. 1. The separated ISAC design for PASS. + +In pursuit of this vision, the integrated sensing and communication (ISAC) technology has drawn significant attention recently [5], which aims to leverage the cellular network hardware platforms and dedicated signal processing algorithms to achieve the incorporation of communication and sensing functionalities. Recently, it has been claimed that conducting ISAC transmission in the pinching antenna systems (PASS) can further upgrade the communication and sensing (C&S) performance of the network [6]. On the one hand, the pinching antenna can be flexibly repositioned to augment the echo signal energy. On the other hand, the wide-range mobility characteristic of pinching antennas results in an antenna aperture spanning dozens of meters. It inherently enables nearfield sensing, e.g., the possibility of simultaneous angular and distance information estimation and even target velocity sensing, thereby offering a more comprehensive and accurate sensing of the surrounding environment. Nevertheless, as of the present moment, research in the PASS-ISAC remains conspicuously absent. + +Motivated by the above, this paper proposes a separated ISAC design for PASS. To elaborate, the base station (BS) is connected with two dielectric waveguides, where one waveguide is used to transmit the downlink signals, while the other is employed to collect the reflected echo signals from the target. 
We aim to maximize the illumination power at the target while satisfying the quality-of-service (QoS) requirement of the communication user by optimizing the pinching beamforming offered by the mobility of pinching antennas. A penalty-based alternating optimization (AO) algorithm is proposed to handle the non-convex optimization problem, where the positions of pinching antennas are updated in an element-wise manner. Numerical results demonstrate the superiority of the proposed scheme over the baseline schemes. + +# II. SYSTEM MODEL AND PROBLEM FORMULATION + +As shown in Fig. 1, we consider a PASS-ISAC system, where a dual-function BS communicates with a single-antenna communication user while sensing a point-like target. The BS is connected with two dielectric waveguides of length $L$ , each of which consists of $N$ pinching antennas. To achieve the simultaneous C&S transmission, a separated ISAC design is proposed. Specifically, the downlink information-bearing signals are emitted from one waveguide (referred to as transmitting antennas). Then, the reflected echoes from the target are collected at the other waveguide (referred to as receiving antennas) and forwarded to the BS for parameter estimation. + +A three-dimensional (3D) coordinate system is considered, where the two dielectric waveguides extending from the BS are assumed to be parallel to the x-axis with respect to the x-o-y plane at a height $d$ . The positions of the $n$ -th pinching antennas distributed along the transmitting and receiving dielectric waveguides are denoted as $\psi_{n}^{\mathrm{p}} = (x_{n}^{\mathrm{p}},0,d)$ and $\psi_{n}^{\mathrm{q}} = (x_{n}^{\mathrm{q}},y^{\mathrm{q}},d)$ , respectively. The communication user and the sensing target are located in the x-o-y plane. Let $r_{\mathrm{c}}$ and $\varphi_{\mathrm{c}}$ denote the distance and the azimuth angle of the communication user relative to the origin of the coordinate system.
Thus, the coordinates of the communication user are given by $\psi^{\mathrm{c}} = (r_{\mathrm{c}}\cos \varphi_{\mathrm{c}},r_{\mathrm{c}}\sin \varphi_{\mathrm{c}},0)$ . Similarly, the target is located at $\psi^{\mathrm{s}} = (r_{\mathrm{s}}\cos \varphi_{\mathrm{s}},r_{\mathrm{s}}\sin \varphi_{\mathrm{s}},0)$ . Furthermore, we assume the target is a static node or moves at a low speed, so the Doppler effect is neglected in this work. + +# A. Channel Model + +In the considered network, the pinching antennas are non-uniformly deployed along the dielectric waveguide covering the entire range of the user's activity, which implies that the aperture of the pinching antennas may have the same order of magnitude as the signal transmission distance. Without loss of accuracy, we adopt the spherical-wave-based nearfield channel model, where only the LoS path is considered. Consequently, the distance from the $n$ -th pinching antenna to the target ($\zeta = \mathrm{s}$) or the communication user ($\zeta = \mathrm{c}$) is given by + +$$ +\begin{array}{l} r _ {n} ^ {\zeta} \left(r _ {\zeta}, \varphi_ {\zeta}\right) = \left\| \psi^ {\zeta} - \psi_ {n} ^ {\mathrm {p}} \right\| \\ = \sqrt {r _ {\zeta} ^ {2} - 2 r _ {\zeta} \cos \varphi_ {\zeta} x _ {n} ^ {\mathrm {p}} + \left(x _ {n} ^ {\mathrm {p}}\right) ^ {2} + d ^ {2}}, \quad \zeta \in \{\mathrm {s}, \mathrm {c} \}. \tag {1} \end{array} +$$ + +Thus, the free-space channel vectors from the transmitting antennas to the target and the communication user can be expressed as + +$$ +\mathbf {h} _ {\mathrm {s}} \left(\mathbf {x} ^ {\mathrm {p}}\right) = \left[ \frac {\eta^ {\frac {1}{2}} e ^ {- \jmath \frac {2 \pi}{\lambda} r _ {1} ^ {\mathrm {s}} \left(r _ {\mathrm {s}} , \varphi_ {\mathrm {s}}\right)}}{r _ {1} ^ {\mathrm {s}} \left(r _ {\mathrm {s}}, \varphi_ {\mathrm {s}}\right)}, \dots , \frac {\eta^ {\frac {1}{2}} e ^ {- \jmath \frac {2 \pi}{\lambda} r _ {N} ^ {\mathrm {s}} \left(r _ {\mathrm {s}} , \varphi_ {\mathrm {s}}\right)}}{r _ {N} ^ {\mathrm {s}} \left(r _ {\mathrm {s}}, \varphi_ {\mathrm
{s}}\right)} \right] ^ {H}, \tag {2} +$$ + +$$ +\mathbf {h} _ {\mathrm {c}} \left(\mathbf {x} ^ {\mathrm {p}}\right) = \left[ \frac {\eta^ {\frac {1}{2}} e ^ {- j \frac {2 \pi}{\lambda} r _ {1} ^ {\mathrm {c}} (r , \varphi_ {\mathrm {c}})}}{r _ {1} ^ {\mathrm {c}} (r , \varphi_ {\mathrm {c}})}, \dots , \frac {\eta^ {\frac {1}{2}} e ^ {- j \frac {2 \pi}{\lambda} r _ {N} ^ {\mathrm {c}} (r , \varphi_ {\mathrm {c}})}}{r _ {N} ^ {\mathrm {c}} (r , \varphi_ {\mathrm {c}})} \right] ^ {H}, \tag {3} +$$ + +where $\mathbf{x}^{\mathrm{p}} = [x_1^{\mathrm{p}},\dots ,x_N^{\mathrm{p}}]$ denotes the coordinates of pinching antennas, $\lambda = \frac{c}{f_{\mathrm{c}}}$ denotes the wavelength, $f_{\mathrm{c}}$ is the frequency of the carrier wave, $\eta = \frac{c^2}{16\pi^2f_c^2}$ , and $c$ denotes the speed of light. + +In this paper, the BS aims to utilize the communication signal to achieve simultaneous communication and target sensing. Consider a coherent time block of length $T$ , the + +communication channel condition and the sensing parameters are assumed to remain unchanged during one coherent time block. Thus, the emitted signal at the $t$ -th time slot is given by $s(t) \in \mathbb{C}$ , which is assumed to be normalized and independently distributed, i.e., $\mathbb{E}\{|s(t)|^2\} = 1$ and $\mathbb{E}\{s(t)s^*(\bar{t})\} = 0$ . On receiving $s(t)$ , the dielectric waveguide radiates the signal $\mathbf{x}(t) = \sqrt{P_{\mathrm{T}}} \mathbf{g}(\mathbf{x}^{\mathrm{p}}) s(t)$ , where $\mathbf{g}(\mathbf{x}^{\mathrm{p}})$ denotes the in-waveguide channel and can be expressed as + +$$ +\mathbf {g} \left(\mathbf {x} ^ {\mathrm {p}}\right) = \left[ \sqrt {\alpha_ {1}} e ^ {- \jmath \theta_ {1}}, \dots , \sqrt {\alpha_ {N}} e ^ {- \jmath \theta_ {N}} \right] ^ {T}, \tag {4} +$$ + +where $\theta_{n}$ denotes the radiation phase shift at the $n$ -th pinching antenna, and $P_{\mathrm{T}}$ denotes the transmit power at the BS. 
$\alpha_{n}$ denotes the power allocation coefficients at the $n$ -th pinching antenna, which can be modeled as the equal power allocation model $\sqrt{\alpha_n} = \sqrt{\frac{\alpha_s}{N}}$ [4] or the proportional power allocation model $\sqrt{\alpha_n} = \delta (\sqrt{1 - \delta^2})^{n - 1}$ [7]. $\delta = \sqrt{1 - (1 - \alpha_s)^{\frac{1}{N}}}$ represents the proportional coefficient, and $\alpha_{s} = \sum_{n = 1}^{N}\alpha_{n}$ denotes the radiation coefficient of pinching antennas. For ease of implementation, the equal power allocation model is considered in this paper. $\theta_{n}$ is defined by $2\pi \eta_{\mathrm{eff}}\frac{\|\psi_0^{\mathrm{p}} - \psi_n^{\mathrm{p}}\|}{\lambda}$ , where $\psi_0^{\mathrm{p}}$ denotes the location of the feed point, and $\eta_{\mathrm{eff}}$ denotes the effective refractive index of the dielectric waveguide. + +# B. Signal Model + +With the above channel model, it is readily observed that the positions of pinching antennas have a significant impact on both the free space channel $\{\mathbf{h}_{\mathrm{s}}(\mathbf{x}^{\mathrm{p}}), \mathbf{h}_{\mathrm{c}}(\mathbf{x}^{\mathrm{p}})\}$ and the in-waveguide channel $\mathbf{g}(\mathbf{x}^{\mathrm{p}})$ . As a result, it becomes possible to establish favorable wireless propagation while manipulating the radiated characteristics of signals by altering the positions of pinching antennas in the PASS. To characterize the two aspects of the signal reconfiguration capabilities of pinching antennas, we refer to it as pinching beamforming in this paper. Let $\mathbf{w}(\mathbf{x}^{\mathrm{p}})$ and $\mathbf{v}(\mathbf{x}^{\mathrm{p}})$ denote the pinching beamforming for the communication user and the sensing target, which are also the functions of $\mathbf{x}^{\mathrm{p}}$ . 
$\mathbf{w}(\mathbf{x}^{\mathrm{p}})$ and $\mathbf{v}(\mathbf{x}^{\mathrm{p}})$ are given by + +$$ +\mathbf {w} \left(\mathbf {x} ^ {\mathrm {p}}\right) = \left[ \frac {e ^ {- j \left(\frac {2 \pi}{\lambda} r _ {1} ^ {\mathrm {c}} \left(r _ {\mathrm {c}} , \varphi_ {\mathrm {c}}\right) + \theta_ {1}\right)}}{\frac {1}{\sqrt {\alpha_ {1}}} r _ {1} ^ {\mathrm {c}} \left(r _ {\mathrm {c}} , \varphi_ {\mathrm {c}}\right)}, \dots , \frac {e ^ {- j \left(\frac {2 \pi}{\lambda} r _ {N} ^ {\mathrm {c}} \left(r _ {\mathrm {c}} , \varphi_ {\mathrm {c}}\right) + \theta_ {N}\right)}}{\frac {1}{\sqrt {\alpha_ {N}}} r _ {N} ^ {\mathrm {c}} \left(r _ {\mathrm {c}} , \varphi_ {\mathrm {c}}\right)} \right] ^ {T}, \tag {5} +$$ + +$$ +\mathbf {v} \left(\mathbf {x} ^ {\mathrm {p}}\right) = \left[ \frac {e ^ {- \jmath \left(\frac {2 \pi}{\lambda} r _ {1} ^ {\mathrm {s}} \left(r _ {\mathrm {s}} , \varphi_ {\mathrm {s}}\right) + \theta_ {1}\right)}}{\frac {1}{\sqrt {\alpha_ {1}}} r _ {1} ^ {\mathrm {s}} \left(r _ {\mathrm {s}} , \varphi_ {\mathrm {s}}\right)}, \dots , \frac {e ^ {- \jmath \left(\frac {2 \pi}{\lambda} r _ {N} ^ {\mathrm {s}} \left(r _ {\mathrm {s}} , \varphi_ {\mathrm {s}}\right) + \theta_ {N}\right)}}{\frac {1}{\sqrt {\alpha_ {N}}} r _ {N} ^ {\mathrm {s}} \left(r _ {\mathrm {s}} , \varphi_ {\mathrm {s}}\right)} \right] ^ {T}. \tag {6} +$$ + +In this paper, we consider an ideal activation model of the pinching antenna, i.e., continuous activation. It indicates that the pinching antennas can be activated at any position of the dielectric waveguide. Thus, the positions of pinching antennas satisfy + +$$ +\mathbf {x} ^ {\mathrm {p}} \in \mathcal {X} = \left\{\left| x _ {n} ^ {\mathrm {p}} - x _ {m} ^ {\mathrm {p}} \right| \geq \Delta x (n \neq m), x _ {n} ^ {\mathrm {p}} \in \left[ - \frac {L}{2}, \frac {L}{2} \right] \right\}, \tag {7} +$$ + +where $\Delta x$ represents the minimum antenna space between two adjacent pinching antennas. 
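To make Eqs. (1) and (5)-(6) concrete, the following Python sketch (our own illustration, not code from the paper; all function and variable names are assumptions) builds the antenna-to-node distances and the pinching beamformer entries under the equal power allocation model:

```python
import numpy as np

def distances(x_p, r, phi, d):
    """Eq. (1): distance from each pinching antenna at (x_n, 0, d) to a
    node at (r*cos(phi), r*sin(phi), 0) in the x-o-y plane."""
    return np.sqrt(r**2 - 2.0 * r * np.cos(phi) * x_p + x_p**2 + d**2)

def pinching_beamformer(x_p, r, phi, d, lam, eta_eff, alpha, x_feed=0.0):
    """Eqs. (5)-(6): entries sqrt(alpha_n) * e^{-j(2*pi*r_n/lam + theta_n)} / r_n,
    with in-waveguide phase theta_n = 2*pi*eta_eff*|x_feed - x_n|/lam as in Eq. (4)."""
    r_n = distances(x_p, r, phi, d)
    theta = 2.0 * np.pi * eta_eff * np.abs(x_feed - x_p) / lam
    return np.sqrt(alpha) * np.exp(-1j * (2.0 * np.pi / lam * r_n + theta)) / r_n

# Usage with the paper's default geometry: N = 16, d = 10 m, f = 28 GHz,
# a 50 m waveguide, and equal power allocation with alpha_s = 1.
N, d, lam = 16, 10.0, 3e8 / 28e9
x_p = np.linspace(-25.0, 25.0, N)
alpha = np.full(N, 1.0 / N)
w = pinching_beamformer(x_p, 15.0 * np.sqrt(2.0), 5.0 * np.pi / 4.0, d, lam, 1.4, alpha)
```

Each entry has magnitude $\sqrt{\alpha_n} / r_n$, so relocating an antenna closer to the receiver directly reduces the large-scale path loss, which is exactly the degree of freedom the pinching beamforming exploits.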
+ +1) Communication Performance Metric: With the aforementioned signal model, the received signals at the communication user are given by + +$$ +\begin{array}{l} y (t) = \sqrt {P _ {\mathrm {T}}} \mathbf {h} _ {\mathrm {c}} ^ {H} (\mathbf {x} ^ {\mathrm {p}}) \mathbf {g} (\mathbf {x} ^ {\mathrm {p}}) s (t) + n (t) \\ = \sqrt {P _ {\mathrm {T}}} \boldsymbol {\eta} ^ {H} \mathbf {w} \left(\mathbf {x} ^ {\mathrm {p}}\right) s (t) + n (t), \tag {8} \\ \end{array} +$$ + +where $\pmb {\eta} = [\eta^{\frac{1}{2}},\dots ,\eta^{\frac{1}{2}}]^{T}\in \mathbb{C}^{N\times 1}$ is a constant vector, and $n(t)\sim \mathcal{CN}(0,\sigma^2)$ denotes the additive white Gaussian noise (AWGN) at the communication user. Hence, the achievable rate of the communication user is given by + +$$ +R = \log_ {2} \left(1 + \frac {P _ {\mathrm {T}} \left| \boldsymbol {\eta} ^ {H} \mathbf {w} \left(\mathbf {x} ^ {\mathrm {p}}\right) \right| ^ {2}}{\sigma^ {2}}\right). \tag {9} +$$ + +2) Sensing Performance Metric: For target sensing, we adopt the illumination power as the performance metric, which characterizes the received sensing signal power at the target [8]. Thus, the illumination power with respect to azimuth angle $\varphi_{\mathrm{s}}$ and distance $r_{\mathrm{s}}$ is given by + +$$ +\begin{array}{l} P _ {\mathrm {s}} = \mathbb {E} \left\{\left| \sqrt {P _ {\mathrm {T}}} \mathbf {h} _ {\mathrm {s}} ^ {H} (\mathbf {x} ^ {\mathrm {p}}) \mathbf {g} (\mathbf {x} ^ {\mathrm {p}}) s (t) \right| ^ {2} \right\} \\ = P _ {\mathrm {T}} \boldsymbol {\eta} ^ {H} \mathbf {v} \left(\mathbf {x} ^ {\mathrm {p}}\right) \mathbf {v} ^ {H} \left(\mathbf {x} ^ {\mathrm {p}}\right) \boldsymbol {\eta}. \tag {10} \\ \end{array} +$$ + +# C. 
Problem Formulation + +In this paper, we aim to maximize the illumination power $P_{\mathrm{s}}$ by designing the pinching beamformer, under the transmit power budget and the communication QoS requirement, which is formulated as + +$$ +\left(\mathrm {P} 1\right) \quad \max _ {\mathbf {x} ^ {\mathrm {p}}} P _ {\mathrm {s}} \tag {11a} +$$ + +$$ +\text {s.t.} \quad R \geq R _ {\mathrm {QoS}}, \tag {11b} +$$ + +$$ +\mathbf {x} ^ {\mathrm {p}} \in \mathcal {X}, \tag {11c} +$$ + +where $R_{\mathrm{QoS}}$ denotes the QoS requirement of the communication user. The problem (P1) is challenging to solve due to the quadratic objective function and the coupled variables. + +# III. PINCHING BEAMFORMING OPTIMIZATION + +In this section, we focus on the C&S transmission design by optimizing the pinching beamforming. To deal with the coupled optimization variables, a penalty-based AO algorithm is proposed, where $\{\mathbf{x}^{\mathrm{p}}\}$ is optimized in an element-wise manner. + +To facilitate the optimization, we can rewrite the problem (P1) as + +$$ +\left(\mathrm {P} 2\right) \max _ {\mathbf {x} ^ {\mathrm {p}}} | \boldsymbol {\eta} ^ {H} \mathbf {v} \left(\mathbf {x} ^ {\mathrm {p}}\right) | ^ {2} \tag {12a} +$$ + +$$ +\text {s.t.} \quad | \boldsymbol {\eta} ^ {H} \mathbf {w} \left(\mathbf {x} ^ {\mathrm {p}}\right) | ^ {2} \geq \gamma_ {\mathrm {QoS}} \sigma^ {2}, \tag {12b} +$$ + +$$ +(11 \mathrm {c}), \tag {12c} +$$ + +where $\gamma_{\mathrm{QoS}} = \frac{2^{R_{\mathrm{QoS}}} - 1}{P_{\mathrm{T}}}$ follows from rearranging the rate constraint (11b) with the rate expression (9). + +In order to deal with the intractable objective and constraints, we consider a penalty-based two-layer framework. To elaborate, we introduce auxiliary variables $\tilde{\mathbf{w}}$ and $\tilde{\mathbf{v}}$ to replace $\mathbf{w}(\mathbf{x}^{\mathrm{p}})$ and $\mathbf{v}(\mathbf{x}^{\mathrm{p}})$ , respectively.
Thus, we have the equality constraints $\tilde{\mathbf{w}} = \mathbf{w}(\mathbf{x}^{\mathrm{p}})$ and $\tilde{\mathbf{v}} = \mathbf{v}(\mathbf{x}^{\mathrm{p}})$ . By relocating the equality constraint to the objective function and serving as a + +penalty term, the problem (P2) can be equivalently rewritten as + +$$ +\left(\mathrm {P} 3\right) \max _ {\mathbf {x} ^ {\mathrm {p}}, \tilde {\mathbf {w}}, \tilde {\mathbf {v}}} | \boldsymbol {\eta} ^ {H} \tilde {\mathbf {v}} | ^ {2} - \frac {1}{2 \varrho} \chi_ {1} \left(\mathbf {x} ^ {\mathrm {p}}, \tilde {\mathbf {w}}, \tilde {\mathbf {v}}\right) \tag {13a} +$$ + +$$ +\text {s . t .} \quad | \boldsymbol {\eta} ^ {H} \tilde {\mathbf {w}} | ^ {2} \geq \gamma_ {\mathrm {Q o S}} \sigma^ {2}, \tag {13b} +$$ + +$$ +\left| \tilde {\mathbf {w}} _ {[ n ]} \right| ^ {2} \leq \frac {1}{N r _ {\operatorname* {m i n} , \mathrm {c}} ^ {2}}, \tag {13c} +$$ + +$$ +\left| \tilde {\mathbf {v}} _ {[ n ]} \right| ^ {2} \leq \frac {1}{N r _ {\operatorname* {m i n} , \mathrm {s}} ^ {2}}, \tag {13d} +$$ + +$$ +(1 1 \mathrm {c}), \tag {13e} +$$ + +where $\chi_{1}(\mathbf{x}^{\mathrm{p}},\tilde{\mathbf{w}},\tilde{\mathbf{v}}) = \| \tilde{\mathbf{w}} -\mathbf{w}(\mathbf{x}^{\mathrm{p}})\| +\| \tilde{\mathbf{v}} -\mathbf{v}(\mathbf{x}^{\mathrm{p}})\|$ and $\varrho$ denotes the scaling factor of the penalty terms. Note that to avoid the infinite objective value, we introduce constraints (13c) and (13d), where $r_{\min ,\mathrm{c}} = \sqrt{(r_{\mathrm{c}}\sin\varphi_{\mathrm{c}})^2 + d^2}$ and $r_{\min ,\mathrm{s}} = \sqrt{(r_{\mathrm{s}}\sin\varphi_{\mathrm{s}})^2 + d^2}$ denote the lower bounds of the distances from an arbitrary pinching antenna to the communication user and target. The problem (P3) is equivalent to the problem (P1) as constraints (13c) and (13d) can be obtained from the (11c), which restricts pinching beamforming $\{\tilde{\mathbf{w}},\tilde{\mathbf{v}}\}$ to the feasible region. 
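The penalty construction in (P3) can be sketched numerically. In this hedged Python illustration (our own; `w_of_x` and `v_of_x` are hypothetical callables standing in for the mappings $\mathbf{w}(\mathbf{x}^{\mathrm{p}})$ and $\mathbf{v}(\mathbf{x}^{\mathrm{p}})$):

```python
import numpy as np

def penalty_term(x_p, w_tilde, v_tilde, w_of_x, v_of_x):
    """chi_1(x^p, w~, v~) = ||w~ - w(x^p)|| + ||v~ - v(x^p)||."""
    return (np.linalg.norm(w_tilde - w_of_x(x_p))
            + np.linalg.norm(v_tilde - v_of_x(x_p)))

def penalized_objective(x_p, w_tilde, v_tilde, w_of_x, v_of_x, eta_vec, rho):
    """Objective (13a): |eta^H v~|^2 - chi_1 / (2*rho). Shrinking rho toward
    zero makes the penalty dominate, forcing the auxiliary variables back
    onto the feasible mappings w(x^p) and v(x^p)."""
    fit = np.abs(np.vdot(eta_vec, v_tilde)) ** 2
    return fit - penalty_term(x_p, w_tilde, v_tilde, w_of_x, v_of_x) / (2.0 * rho)
```

When the auxiliary variables coincide with the true mappings, $\chi_1 = 0$ and the objective reduces to the illumination-power term alone.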
To address the quadratic objective and constraints, we apply the SDR technique to rewrite the problem (P3) as

$$
(\mathrm{P}4) \quad \max_{\mathbf{x}^{\mathrm{p}}, \tilde{\mathbf{W}}, \tilde{\mathbf{V}}} \operatorname{Tr}\left(\boldsymbol{\eta} \boldsymbol{\eta}^{H} \tilde{\mathbf{V}}\right) - \frac{1}{2\varrho} \chi_{2}\left(\mathbf{x}^{\mathrm{p}}, \tilde{\mathbf{W}}, \tilde{\mathbf{V}}\right) \tag{14a}
$$

$$
\text{s.t.} \quad \tilde{\mathbf{W}}_{[n,n]} \leq \frac{1}{N r_{\min,\mathrm{c}}^{2}}, \tag{14b}
$$

$$
\tilde{\mathbf{V}}_{[n,n]} \leq \frac{1}{N r_{\min,\mathrm{s}}^{2}}, \tag{14c}
$$

$$
\operatorname{Tr}\left(\boldsymbol{\eta} \boldsymbol{\eta}^{H} \tilde{\mathbf{W}}\right) \geq \gamma_{\mathrm{QoS}} \sigma^{2}, \tag{14d}
$$

$$
\operatorname{rank}(\tilde{\mathbf{W}}) = 1, \quad \operatorname{rank}(\tilde{\mathbf{V}}) = 1, \tag{14e}
$$

$$
\tilde{\mathbf{W}} \succeq \mathbf{0}, \quad \tilde{\mathbf{V}} \succeq \mathbf{0}, \tag{14f}
$$

$$
(11\mathrm{c}), \tag{14g}
$$

where $\mathbf{W}(\mathbf{x}^{\mathrm{p}}) = \mathbf{w}(\mathbf{x}^{\mathrm{p}})\mathbf{w}^{H}(\mathbf{x}^{\mathrm{p}})$, $\tilde{\mathbf{W}} = \tilde{\mathbf{w}}\tilde{\mathbf{w}}^{H}$, $\mathbf{V}(\mathbf{x}^{\mathrm{p}}) = \mathbf{v}(\mathbf{x}^{\mathrm{p}})\mathbf{v}^{H}(\mathbf{x}^{\mathrm{p}})$, $\tilde{\mathbf{V}} = \tilde{\mathbf{v}}\tilde{\mathbf{v}}^{H}$, and $\chi_{2}(\mathbf{x}^{\mathrm{p}},\tilde{\mathbf{W}},\tilde{\mathbf{V}}) = \| \tilde{\mathbf{W}} - \mathbf{W}(\mathbf{x}^{\mathrm{p}})\|_{F} + \| \tilde{\mathbf{V}} - \mathbf{V}(\mathbf{x}^{\mathrm{p}})\|_{F}$. To solve the problem (P4), we propose a penalty-based AO algorithm, which alternately optimizes $\{\tilde{\mathbf{W}},\tilde{\mathbf{V}}\}$ and $\{\mathbf{x}^{\mathrm{p}}\}$ in the inner layer and updates $\varrho$ in the outer layer.
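The SDR lifting replaces each beamforming vector by its rank-one outer product, turning the quadratic forms into linear trace terms. A small NumPy sketch (random placeholder vectors) verifying the identity $|\boldsymbol{\eta}^H\mathbf{w}|^2 = \operatorname{Tr}(\boldsymbol{\eta}\boldsymbol{\eta}^H\mathbf{W})$ behind (14a) and (14d):

```python
import numpy as np

def lift(w):
    """SDR lifting: W = w w^H, a PSD rank-one matrix."""
    w = np.asarray(w, dtype=complex).reshape(-1, 1)
    return w @ w.conj().T

rng = np.random.default_rng(0)
eta = rng.normal(size=4) + 1j * rng.normal(size=4)  # placeholder channel vector
w = rng.normal(size=4) + 1j * rng.normal(size=4)    # placeholder beamformer
W = lift(w)

quad = abs(np.vdot(eta, w)) ** 2                       # |eta^H w|^2
lifted = np.trace(np.outer(eta, eta.conj()) @ W).real  # Tr(eta eta^H W)
```

Dropping the non-convex rank-one constraint (14e) is what makes the relaxed problem solvable by standard SDP tools; the DC relaxation below restores rank one.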
1) Inner layer iteration—subproblem with respect to $\{\tilde{\mathbf{W}},\tilde{\mathbf{V}}\}$: With the fixed $\{\mathbf{x}^{\mathrm{p}}\}$, the problem (P4) is reduced to

$$
(\mathrm{P}5) \quad \max_{\tilde{\mathbf{W}}, \tilde{\mathbf{V}}} \operatorname{Tr}\left(\boldsymbol{\eta} \boldsymbol{\eta}^{H} \tilde{\mathbf{V}}\right) - \frac{1}{2\varrho} \chi_{2}\left(\mathbf{x}^{\mathrm{p}}, \tilde{\mathbf{W}}, \tilde{\mathbf{V}}\right) \tag{15a}
$$

$$
\text{s.t.} \quad (14\mathrm{b})-(14\mathrm{f}). \tag{15b}
$$

To handle the rank-one constraint (14e), we introduce non-negative auxiliary variables $\{\varpi_1,\varpi_2\}$ and employ the difference-of-convex (DC) relaxation method [9] to rewrite it as

$$
\left\{ \begin{array}{l} \Re\left(\operatorname{Tr}\left(\tilde{\mathbf{W}}^{H}\left(\mathbf{I} - \tilde{\mathbf{w}}_{\max} \tilde{\mathbf{w}}_{\max}^{H}\right)\right)\right) \leq \varpi_{1}, \\ \Re\left(\operatorname{Tr}\left(\tilde{\mathbf{V}}^{H}\left(\mathbf{I} - \tilde{\mathbf{v}}_{\max} \tilde{\mathbf{v}}_{\max}^{H}\right)\right)\right) \leq \varpi_{2}, \end{array} \right. \tag{16}
$$

# Algorithm 1 Iterative algorithm for rank-one solution.

1: Initialize $\tilde{\mathbf{v}}_{\mathrm{max}}$ and $\tilde{\mathbf{w}}_{\mathrm{max}}$. Set a convergence accuracy $\epsilon_{1}$.
2: repeat
3: update $\{\tilde{\mathbf{W}},\tilde{\mathbf{V}},\varpi_i\}$ by solving the problem (P6).
4: update the eigenvectors $\{\tilde{\mathbf{w}}_{\mathrm{max}},\tilde{\mathbf{v}}_{\mathrm{max}}\}$.
5: update $\varrho_{i} = \varrho_{i}\bar{c}_{1}$ $(0 < \bar{c}_1 < 1)$.
6: until $\sum_{i=1}^{2} \varpi_i$ falls below the threshold $\epsilon_1$.

where $\tilde{\mathbf{w}}_{\mathrm{max}}$ and $\tilde{\mathbf{v}}_{\mathrm{max}}$ represent the eigenvectors corresponding to the maximum eigenvalues of $\tilde{\mathbf{W}}$ and $\tilde{\mathbf{V}}$, respectively.
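The DC term in (16) has a simple interpretation: for a PSD matrix evaluated at its own principal eigenvector, $\Re(\operatorname{Tr}(\tilde{\mathbf{W}}^H(\mathbf{I} - \tilde{\mathbf{w}}_{\max}\tilde{\mathbf{w}}_{\max}^H))) = \operatorname{Tr}(\tilde{\mathbf{W}}) - \lambda_{\max}(\tilde{\mathbf{W}})$, which is non-negative and equals zero if and only if the matrix is rank-one. A NumPy sketch with placeholder matrices:

```python
import numpy as np

def dc_gap(W):
    """DC rank-one measure from (16): Tr(W) - w_max^H W w_max, where w_max
    is the principal eigenvector of W. Zero iff the PSD matrix W has rank one."""
    _, vecs = np.linalg.eigh(W)
    w_max = vecs[:, -1:]  # eigenvector of the largest eigenvalue
    return np.trace(W).real - (w_max.conj().T @ W @ w_max).real.item()

w = np.array([1.0 + 1j, 2.0 - 1j, 0.5j]).reshape(-1, 1)
W1 = w @ w.conj().T      # rank-one: gap is zero
W2 = W1 + np.eye(3)      # full rank: gap is Tr(W2) - lambda_max = 2
```

Driving $\varpi_1 + \varpi_2$ below $\epsilon_1$ in Algorithm 1 therefore forces both lifted matrices toward rank one.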
As a result, the problem (P5) can be transformed into

$$
(\mathrm{P}6) \quad \max_{\tilde{\mathbf{W}}, \tilde{\mathbf{V}}, \varpi_{i}} \operatorname{Tr}\left(\boldsymbol{\eta} \boldsymbol{\eta}^{H} \tilde{\mathbf{V}}\right) - \frac{1}{2\varrho} \chi_{2}\left(\mathbf{x}^{\mathrm{p}}, \tilde{\mathbf{W}}, \tilde{\mathbf{V}}\right) - \sum_{i = 1}^{2} \frac{1}{2\varrho_{i}} \varpi_{i} \tag{17a}
$$

$$
\text{s.t.} \quad \varpi_{i} \geq 0, \ i \in \{1, 2\}, \tag{17b}
$$

$$
(14\mathrm{b})-(14\mathrm{f}), (16), \tag{17c}
$$

where $\varrho_{i}$ denotes the scaling factor of $\varpi_{i}$. The problem (P6) is convex and can be directly solved. Thus, the rank-one solution $\{\tilde{\mathbf{W}},\tilde{\mathbf{V}}\}$ can be obtained by carrying out Algorithm 1.

2) Inner layer iteration—subproblem with respect to $\{\mathbf{x}^{\mathrm{p}}\}$: Note that the equality constraints $\tilde{\mathbf{W}} = \mathbf{W}(\mathbf{x}^{\mathrm{p}})$ and $\tilde{\mathbf{V}} = \mathbf{V}(\mathbf{x}^{\mathrm{p}})$ are equivalent to $\tilde{\mathbf{w}} = \mathbf{w}(\mathbf{x}^{\mathrm{p}})$ and $\tilde{\mathbf{v}} = \mathbf{v}(\mathbf{x}^{\mathrm{p}})$. As a result, with the fixed $\{\tilde{\mathbf{w}},\tilde{\mathbf{v}}\}$, the subproblem with respect to $\{\mathbf{x}^{\mathrm{p}}\}$ can be written as

$$
(\mathrm{P}7) \quad \min_{\mathbf{x}^{\mathrm{p}}} \| \tilde{\mathbf{w}} - \mathbf{w}\left(\mathbf{x}^{\mathrm{p}}\right) \| + \| \tilde{\mathbf{v}} - \mathbf{v}\left(\mathbf{x}^{\mathrm{p}}\right) \| \tag{18a}
$$

$$
\text{s.t.} \quad (11\mathrm{c}). \tag{18b}
$$

Notice that $x_{n}^{\mathrm{p}}$ and $x_{m}^{\mathrm{p}}$ ($n \neq m$) are separated in the objective function but coupled in the constraint (11c), which motivates us to adopt the element-wise optimization framework.
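Since each per-antenna subproblem is one-dimensional, a grid search over the feasible interval suffices. The sketch below uses illustrative placeholder models for the free-space distance $r_n(\cdot)$ and the in-waveguide phase $\theta_n$ (the paper's exact definitions are in the omitted system model), so it demonstrates the search procedure rather than the paper's precise channel:

```python
import numpy as np

def response(x, r_of_x, lam, n_ant, eta_eff):
    """Single-antenna response at position x. Assumed placeholder models:
    in-waveguide phase theta_n = 2*pi*eta_eff*x/lam and free-space term
    e^{-j 2*pi r/lam} / (sqrt(N) r), mirroring the structure of (19a)."""
    theta = 2 * np.pi * eta_eff * x / lam
    r = r_of_x(x)
    return np.exp(-1j * (2 * np.pi / lam * r + theta)) / (np.sqrt(n_ant) * r)

def search_xn(w_entry, v_entry, r_c_of_x, r_s_of_x, lo, hi,
              lam, n_ant, eta_eff, grid=2048):
    """One-dimensional grid search for a single antenna position:
    minimize the sum of the two residual magnitudes, as in (P8)."""
    xs = np.linspace(lo, hi, grid)
    obj = [abs(w_entry - response(x, r_c_of_x, lam, n_ant, eta_eff))
           + abs(v_entry - response(x, r_s_of_x, lam, n_ant, eta_eff))
           for x in xs]
    return xs[int(np.argmin(obj))]
```

In practice the interval endpoints come from the spacing and aperture constraints on $x_n^{\mathrm{p}}$, and the grid resolution corresponds to the quantization used in the one-dimensional exhaustive search.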
Therefore, with the fixed $\{x_{1}^{\mathrm{p}}, \dots, x_{n-1}^{\mathrm{p}}, x_{n+1}^{\mathrm{p}}, \dots, x_{N}^{\mathrm{p}}\}$, the subproblem with respect to $x_{n}^{\mathrm{p}}$ is given by

$$
(\mathrm{P}8) \quad \min_{x_{n}^{\mathrm{p}}} \left| \tilde{\mathbf{w}}_{[n]} - \frac{e^{-\jmath\left(\frac{2\pi}{\lambda} r_{n}^{\mathrm{c}}\left(r_{\mathrm{c}}, \varphi_{\mathrm{c}}\right) + \theta_{n}\right)}}{\sqrt{N} r_{n}^{\mathrm{c}}\left(r_{\mathrm{c}}, \varphi_{\mathrm{c}}\right)} \right| + \left| \tilde{\mathbf{v}}_{[n]} - \frac{e^{-\jmath\left(\frac{2\pi}{\lambda} r_{n}^{\mathrm{s}}\left(r_{\mathrm{s}}, \varphi_{\mathrm{s}}\right) + \theta_{n}\right)}}{\sqrt{N} r_{n}^{\mathrm{s}}\left(r_{\mathrm{s}}, \varphi_{\mathrm{s}}\right)} \right| \tag{19a}
$$

$$
\text{s.t.} \quad x_{n-1}^{\mathrm{p}} + \Delta x \leq x_{n}^{\mathrm{p}} \leq x_{n+1}^{\mathrm{p}} - \Delta x, \tag{19b}
$$

$$
-\frac{L}{2} \leq x_{n}^{\mathrm{p}} \leq \frac{L}{2}. \tag{19c}
$$

Then, the optimal $x_{n}^{\mathrm{p}}$ can be obtained by a low-complexity one-dimensional search.

3) Outer layer iteration: In the outer layer, we initialize a large $\varrho$ and update $\varrho$ at each outer iteration by

$$
\varrho = \varrho \bar{c}_{2}, \tag{20}
$$

where $0 < \bar{c}_2 < 1$ is the iteration coefficient of the penalty terms.

The proposed penalty-based AO algorithm is summarized in Algorithm 2, which is assured to converge at least to a

# Algorithm 2 Penalty-based AO algorithm.

1: Parameter initialization. Set the convergence accuracies $\epsilon_{2}$ and $\epsilon_{3}$.
2: repeat
3: repeat
4: update $\{\tilde{\mathbf{w}},\tilde{\mathbf{v}}\}$ by carrying out Algorithm 1.
5: update $\mathbf{x}^{\mathrm{p}}$ via the element-wise optimization.
6: until the objective value converges with an accuracy of $\epsilon_{2}$.
7: update $\varrho = \varrho \bar{c}_2$ $(0 < \bar{c}_2 < 1)$.
8: until $\| \tilde{\mathbf{W}} - \mathbf{W}(\mathbf{x}^{\mathrm{p}})\|_{F} + \| \tilde{\mathbf{V}} - \mathbf{V}(\mathbf{x}^{\mathrm{p}})\|_{F} \leq \epsilon_{3}$.

![](images/61356ba48d70041d60a1887554e812cf3b19b73e0c0702746c81209c9ff71552.jpg)
Fig. 2. The illumination power versus the transmit power at the BS.

stationary point solution. The computational complexity of Algorithm 2 mainly depends on solving the SDP problem (P6) and on the one-dimensional exhaustive search. It is given by $\mathcal{O}\Big(\log (\frac{1}{\epsilon_3})\log (\frac{1}{\epsilon_2})\big[\log (\frac{1}{\epsilon_1})N^{3.5} + N\bar{Q}\big]\Big)$ [10], where $\bar{Q}$ represents the number of quantization bits used in the one-dimensional exhaustive search.

# IV. NUMERICAL RESULTS

This section evaluates the performance of the proposed PASS-ISAC framework. A 3D network topology is considered, where the dielectric waveguide is located in the x-o-z plane with a height of $d$ and a length of $50\mathrm{m}$. The communication user and the sensing target are located in a square region centered at the origin in the x-o-y plane. Unless otherwise specified, the default simulation parameters are set as: $\sigma^2 = -105$ dBm, $f = 28$ GHz, $d = 10\mathrm{m}$, $r_{\mathrm{s}} = 30\mathrm{m}$, $\varphi_{\mathrm{s}} = \frac{\pi}{3}$, $r_{\mathrm{c}} = 15\sqrt{2}\mathrm{m}$, $\varphi_{\mathrm{c}} = \frac{5\pi}{4}$, $N = 16$, $\eta_{\mathrm{eff}} = 1.4$, $R_{\mathrm{QoS}} = 10$ bps/Hz, $\epsilon_1 = \epsilon_2 = \epsilon_3 = \epsilon_4 = 10^{-3}$, and $\alpha_{\mathrm{s}} = 1$. The other network parameters are given in the captions of the figures.
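Plugging the default parameters above into the distance lower bounds used in constraints (13c) and (13d) gives a quick sanity check of the feasible amplitude range:

```python
import numpy as np

# Default simulation parameters from the text
d = 10.0                                               # waveguide height (m)
r_c, phi_c = 15.0 * np.sqrt(2.0), 5.0 * np.pi / 4.0    # communication user
r_s, phi_s = 30.0, np.pi / 3.0                         # sensing target

# Lower bounds on the antenna-to-receiver distances, as defined after (13d)
r_min_c = np.sqrt((r_c * np.sin(phi_c)) ** 2 + d ** 2)  # = sqrt(325) ≈ 18.03 m
r_min_s = np.sqrt((r_s * np.sin(phi_s)) ** 2 + d ** 2)  # = sqrt(775) ≈ 27.84 m
```

With $N = 16$, constraints (13c) and (13d) therefore cap each squared entry magnitude of $\tilde{\mathbf{w}}$ and $\tilde{\mathbf{v}}$ at $1/(16\, r_{\min}^2)$ for the respective receiver.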
To validate the performance of the proposed scheme, the following baseline schemes are considered:

- Conventional antenna: In this scheme, we deploy a conventional $N$-antenna uniform linear array (ULA) at the BS as the transmit array, with an antenna spacing of $\frac{\lambda}{2}$. For a fair comparison, the transmit antennas are connected to one RF chain and each antenna is equipped with an analog phase shifter, whose phase can be varied from 0 to $2\pi$.

![](images/4eb38f1e6d1bd663e4b738b0db05d681352bb5024fd01a86624fecd3fbd0f774.jpg)
Fig. 3. The illumination power versus the number of activated pinching antennas, where $P_{\mathrm{T}} = 70$ dBm.

- Fixed pinching antenna: In this scheme, $N$ pinching antennas are uniformly spread along the dielectric waveguide, where the in-waveguide and free-space channels are determined by the fixed positions of the pinching antennas.
- Semi-continuous activation: In this scheme, $N$ pinching antennas are uniformly distributed along the dielectric waveguide at predetermined positions that cannot be changed. However, each pinching antenna is allowed to be adjusted within a small-scale range to alter the phase-shift response of the pinching beamforming, which has a negligible impact on the large-scale path loss.

In Fig. 2, we can observe that the pinching antenna scheme achieves the highest illumination power among all schemes. This result is expected because, compared with the baseline schemes, pinching antennas can be flexibly repositioned to reduce the large-scale path loss between the pinching antennas and the receiving ends. Thus, more spatial degrees-of-freedom (DoFs) are provided to enhance the communication and sensing performance.
On the other hand, although the semi-continuous activation scheme cannot reduce the path loss by adjusting the antenna positions over a wide range, it outperforms the conventional antenna scheme because its pinching antennas are spread over the entire communication/sensing area and are therefore, on average, closer to the receiving ends.

Fig. 3 depicts the relationship between the illumination power and the number of activated pinching antennas, with a comparison between the equal and proportional power allocation models. For a fair comparison, $\alpha_{\mathrm{s}} = 0.9$ is set for both power allocation models. As can be observed, the illumination power increases with the number of pinching antennas, because more pinching antennas improve the beam resolution and reduce the power leakage into irrelevant regions, thereby raising the illumination power at the target. It is also observed that the proportional power allocation model is only slightly inferior to the equal power allocation model, which verifies the effectiveness of pinching antennas in reconfiguring signal propagation under the proportional power allocation model.

Fig. 4 investigates the impact of the rotation angle of the dielectric waveguide on the illumination power at the target.

![](images/2b04b8b5996b5c98797a156b763cfc8aee51bdb00db00b59b8d2b415c188a3e0.jpg)
Fig. 4. The illumination power versus the rotation angle of the dielectric waveguide, where $P_{\mathrm{T}} = 70$ dBm.

Here, we assume the dielectric waveguide can be rotated clockwise parallel to the x-o-y plane, where the rotation angle is defined as the angle between the dielectric waveguide and the x-axis. From Fig. 4, it is shown that the illumination power first increases and then decreases as the rotation angle grows. This is because when the rotation angle is $60^{\circ}$, the target is located directly underneath the dielectric waveguide and thus receives the maximal illumination power.
As the rotation angle rises further, the distance between the target and the pinching antennas becomes larger, so the illumination power gradually decreases. In addition, raising the height of the dielectric waveguide increases the average distance from the pinching antennas to the user and the target; thus, the illumination power decreases as $d$ increases.

# V. CONCLUSION

A novel PASS-ISAC framework has been proposed, where the pinching beamforming was exploited to realize simultaneous C&S transmission. A separated ISAC design was proposed for the two-waveguide PASS. A penalty-based AO algorithm was proposed to maximize the illumination power at the target while guaranteeing the QoS requirement of the communication user. Simulation results were provided to verify the superiority of the proposed PASS-ISAC framework over the baseline schemes.

# REFERENCES

[1] L. Zhu, W. Ma, and R. Zhang, "Movable antennas for wireless communication: Opportunities and challenges," IEEE Commun. Mag., vol. 62, no. 6, pp. 114-120, Jun. 2024.
[2] W. K. New, K.-K. Wong et al., "A tutorial on fluid antenna system for 6G networks: Encompassing communication theory, optimization methods and hardware designs," IEEE Commun. Surv. Tut., pp. 1-1, 2024.
[3] A. Fukuda, H. Yamamoto, H. Okazaki, Y. Suzuki, and K. Kawai, "Pinching antenna: Using a dielectric waveguide as an antenna," NTT DOCOMO Technical J., vol. 23, no. 3, pp. 5-12, Jan. 2022.
[4] Z. Ding, R. Schober, and H. V. Poor, "Flexible-antenna systems: A pinching-antenna perspective," IEEE Trans. Commun., pp. 1-1, 2025.
[5] F. Liu, Y. Cui et al., "Integrated sensing and communications: Toward dual-functional wireless networks for 6G and beyond," IEEE J. Sel. Areas Commun., vol. 40, no. 6, pp. 1728-1767, Jun. 2022.
[6] Y. Liu, Z. Wang, X. Mu, C. Ouyang, X. Xu, and Z. Ding, "Pinching antenna systems (PASS): Architecture designs, opportunities, and outlook," arXiv preprint arXiv:2501.18409, 2025.
[7] Z. Wang, C.
Ouyang, X. Mu, Y. Liu, and Z. Ding, "Modeling and beamforming optimization for pinching-antenna systems," arXiv preprint arXiv:2502.05917, 2025.

[8] W. Hao, H. Shi et al., "Joint beamforming design for active RIS-aided THz ISAC systems with delay alignment modulation," IEEE Wireless Commun. Lett., vol. 12, no. 10, pp. 1816-1820, Oct. 2023.
[9] T. Jiang and Y. Shi, "Over-the-air computation via intelligent reflecting surfaces," in Proc. IEEE Global Commun. Conf. (GLOBECOM), Waikoloa, HI, USA, Dec. 2019, pp. 1-6.
[10] Z.-Q. Luo, W.-K. Ma, A. M.-C. So, Y. Ye, and S. Zhang, "Semidefinite relaxation of quadratic optimization problems," IEEE Signal Process. Mag., vol. 27, no. 3, pp. 20-34, May 2010.
a/data/2025/2504_07xxx/2504.07709/images/c3c880288547887b44931e6ff294f1da6e7e005bf6d870ca142166b8efed9c73.jpg b/data/2025/2504_07xxx/2504.07709/images/c3c880288547887b44931e6ff294f1da6e7e005bf6d870ca142166b8efed9c73.jpg new file mode 100644 index 0000000000000000000000000000000000000000..7b6b0c314e5f74a2eca2b09edc171f6f690a4332 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07709/images/c3c880288547887b44931e6ff294f1da6e7e005bf6d870ca142166b8efed9c73.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68c58721bd9148910dced56db310f572fb2ee2b113c683e68ba0be3ebee696bc +size 3789 diff --git a/data/2025/2504_07xxx/2504.07709/images/c58017bbc3c1e86e1594943c6de4c1d24909a4bfc5c249f9f2073929ee82e4df.jpg b/data/2025/2504_07xxx/2504.07709/images/c58017bbc3c1e86e1594943c6de4c1d24909a4bfc5c249f9f2073929ee82e4df.jpg new file mode 100644 index 0000000000000000000000000000000000000000..c77bf190a1dcb583eb85b4f9bfdd472fcb4a4c3e --- /dev/null +++ b/data/2025/2504_07xxx/2504.07709/images/c58017bbc3c1e86e1594943c6de4c1d24909a4bfc5c249f9f2073929ee82e4df.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:96d375c02fd67018cc5f1663005dea4f5e70d9924773290920b880f02bae2794 +size 9740 diff --git a/data/2025/2504_07xxx/2504.07709/images/c5bffdf48a79ca143499849a2ffacee76779c98c1520e9501f414ac403729b4d.jpg b/data/2025/2504_07xxx/2504.07709/images/c5bffdf48a79ca143499849a2ffacee76779c98c1520e9501f414ac403729b4d.jpg new file mode 100644 index 0000000000000000000000000000000000000000..d28600dad1b49d525acb25c138ebb518034310db --- /dev/null +++ b/data/2025/2504_07xxx/2504.07709/images/c5bffdf48a79ca143499849a2ffacee76779c98c1520e9501f414ac403729b4d.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:43890b529641d38458b12bf1120aa62077cd4ea6c6e4db273a5a145ddd96466c +size 2147 diff --git a/data/2025/2504_07xxx/2504.07709/images/c77e00ee0a50d7f0b237bd893660ffac2aa4b62821db415360a305857382c457.jpg 
b/data/2025/2504_07xxx/2504.07709/images/c77e00ee0a50d7f0b237bd893660ffac2aa4b62821db415360a305857382c457.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b353b8a6510518f009eb0d2c1aefdfb9e8c09324 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07709/images/c77e00ee0a50d7f0b237bd893660ffac2aa4b62821db415360a305857382c457.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:82bfebda71c8a41793c2f0141670c3c7b2b28e984cf2d578c558a84c526df97d +size 9946 diff --git a/data/2025/2504_07xxx/2504.07709/images/cbc048cc53c8e7d6cdb46ad0611b88179b4ba39d96f7dc4d01a857ace5310960.jpg b/data/2025/2504_07xxx/2504.07709/images/cbc048cc53c8e7d6cdb46ad0611b88179b4ba39d96f7dc4d01a857ace5310960.jpg new file mode 100644 index 0000000000000000000000000000000000000000..c1c52f0de594d06da09facb476f236fea4f64fc5 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07709/images/cbc048cc53c8e7d6cdb46ad0611b88179b4ba39d96f7dc4d01a857ace5310960.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:05f110783985c8d9edec332970865964444ebe957cb42ed7622dd065660af08a +size 2897 diff --git a/data/2025/2504_07xxx/2504.07709/images/d5fac8eba75225050b351775dd0085d4c4fcfd1b8d90f320f836243effb25409.jpg b/data/2025/2504_07xxx/2504.07709/images/d5fac8eba75225050b351775dd0085d4c4fcfd1b8d90f320f836243effb25409.jpg new file mode 100644 index 0000000000000000000000000000000000000000..8ee6f1c6fd7d13c8efb77bfaf4f18f56272ba8fb --- /dev/null +++ b/data/2025/2504_07xxx/2504.07709/images/d5fac8eba75225050b351775dd0085d4c4fcfd1b8d90f320f836243effb25409.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:066ac2526cdd3ebf96fa8e7624e56cd8d01b2ea70a7f158e215cabd71c9ec9f9 +size 4099 diff --git a/data/2025/2504_07xxx/2504.07709/images/d6cf670b12c569359e13553e7b36196518d22fc61de53e9ba41e12e70daae207.jpg b/data/2025/2504_07xxx/2504.07709/images/d6cf670b12c569359e13553e7b36196518d22fc61de53e9ba41e12e70daae207.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..d2e1457b9eda673c47f6e8f8ef31c389bde046ab --- /dev/null +++ b/data/2025/2504_07xxx/2504.07709/images/d6cf670b12c569359e13553e7b36196518d22fc61de53e9ba41e12e70daae207.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4058ff9b01c2497ae98d912a0f47ee13f2c830d623e060cf2be22f21df475cbd +size 3649 diff --git a/data/2025/2504_07xxx/2504.07709/images/dd72341bab1fc29e713d6029a7f726ce24cea94f65fa5ee7878e33299a307173.jpg b/data/2025/2504_07xxx/2504.07709/images/dd72341bab1fc29e713d6029a7f726ce24cea94f65fa5ee7878e33299a307173.jpg new file mode 100644 index 0000000000000000000000000000000000000000..0e247173c182a87c4910a7d82e77fd1bd85585bb --- /dev/null +++ b/data/2025/2504_07xxx/2504.07709/images/dd72341bab1fc29e713d6029a7f726ce24cea94f65fa5ee7878e33299a307173.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a11dd968f6b6060cb34e3cacc081160f75694fe982bc4ae6ed05c2786bcd9a3a +size 3875 diff --git a/data/2025/2504_07xxx/2504.07709/images/e7864c38cead7b15306fff924dff4dcaa5c2ba98e2c331002c6cae33a84eafec.jpg b/data/2025/2504_07xxx/2504.07709/images/e7864c38cead7b15306fff924dff4dcaa5c2ba98e2c331002c6cae33a84eafec.jpg new file mode 100644 index 0000000000000000000000000000000000000000..cb7e91bbcac3673dedb62c34577c1e97fc4511f1 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07709/images/e7864c38cead7b15306fff924dff4dcaa5c2ba98e2c331002c6cae33a84eafec.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:29a74fa0a0b4792e1d11a4a1fac023690f7af9be4d29e1b6447bd170cb7f9e57 +size 2894 diff --git a/data/2025/2504_07xxx/2504.07709/images/fc88d6bd051c2af7cce7c4f4c8ed0497ab6d465a2608c8069bec72ec5fc0122b.jpg b/data/2025/2504_07xxx/2504.07709/images/fc88d6bd051c2af7cce7c4f4c8ed0497ab6d465a2608c8069bec72ec5fc0122b.jpg new file mode 100644 index 0000000000000000000000000000000000000000..7e066b50c54b97974446b7bf4ea1e9f8ab881c18 --- /dev/null +++ 
b/data/2025/2504_07xxx/2504.07709/images/fc88d6bd051c2af7cce7c4f4c8ed0497ab6d465a2608c8069bec72ec5fc0122b.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1da098fc230b0b1b76f8af674578986e4a0348accce7e4f0d0990ed2e6a47048 +size 2341 diff --git a/data/2025/2504_07xxx/2504.07709/layout.json b/data/2025/2504_07xxx/2504.07709/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..deefb4960232328a1a463c0231e1ac6b3e5a634d --- /dev/null +++ b/data/2025/2504_07xxx/2504.07709/layout.json @@ -0,0 +1,7838 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 86, + 55, + 523, + 95 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 55, + 523, + 95 + ], + "spans": [ + { + "bbox": [ + 86, + 55, + 523, + 95 + ], + "type": "text", + "content": "Integrated Sensing and Communications for Pinching-Antenna Systems (PASS)" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 101, + 488, + 114 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 101, + 488, + 114 + ], + "spans": [ + { + "bbox": [ + 105, + 101, + 488, + 114 + ], + "type": "text", + "content": "Zheng Zhang, Zhaolin Wang, Xidong Mu Bingtao He, Jian Chen, and Yuanwei Liu" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 45, + 130, + 301, + 261 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 130, + 301, + 261 + ], + "spans": [ + { + "bbox": [ + 45, + 130, + 301, + 261 + ], + "type": "text", + "content": "Abstract—An integrated sensing and communication (ISAC) design for pinching antenna systems (PASS) is proposed, where the pinching antennas are deployed to establish reliable line-of-sight communication and sensing links. More particularly, a separated ISAC design is proposed for the two-waveguide PASS, where one waveguide is used to emit the information-bearing signals for ISAC transmission while the other waveguide is used to receive the reflected echo signals. 
Based on this framework, a penalty-based alternating optimization algorithm is proposed to maximize the illumination power as well as ensure the communication quality-of-service requirement. Numerical results demonstrate that the proposed PASS-ISAC scheme outperforms the conventional antenna scheme." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 46, + 266, + 301, + 287 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 266, + 301, + 287 + ], + "spans": [ + { + "bbox": [ + 46, + 266, + 301, + 287 + ], + "type": "text", + "content": "Index Terms—Beamforming design, integrated sensing and communication, pinching antenna systems." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 308, + 215, + 319 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 308, + 215, + 319 + ], + "spans": [ + { + "bbox": [ + 132, + 308, + 215, + 319 + ], + "type": "text", + "content": "I. INTRODUCTION" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 45, + 325, + 301, + 601 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 325, + 301, + 601 + ], + "spans": [ + { + "bbox": [ + 45, + 325, + 301, + 601 + ], + "type": "text", + "content": "Fuelled by the burgeoning demands for massive data transmission and pervasive network coverage, flexible antennas have emerged as a promising technique for sixth-generation (6G) cellular systems. Benefiting from their ability to reconfigure the wireless channel, flexible antennas can significantly enhance the throughput of wireless networks. However, traditional flexible antennas (e.g., movable antennas [1] and fluid antennas [2]) merely permit the adjustment of the antenna position within a range of orders of magnitude comparable to the carrier wavelength. Against this backdrop, the pinching antenna has emerged [3], which is a type of dielectric waveguide-based leaky wave antenna. 
By applying dielectric particles to a particular point on the dielectric waveguide, a pinching antenna can be activated to establish EM radiation fields and form a communication area [4]. Then, the EM signal inside the dielectric waveguide will be radiated from the pinching antenna to free space with a defined phase shift adjustment (referred to as the pinching beamformer). Notably, as the dielectric waveguide can be pinched at any position to radiate radio waves, the pinching antenna can flexibly move along the dielectric waveguide over a length of dozens of meters, thereby relocating to the closest position to the receiver and creating reliable LoS links." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 45, + 601, + 301, + 637 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 601, + 301, + 637 + ], + "spans": [ + { + "bbox": [ + 45, + 601, + 301, + 637 + ], + "type": "text", + "content": "To enable emerging applications, such as autonomous driving, extended reality, and the Metaverse, sensing functionality is recognized as an important indicator of future networks." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 45, + 647, + 301, + 749 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 45, + 647, + 301, + 684 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 647, + 301, + 684 + ], + "spans": [ + { + "bbox": [ + 45, + 647, + 301, + 684 + ], + "type": "text", + "content": "Zheng Zhang, Bingtao He, and Jian Chen are with the School of Telecommunications Engineering, Xidian University, Xi'an 710071, China (e-mail: zhang_688@stu.xidian.edu.cn; bthe@xidian.edu.cn; jianchen@mail.xidian.edu.cn)." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 45, + 684, + 301, + 711 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 684, + 301, + 711 + ], + "spans": [ + { + "bbox": [ + 45, + 684, + 301, + 711 + ], + "type": "text", + "content": "Zhaolin Wang is with the School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, U.K. (e-mail: zhaolin.wang@qmul.ac.uk)." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 45, + 711, + 301, + 729 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 711, + 301, + 729 + ], + "spans": [ + { + "bbox": [ + 45, + 711, + 301, + 729 + ], + "type": "text", + "content": "Xidong Mu is with Queen's University Belfast, Belfast, BT3 9DT, U.K. (email: x.mu@qub.ac.uk)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 45, + 729, + 301, + 749 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 729, + 301, + 749 + ], + "spans": [ + { + "bbox": [ + 45, + 729, + 301, + 749 + ], + "type": "text", + "content": "Yuanwei Liu is with the Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong (e-mail: yuanwei@hku.hk)." + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "text" + }, + { + "type": "image", + "bbox": [ + 315, + 130, + 559, + 253 + ], + "blocks": [ + { + "bbox": [ + 315, + 130, + 559, + 253 + ], + "lines": [ + { + "bbox": [ + 315, + 130, + 559, + 253 + ], + "spans": [ + { + "bbox": [ + 315, + 130, + 559, + 253 + ], + "type": "image", + "image_path": "0b2360b47c060dad9e520bb285cea2d5b27573ca11ad770199a091aea89544a9.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 309, + 260, + 464, + 271 + ], + "lines": [ + { + "bbox": [ + 309, + 260, + 464, + 271 + ], + "spans": [ + { + "bbox": [ + 309, + 260, + 464, + 271 + ], + "type": "text", + "content": "Fig. 1. The separated ISAC design for PASS." 
+ } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_caption" + } + ], + "index": 13 + }, + { + "bbox": [ + 307, + 284, + 564, + 512 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 284, + 564, + 512 + ], + "spans": [ + { + "bbox": [ + 307, + 284, + 564, + 512 + ], + "type": "text", + "content": "In pursuit of this vision, the integrated sensing and communication (ISAC) technology has drawn significant attention recently [5], which aims to leverage the cellular network hardware platforms and dedicated signal processing algorithms to achieve the incorporation of communication and sensing functionalities. Recently, it has been claimed that conducting ISAC transmission in the pinching antenna systems (PASS) can further upgrade the communication and sensing (C&S) performance of the network [6]. On the one hand, the pinching antenna can be flexibly repositioned to augment the echo signal energy. On the other hand, the wide-range mobility characteristic of pinching antennas results in an antenna aperture spanning dozens of meters. It inherently enables nearfield sensing, e.g., the possibility of simultaneous angular and distance information estimation and even target velocity sensing, thereby offering a more comprehensive and accurate sensing of the surrounding environment. Nevertheless, as of the present moment, research in the PASS-ISAC remains conspicuously absent." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 308, + 512, + 564, + 680 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 512, + 564, + 680 + ], + "spans": [ + { + "bbox": [ + 308, + 512, + 564, + 680 + ], + "type": "text", + "content": "Motivated by the above, this paper proposes a separated ISAC design for PASS. To elaborate, the base station (BS) is connected with two dielectric waveguides, where one waveguide is used to transmit the downlink signals, while the other is employed to collect the reflected echo signals from the target. 
We aim to maximize the illumination power at the target while satisfying the quality-of-service (QoS) requirement of the communication user by optimizing the pinching beamforming offered by the mobility of pinching antennas. A penalty-based alternating optimization (AO) algorithm is proposed to handle the non-convex optimization problem, where the positions of pinching antennas are updated in an element-wise manner. Numerical results evaluate the superiority of the proposed scheme over the baseline schemes." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 325, + 696, + 547, + 707 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 325, + 696, + 547, + 707 + ], + "spans": [ + { + "bbox": [ + 325, + 696, + 547, + 707 + ], + "type": "text", + "content": "II. SYSTEM MODEL AND PROBLEM FORMULATION" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 308, + 712, + 564, + 750 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 712, + 564, + 750 + ], + "spans": [ + { + "bbox": [ + 308, + 712, + 564, + 750 + ], + "type": "text", + "content": "As shown in Fig. 1, we consider a PASS-ISAC system, where a dual-function BS conveys with a single-antenna communication user while sensing a point-like target. 
The" + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "spans": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "type": "text", + "content": "1" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 14, + 190, + 36, + 539 + ], + "type": "aside_text", + "angle": 270, + "lines": [ + { + "bbox": [ + 14, + 190, + 36, + 539 + ], + "spans": [ + { + "bbox": [ + 14, + 190, + 36, + 539 + ], + "type": "text", + "content": "arXiv:2504.07709v3 [cs.IT] 12 May 2025" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 45, + 54, + 301, + 162 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 54, + 301, + 162 + ], + "spans": [ + { + "bbox": [ + 45, + 54, + 301, + 162 + ], + "type": "text", + "content": "BS is connected with two dielectric waveguides of length " + }, + { + "bbox": [ + 45, + 54, + 301, + 162 + ], + "type": "inline_equation", + "content": "L" + }, + { + "bbox": [ + 45, + 54, + 301, + 162 + ], + "type": "text", + "content": ", each of which consists of " + }, + { + "bbox": [ + 45, + 54, + 301, + 162 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 45, + 54, + 301, + 162 + ], + "type": "text", + "content": " pinching antennas. To achieve the simultaneous C&S transmission, a separated ISAC design is proposed. Specifically, the downlink information-bearing signals are emitted from one waveguide (referred to as transmitting antennas). Then, the reflected echoes from the target would be collected at the other waveguide (referred to as receiving antennas), which are further transmitted to the BS for parameter estimation." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 46, + 163, + 302, + 343 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 163, + 302, + 343 + ], + "spans": [ + { + "bbox": [ + 46, + 163, + 302, + 343 + ], + "type": "text", + "content": "A three-dimensional (3D) coordination system is considered, where two dielectric waveguides extended from the BS are assumed to be parallel to the x-axis with respect to the x-o-y plane at a height " + }, + { + "bbox": [ + 46, + 163, + 302, + 343 + ], + "type": "inline_equation", + "content": "d" + }, + { + "bbox": [ + 46, + 163, + 302, + 343 + ], + "type": "text", + "content": ". The position of the " + }, + { + "bbox": [ + 46, + 163, + 302, + 343 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 46, + 163, + 302, + 343 + ], + "type": "text", + "content": "-th pinching antenna distributed along the transmitting and receiving dielectric waveguides can be denoted as " + }, + { + "bbox": [ + 46, + 163, + 302, + 343 + ], + "type": "inline_equation", + "content": "\\psi_{n}^{\\mathrm{p}} = (x_{n}^{\\mathrm{p}},0,d)" + }, + { + "bbox": [ + 46, + 163, + 302, + 343 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 46, + 163, + 302, + 343 + ], + "type": "inline_equation", + "content": "\\psi_{n}^{\\mathrm{q}} = (x_{n}^{\\mathrm{q}},y^{\\mathrm{q}},d)" + }, + { + "bbox": [ + 46, + 163, + 302, + 343 + ], + "type": "text", + "content": ". The communication user and sensing target are located in the x-o-y plane. 
Let " + }, + { + "bbox": [ + 46, + 163, + 302, + 343 + ], + "type": "inline_equation", + "content": "r_{\\mathrm{c}}" + }, + { + "bbox": [ + 46, + 163, + 302, + 343 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 46, + 163, + 302, + 343 + ], + "type": "inline_equation", + "content": "\\varphi_{\\mathrm{c}}" + }, + { + "bbox": [ + 46, + 163, + 302, + 343 + ], + "type": "text", + "content": " denote the distance and the azimuth angle of the communication user relative to the origin of the coordinate system. Thus, the coordinates of communication user is given by " + }, + { + "bbox": [ + 46, + 163, + 302, + 343 + ], + "type": "inline_equation", + "content": "\\psi^{\\mathrm{c}} = (r_{\\mathrm{c}}\\cos \\varphi_{\\mathrm{c}},r_{\\mathrm{c}}\\sin \\varphi_{\\mathrm{c}},0)" + }, + { + "bbox": [ + 46, + 163, + 302, + 343 + ], + "type": "text", + "content": ". Similarly, the target is located in " + }, + { + "bbox": [ + 46, + 163, + 302, + 343 + ], + "type": "inline_equation", + "content": "\\psi^{\\mathrm{s}} = (r_{\\mathrm{s}}\\cos \\varphi_{\\mathrm{s}},r_{\\mathrm{s}}\\sin \\varphi_{\\mathrm{s}},0)" + }, + { + "bbox": [ + 46, + 163, + 302, + 343 + ], + "type": "text", + "content": ". Furthermore, we assume the target is a static node or moves at a low speed. Thus, the Doppler effect is neglected in this work." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 45, + 357, + 127, + 367 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 357, + 127, + 367 + ], + "spans": [ + { + "bbox": [ + 45, + 357, + 127, + 367 + ], + "type": "text", + "content": "A. 
Channel Model" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 45, + 372, + 301, + 479 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 372, + 301, + 479 + ], + "spans": [ + { + "bbox": [ + 45, + 372, + 301, + 479 + ], + "type": "text", + "content": "In the considered network, the pinching antennas are non-uniformly disposed on the dielectric waveguide covering the entire range of the user's activity, which implies that the aperture of the pinching antennas may have the same order of magnitude as the signal transmission distance. Without loss of accuracy, we adopt the spherical-wave-based nearfield channel model, where only the LoS path is considered. Consequently, the distance from the " + }, + { + "bbox": [ + 45, + 372, + 301, + 479 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 45, + 372, + 301, + 479 + ], + "type": "text", + "content": "-th pinching antenna to the target is given by" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 47, + 484, + 299, + 531 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 47, + 484, + 299, + 531 + ], + "spans": [ + { + "bbox": [ + 47, + 484, + 299, + 531 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} r _ {n} ^ {\\zeta} \\left(r _ {\\zeta}, \\varphi_ {\\zeta}\\right) = \\left\\| \\psi^ {\\zeta} - \\psi_ {n} ^ {\\mathrm {p}} \\right\\| \\\\ = \\sqrt {r _ {\\zeta} ^ {2} - 2 r _ {\\zeta} \\cos \\varphi_ {\\zeta} x _ {n} ^ {\\mathrm {p}} + \\left(x _ {n} ^ {\\mathrm {p}}\\right) ^ {2} + d ^ {2}}, \\quad \\zeta \\in \\{\\mathrm {s}, \\mathrm {c} \\}, \\tag {1} \\\\ \\end{array}", + "image_path": "c58017bbc3c1e86e1594943c6de4c1d24909a4bfc5c249f9f2073929ee82e4df.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 45, + 536, + 301, + 572 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 536, + 301, + 572 + ], + "spans": [ + { + "bbox": [ + 45, + 536, + 301, + 572 + ], + "type": "text", + 
"content": "Thus, the free space channel vector from the transmitting antennas to the target and the communication user can be expressed as" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 52, + 576, + 299, + 611 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 52, + 576, + 299, + 611 + ], + "spans": [ + { + "bbox": [ + 52, + 576, + 299, + 611 + ], + "type": "interline_equation", + "content": "\\mathbf {h} _ {\\mathrm {s}} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) = \\left[ \\frac {\\eta^ {\\frac {1}{2}} e ^ {- \\mathcal {I} \\frac {2 \\pi}{\\lambda} r _ {1} ^ {\\mathrm {s}} \\left(r , \\varphi_ {\\mathrm {s}}\\right)}}{r _ {1} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}}, \\varphi_ {\\mathrm {s}}\\right)}, \\dots , \\frac {\\eta^ {\\frac {1}{2}} e ^ {- \\mathcal {I} \\frac {2 \\pi}{\\lambda} r _ {N} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}} , \\varphi_ {\\mathrm {s}}\\right)}}{r _ {N} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}}, \\varphi_ {\\mathrm {s}}\\right)} \\right] ^ {H}, \\tag {2}", + "image_path": "c77e00ee0a50d7f0b237bd893660ffac2aa4b62821db415360a305857382c457.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 52, + 621, + 299, + 657 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 52, + 621, + 299, + 657 + ], + "spans": [ + { + "bbox": [ + 52, + 621, + 299, + 657 + ], + "type": "interline_equation", + "content": "\\mathbf {h} _ {\\mathrm {c}} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) = \\left[ \\frac {\\eta^ {\\frac {1}{2}} e ^ {- j \\frac {2 \\pi}{\\lambda} r _ {1} ^ {\\mathrm {c}} (r , \\varphi_ {\\mathrm {c}})}}{r _ {1} ^ {\\mathrm {c}} (r , \\varphi_ {\\mathrm {c}})}, \\dots , \\frac {\\eta^ {\\frac {1}{2}} e ^ {- j \\frac {2 \\pi}{\\lambda} r _ {N} ^ {\\mathrm {c}} (r , \\varphi_ {\\mathrm {c}})}}{r _ {N} ^ {\\mathrm {c}} (r , \\varphi_ {\\mathrm {c}})} \\right] ^ {H}, \\tag {3}", + "image_path": "12547bb6eaaaedbf00a180816e4c5b8550213d9a6a61232a0db20328d835e8a0.jpg" + 
} + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 45, + 661, + 301, + 711 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 661, + 301, + 711 + ], + "spans": [ + { + "bbox": [ + 45, + 661, + 301, + 711 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 45, + 661, + 301, + 711 + ], + "type": "inline_equation", + "content": "\\mathbf{x}^{\\mathrm{p}} = [x_1^{\\mathrm{p}},\\dots ,x_N^{\\mathrm{p}}]" + }, + { + "bbox": [ + 45, + 661, + 301, + 711 + ], + "type": "text", + "content": " denotes the coordinates of pinching antennas, " + }, + { + "bbox": [ + 45, + 661, + 301, + 711 + ], + "type": "inline_equation", + "content": "\\lambda = \\frac{c}{f_{\\mathrm{c}}}" + }, + { + "bbox": [ + 45, + 661, + 301, + 711 + ], + "type": "text", + "content": " denotes the wavelength, " + }, + { + "bbox": [ + 45, + 661, + 301, + 711 + ], + "type": "inline_equation", + "content": "f_{\\mathrm{c}}" + }, + { + "bbox": [ + 45, + 661, + 301, + 711 + ], + "type": "text", + "content": " is the frequency of the carrier wave, " + }, + { + "bbox": [ + 45, + 661, + 301, + 711 + ], + "type": "inline_equation", + "content": "\\eta = \\frac{c^2}{16\\pi^2f_c^2}" + }, + { + "bbox": [ + 45, + 661, + 301, + 711 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 45, + 661, + 301, + 711 + ], + "type": "inline_equation", + "content": "c" + }, + { + "bbox": [ + 45, + 661, + 301, + 711 + ], + "type": "text", + "content": " denotes the speed of light." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 45, + 712, + 301, + 750 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 712, + 301, + 750 + ], + "spans": [ + { + "bbox": [ + 45, + 712, + 301, + 750 + ], + "type": "text", + "content": "In this paper, the BS aims to utilize the communication signal to achieve simultaneous communication and target sensing. 
Consider a coherent time block of length " + }, + { + "bbox": [ + 45, + 712, + 301, + 750 + ], + "type": "inline_equation", + "content": "T" + }, + { + "bbox": [ + 45, + 712, + 301, + 750 + ], + "type": "text", + "content": ", the" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 307, + 54, + 564, + 150 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 54, + 564, + 150 + ], + "spans": [ + { + "bbox": [ + 307, + 54, + 564, + 150 + ], + "type": "text", + "content": "communication channel condition and the sensing parameters are assumed to remain unchanged during one coherent time block. Thus, the emitted signal at the " + }, + { + "bbox": [ + 307, + 54, + 564, + 150 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 307, + 54, + 564, + 150 + ], + "type": "text", + "content": "-th time slot is given by " + }, + { + "bbox": [ + 307, + 54, + 564, + 150 + ], + "type": "inline_equation", + "content": "s(t) \\in \\mathbb{C}" + }, + { + "bbox": [ + 307, + 54, + 564, + 150 + ], + "type": "text", + "content": ", which is assumed to be normalized and independently distributed, i.e., " + }, + { + "bbox": [ + 307, + 54, + 564, + 150 + ], + "type": "inline_equation", + "content": "\\mathbb{E}\\{|s(t)|^2\\} = 1" + }, + { + "bbox": [ + 307, + 54, + 564, + 150 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 307, + 54, + 564, + 150 + ], + "type": "inline_equation", + "content": "\\mathbb{E}\\{s(t)s^*(\\bar{t})\\} = 0" + }, + { + "bbox": [ + 307, + 54, + 564, + 150 + ], + "type": "text", + "content": ". 
On receiving " + }, + { + "bbox": [ + 307, + 54, + 564, + 150 + ], + "type": "inline_equation", + "content": "s(t)" + }, + { + "bbox": [ + 307, + 54, + 564, + 150 + ], + "type": "text", + "content": ", the dielectric waveguide radiates the signal " + }, + { + "bbox": [ + 307, + 54, + 564, + 150 + ], + "type": "inline_equation", + "content": "\\mathbf{x}(t) = \\sqrt{P_{\\mathrm{T}}} \\mathbf{g}(\\mathbf{x}^{\\mathrm{p}}) s(t)" + }, + { + "bbox": [ + 307, + 54, + 564, + 150 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 307, + 54, + 564, + 150 + ], + "type": "inline_equation", + "content": "\\mathbf{g}(\\mathbf{x}^{\\mathrm{p}})" + }, + { + "bbox": [ + 307, + 54, + 564, + 150 + ], + "type": "text", + "content": " denotes the in-waveguide channel and can be expressed as" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 350, + 157, + 564, + 175 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 350, + 157, + 564, + 175 + ], + "spans": [ + { + "bbox": [ + 350, + 157, + 564, + 175 + ], + "type": "interline_equation", + "content": "\\mathbf {g} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) = \\left[ \\sqrt {\\alpha_ {1}} e ^ {- \\jmath \\theta_ {1}}, \\dots , \\sqrt {\\alpha_ {N}} e ^ {- \\jmath \\theta_ {N}} \\right] ^ {T}, \\tag {4}", + "image_path": "2f18586d75cf9e7fd48594f16bf99b45248229284fcb7d96931b9b3068906a0d.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "spans": [ + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "inline_equation", + "content": "\\theta_{n}" + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "text", + "content": " denotes the radiation phase shift at the " + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "inline_equation", + 
"content": "n" + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "text", + "content": "-th pinching antenna, and " + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "inline_equation", + "content": "P_{\\mathrm{T}}" + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "text", + "content": " denotes the transmit power at the BS. " + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "inline_equation", + "content": "\\alpha_{n}" + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "text", + "content": " denotes the power allocation coefficients at the " + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "text", + "content": "-th pinching antenna, which can be modeled as the equal power allocation model " + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "inline_equation", + "content": "\\sqrt{\\alpha_n} = \\sqrt{\\frac{\\alpha_s}{N}}" + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "text", + "content": " [4] or the proportional power allocation model " + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "inline_equation", + "content": "\\sqrt{\\alpha_n} = \\delta (\\sqrt{1 - \\delta^2})^{n - 1}" + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "text", + "content": " [7]. " + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "inline_equation", + "content": "\\delta = \\sqrt{1 - (1 - \\alpha_s)^{\\frac{1}{N}}}" + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "text", + "content": " represents the proportional coefficient, and " + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "inline_equation", + "content": "\\alpha_{s} = \\sum_{n = 1}^{N}\\alpha_{n}" + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "text", + "content": " denotes the radiation coefficient of pinching antennas. 
For ease of implementation, the equal power allocation model is considered in this paper. " + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "inline_equation", + "content": "\\theta_{n}" + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "text", + "content": " is defined by " + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "inline_equation", + "content": "2\\pi \\eta_{\\mathrm{eff}}\\frac{\\|\\psi_0^{\\mathrm{p}} - \\psi_n^{\\mathrm{p}}\\|}{\\lambda}" + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "inline_equation", + "content": "\\psi_0^{\\mathrm{p}}" + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "text", + "content": " denotes the location of the feed point, and " + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "inline_equation", + "content": "\\eta_{\\mathrm{eff}}" + }, + { + "bbox": [ + 308, + 182, + 564, + 335 + ], + "type": "text", + "content": " denotes the effective refractive index of the dielectric waveguide." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 309, + 354, + 381, + 366 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 309, + 354, + 381, + 366 + ], + "spans": [ + { + "bbox": [ + 309, + 354, + 381, + 366 + ], + "type": "text", + "content": "B. 
Signal Model" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 307, + 371, + 564, + 516 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 371, + 564, + 516 + ], + "spans": [ + { + "bbox": [ + 307, + 371, + 564, + 516 + ], + "type": "text", + "content": "With the above channel model, it is readily observed that the positions of pinching antennas have a significant impact on both the free space channel " + }, + { + "bbox": [ + 307, + 371, + 564, + 516 + ], + "type": "inline_equation", + "content": "\\{\\mathbf{h}_{\\mathrm{s}}(\\mathbf{x}^{\\mathrm{p}}), \\mathbf{h}_{\\mathrm{c}}(\\mathbf{x}^{\\mathrm{p}})\\}" + }, + { + "bbox": [ + 307, + 371, + 564, + 516 + ], + "type": "text", + "content": " and the in-waveguide channel " + }, + { + "bbox": [ + 307, + 371, + 564, + 516 + ], + "type": "inline_equation", + "content": "\\mathbf{g}(\\mathbf{x}^{\\mathrm{p}})" + }, + { + "bbox": [ + 307, + 371, + 564, + 516 + ], + "type": "text", + "content": ". As a result, it becomes possible to establish favorable wireless propagation while manipulating the radiated characteristics of signals by altering the positions of pinching antennas in the PASS. To characterize these two aspects of the signal reconfiguration capabilities of pinching antennas, we collectively refer to them as pinching beamforming in this paper. 
Let " + }, + { + "bbox": [ + 307, + 371, + 564, + 516 + ], + "type": "inline_equation", + "content": "\\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})" + }, + { + "bbox": [ + 307, + 371, + 564, + 516 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 307, + 371, + 564, + 516 + ], + "type": "inline_equation", + "content": "\\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})" + }, + { + "bbox": [ + 307, + 371, + 564, + 516 + ], + "type": "text", + "content": " denote the pinching beamforming for the communication user and the sensing target, which are also the functions of " + }, + { + "bbox": [ + 307, + 371, + 564, + 516 + ], + "type": "inline_equation", + "content": "\\mathbf{x}^{\\mathrm{p}}" + }, + { + "bbox": [ + 307, + 371, + 564, + 516 + ], + "type": "text", + "content": ". " + }, + { + "bbox": [ + 307, + 371, + 564, + 516 + ], + "type": "inline_equation", + "content": "\\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})" + }, + { + "bbox": [ + 307, + 371, + 564, + 516 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 307, + 371, + 564, + 516 + ], + "type": "inline_equation", + "content": "\\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})" + }, + { + "bbox": [ + 307, + 371, + 564, + 516 + ], + "type": "text", + "content": " are given by" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 318, + 521, + 563, + 558 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 318, + 521, + 563, + 558 + ], + "spans": [ + { + "bbox": [ + 318, + 521, + 563, + 558 + ], + "type": "interline_equation", + "content": "\\mathbf {w} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) = \\left[ \\frac {e ^ {- j \\left(\\frac {2 \\pi}{\\lambda} r _ {1} ^ {\\mathrm {c}} \\left(r _ {\\mathrm {c}} , \\varphi_ {\\mathrm {c}}\\right) + \\theta_ {1}\\right)}}{\\frac {1}{\\sqrt {\\alpha_ {1}}} r _ {1} ^ {\\mathrm {c}} \\left(r _ {\\mathrm {c}} , \\varphi_ {\\mathrm {c}}\\right)}, \\dots , \\frac {e ^ {- j \\left(\\frac {2 \\pi}{\\lambda} r _ {N} ^ {\\mathrm {c}} \\left(r _ 
{\\mathrm {c}} , \\varphi_ {\\mathrm {c}}\\right) + \\theta_ {N}\\right)}}{\\frac {1}{\\sqrt {\\alpha_ {N}}} r _ {N} ^ {\\mathrm {c}} \\left(r _ {\\mathrm {c}} , \\varphi_ {\\mathrm {c}}\\right)} \\right] ^ {T}, \\tag {5}", + "image_path": "93a370c6e410665d11c7afcd68e9edd8741f8fa32128c183068c7c244df8f96f.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 320, + 572, + 563, + 608 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 572, + 563, + 608 + ], + "spans": [ + { + "bbox": [ + 320, + 572, + 563, + 608 + ], + "type": "interline_equation", + "content": "\\mathbf {v} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) = \\left[ \\frac {e ^ {- \\jmath \\left(\\frac {2 \\pi}{\\lambda} r _ {1} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}} , \\varphi_ {\\mathrm {s}}\\right) + \\theta_ {1}\\right)}}{\\frac {1}{\\sqrt {\\alpha_ {1}}} r _ {1} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}} , \\varphi_ {\\mathrm {s}}\\right)}, \\dots , \\frac {e ^ {- \\jmath \\left(\\frac {2 \\pi}{\\lambda} r _ {N} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}} , \\varphi_ {\\mathrm {s}}\\right) + \\theta_ {N}\\right)}}{\\frac {1}{\\sqrt {\\alpha_ {N}}} r _ {N} ^ {\\mathrm {s}} \\left(r _ {\\mathrm {s}} , \\varphi_ {\\mathrm {s}}\\right)} \\right] ^ {T}. \\tag {6}", + "image_path": "7f66a79b90cbcd32015241ca007fe01ddc4fc2e02a246835bbffd5d1616b327f.jpg" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 308, + 613, + 564, + 673 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 613, + 564, + 673 + ], + "spans": [ + { + "bbox": [ + 308, + 613, + 564, + 673 + ], + "type": "text", + "content": "In this paper, we consider an ideal activation model of the pinching antenna, i.e., continuous activation. It indicates that the pinching antennas can be activated at any position of the dielectric waveguide. 
Thus, the positions of pinching antennas satisfy" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 312, + 679, + 563, + 717 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 679, + 563, + 717 + ], + "spans": [ + { + "bbox": [ + 312, + 679, + 563, + 717 + ], + "type": "interline_equation", + "content": "\\mathbf {x} ^ {\\mathrm {p}} \\in \\mathcal {X} = \\left\\{\\left| x _ {n} ^ {\\mathrm {p}} - x _ {m} ^ {\\mathrm {p}} \\right| \\geq \\Delta x (n \\neq m), x _ {n} ^ {\\mathrm {p}} \\in \\left[ - \\frac {L}{2}, \\frac {L}{2} \\right] \\right\\}, \\tag {7}", + "image_path": "1c50fb2006519fbd03255a339efa31153dc0aeb93402312de991b136218151d0.jpg" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 308, + 724, + 564, + 749 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 724, + 564, + 749 + ], + "spans": [ + { + "bbox": [ + 308, + 724, + 564, + 749 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 308, + 724, + 564, + 749 + ], + "type": "inline_equation", + "content": "\\Delta x" + }, + { + "bbox": [ + 308, + 724, + 564, + 749 + ], + "type": "text", + "content": " represents the minimum antenna spacing between two adjacent pinching antennas." 
+ } + ] + } + ], + "index": 20 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "spans": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 0 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 45, + 55, + 301, + 91 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 55, + 301, + 91 + ], + "spans": [ + { + "bbox": [ + 45, + 55, + 301, + 91 + ], + "type": "text", + "content": "1) Communication Performance Metric: With the aforementioned signal model, the received signals at the communication user are given by" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 94, + 95, + 299, + 127 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 95, + 299, + 127 + ], + "spans": [ + { + "bbox": [ + 94, + 95, + 299, + 127 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} y (t) = \\sqrt {P _ {\\mathrm {T}}} \\mathbf {h} _ {\\mathrm {c}} ^ {H} (\\mathbf {x} ^ {\\mathrm {p}}) \\mathbf {g} (\\mathbf {x} ^ {\\mathrm {p}}) s (t) + n (t) \\\\ = \\sqrt {P _ {\\mathrm {T}}} \\boldsymbol {\\eta} ^ {H} \\mathbf {w} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) s (t) + n (t), \\tag {8} \\\\ \\end{array}", + "image_path": "44d3f1e69a13fe50d6f690862446c593f94bfd468aa2855d84818a2e36ae7b67.jpg" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 45, + 132, + 301, + 182 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 132, + 301, + 182 + ], + "spans": [ + { + "bbox": [ + 45, + 132, + 301, + 182 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 45, + 132, + 301, + 182 + ], + "type": "inline_equation", + "content": "\\pmb {\\eta} = [\\eta^{\\frac{1}{2}},\\dots ,\\eta^{\\frac{1}{2}}]^{T}\\in \\mathbb{C}^{N\\times 1}" + }, + { + "bbox": [ + 
45, + 132, + 301, + 182 + ], + "type": "text", + "content": " is a constant vector, and " + }, + { + "bbox": [ + 45, + 132, + 301, + 182 + ], + "type": "inline_equation", + "content": "n(t)\\sim \\mathcal{CN}(0,\\sigma^2)" + }, + { + "bbox": [ + 45, + 132, + 301, + 182 + ], + "type": "text", + "content": " denotes the additive white Gaussian noise (AWGN) at the communication user. Hence, the achievable rate of the communication user is given by" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 101, + 185, + 301, + 213 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 101, + 185, + 301, + 213 + ], + "spans": [ + { + "bbox": [ + 101, + 185, + 301, + 213 + ], + "type": "interline_equation", + "content": "R = \\log_ {2} \\left(1 + \\frac {P _ {\\mathrm {T}} \\left| \\boldsymbol {\\eta} ^ {H} \\mathbf {w} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) \\right| ^ {2}}{\\sigma^ {2}}\\right). \\tag {9}", + "image_path": "59813c5544a2f6ef4acbf6bcb0793a89adac5c4c416df3f8422387c4c65a7709.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 45, + 216, + 301, + 276 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 216, + 301, + 276 + ], + "spans": [ + { + "bbox": [ + 45, + 216, + 301, + 276 + ], + "type": "text", + "content": "2) Sensing Performance Metric: For target sensing, we adopt the illumination power as the performance metric, which characterizes the received sensing signal power at the target [8]. 
Thus, the illumination power with respect to azimuth angle " + }, + { + "bbox": [ + 45, + 216, + 301, + 276 + ], + "type": "inline_equation", + "content": "\\varphi_{\\mathrm{s}}" + }, + { + "bbox": [ + 45, + 216, + 301, + 276 + ], + "type": "text", + "content": " and distance " + }, + { + "bbox": [ + 45, + 216, + 301, + 276 + ], + "type": "inline_equation", + "content": "r_{\\mathrm{s}}" + }, + { + "bbox": [ + 45, + 216, + 301, + 276 + ], + "type": "text", + "content": " is given by" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 96, + 280, + 299, + 322 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 96, + 280, + 299, + 322 + ], + "spans": [ + { + "bbox": [ + 96, + 280, + 299, + 322 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} P _ {\\mathrm {s}} = \\mathbb {E} \\left\\{\\left| \\sqrt {P _ {\\mathrm {T}}} \\mathbf {h} _ {\\mathrm {s}} ^ {H} (\\mathbf {x} ^ {\\mathrm {p}}) \\mathbf {g} (\\mathbf {x} ^ {\\mathrm {p}}) s (t) \\right| ^ {2} \\right\\} \\\\ = P _ {\\mathrm {T}} \\boldsymbol {\\eta} ^ {H} \\mathbf {v} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) \\mathbf {v} ^ {H} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) \\boldsymbol {\\eta}. \\tag {10} \\\\ \\end{array}", + "image_path": "1b45b2924f3fd5b8cbd468a2c15619a6b18ab7cd6c1ec562efd09b8cbe2977e8.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 46, + 335, + 151, + 346 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 335, + 151, + 346 + ], + "spans": [ + { + "bbox": [ + 46, + 335, + 151, + 346 + ], + "type": "text", + "content": "C. 
Problem Formulation" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 45, + 350, + 301, + 398 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 350, + 301, + 398 + ], + "spans": [ + { + "bbox": [ + 45, + 350, + 301, + 398 + ], + "type": "text", + "content": "In this paper, we aim to maximize the illumination power " + }, + { + "bbox": [ + 45, + 350, + 301, + 398 + ], + "type": "inline_equation", + "content": "P_{\\mathrm{s}}" + }, + { + "bbox": [ + 45, + 350, + 301, + 398 + ], + "type": "text", + "content": " by designing the pinching beamformer, under the transmit power budget and communication QoS requirement, which is given by" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 136, + 403, + 299, + 420 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 136, + 403, + 299, + 420 + ], + "spans": [ + { + "bbox": [ + 136, + 403, + 299, + 420 + ], + "type": "interline_equation", + "content": "\\left(\\mathrm {P} 1\\right) \\quad \\max _ {\\mathbf {x} ^ {\\mathrm {p}}} P _ {\\mathrm {s}} \\tag {11a}", + "image_path": "e7864c38cead7b15306fff924dff4dcaa5c2ba98e2c331002c6cae33a84eafec.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 143, + 422, + 299, + 435 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 143, + 422, + 299, + 435 + ], + "spans": [ + { + "bbox": [ + 143, + 422, + 299, + 435 + ], + "type": "interline_equation", + "content": "\\text {s . 
t .} \\quad R \\geq R _ {\\mathrm {Q o S}}, \\tag {11b}", + "image_path": "cbc048cc53c8e7d6cdb46ad0611b88179b4ba39d96f7dc4d01a857ace5310960.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 164, + 437, + 299, + 449 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 164, + 437, + 299, + 449 + ], + "spans": [ + { + "bbox": [ + 164, + 437, + 299, + 449 + ], + "type": "interline_equation", + "content": "\\mathbf {x} ^ {\\mathrm {p}} \\in \\mathcal {X}, \\tag {11c}", + "image_path": "c5bffdf48a79ca143499849a2ffacee76779c98c1520e9501f414ac403729b4d.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 45, + 456, + 301, + 493 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 456, + 301, + 493 + ], + "spans": [ + { + "bbox": [ + 45, + 456, + 301, + 493 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 45, + 456, + 301, + 493 + ], + "type": "inline_equation", + "content": "R_{\\mathrm{QoS}}" + }, + { + "bbox": [ + 45, + 456, + 301, + 493 + ], + "type": "text", + "content": " denotes the QoS requirement of the communication user. The problem (P1) is challenging to solve due to the quadratic objective function and the coupled variables." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 73, + 505, + 274, + 516 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 73, + 505, + 274, + 516 + ], + "spans": [ + { + "bbox": [ + 73, + 505, + 274, + 516 + ], + "type": "text", + "content": "III. PINCHING BEAMFORMING OPTIMIZATION" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 45, + 520, + 301, + 578 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 520, + 301, + 578 + ], + "spans": [ + { + "bbox": [ + 45, + 520, + 301, + 578 + ], + "type": "text", + "content": "In this section, we focus on the C&S transmission design by optimizing the pinching beamforming. 
To deal with the coupled optimization variables, a penalty-based AO algorithm is proposed, where " + }, + { + "bbox": [ + 45, + 520, + 301, + 578 + ], + "type": "inline_equation", + "content": "\\{\\mathbf{x}^{\\mathrm{p}}\\}" + }, + { + "bbox": [ + 45, + 520, + 301, + 578 + ], + "type": "text", + "content": " is optimized in an element-wise manner." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 45, + 579, + 301, + 603 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 579, + 301, + 603 + ], + "spans": [ + { + "bbox": [ + 45, + 579, + 301, + 603 + ], + "type": "text", + "content": "To facilitate the optimization, we can rewrite the problem (P1) as" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 98, + 608, + 299, + 626 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 98, + 608, + 299, + 626 + ], + "spans": [ + { + "bbox": [ + 98, + 608, + 299, + 626 + ], + "type": "interline_equation", + "content": "\\left(\\mathrm {P} 2\\right) \\max _ {\\mathbf {x} ^ {\\mathrm {p}}} | \\boldsymbol {\\eta} ^ {H} \\mathbf {v} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) | ^ {2} \\tag {12a}", + "image_path": "60c1933de76e2e689a89efa5c67cb1c22dc781f49c9adfb58ad66ada202cfe31.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 129, + 627, + 299, + 642 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 627, + 299, + 642 + ], + "spans": [ + { + "bbox": [ + 129, + 627, + 299, + 642 + ], + "type": "interline_equation", + "content": "\\text {s . 
t .} \quad | \boldsymbol {\eta} ^ {H} \mathbf {w} \left(\mathbf {x} ^ {\mathrm {p}}\right) | ^ {2} \geq \gamma_ {\mathrm {Q o S}} \sigma^ {2}, \tag {12b}", + "image_path": "0d0bfa337aa8f8602c2847902b1f1035946193fdb9f65d96e7b2073df4156167.jpg" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 151, + 643, + 299, + 656 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 643, + 299, + 656 + ], + "spans": [ + { + "bbox": [ + 151, + 643, + 299, + 656 + ], + "type": "interline_equation", + "content": "(1 1 c), \tag {12c}", + "image_path": "891dc6968ce4cd90565aa6e2e26a74fbb888d3faee69bcaceda2a438c639daac.jpg" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 45, + 662, + 141, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 662, + 141, + 677 + ], + "spans": [ + { + "bbox": [ + 45, + 662, + 141, + 677 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 45, + 662, + 141, + 677 + ], + "type": "inline_equation", + "content": "\gamma_{\mathrm{QoS}} = \frac{2^{R_{\mathrm{QoS}}} - 1}{P_{\mathrm{T}}}" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 45, + 677, + 301, + 750 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 677, + 301, + 750 + ], + "spans": [ + { + "bbox": [ + 45, + 677, + 301, + 750 + ], + "type": "text", + "content": "In order to deal with the intractable objective and constraints, we consider a penalty-based two-layer framework. 
To elaborate, we introduce auxiliary variables " + }, + { + "bbox": [ + 45, + 677, + 301, + 750 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{w}}" + }, + { + "bbox": [ + 45, + 677, + 301, + 750 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 45, + 677, + 301, + 750 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{v}}" + }, + { + "bbox": [ + 45, + 677, + 301, + 750 + ], + "type": "text", + "content": " to replace " + }, + { + "bbox": [ + 45, + 677, + 301, + 750 + ], + "type": "inline_equation", + "content": "\\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})" + }, + { + "bbox": [ + 45, + 677, + 301, + 750 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 45, + 677, + 301, + 750 + ], + "type": "inline_equation", + "content": "\\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})" + }, + { + "bbox": [ + 45, + 677, + 301, + 750 + ], + "type": "text", + "content": ", respectively. Thus, we have the equality constraints " + }, + { + "bbox": [ + 45, + 677, + 301, + 750 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{w}} = \\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})" + }, + { + "bbox": [ + 45, + 677, + 301, + 750 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 45, + 677, + 301, + 750 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{v}} = \\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})" + }, + { + "bbox": [ + 45, + 677, + 301, + 750 + ], + "type": "text", + "content": ". 
By relocating the equality constraints to the objective function as a" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 308, + 55, + 563, + 77 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 55, + 563, + 77 + ], + "spans": [ + { + "bbox": [ + 308, + 55, + 563, + 77 + ], + "type": "text", + "content": "penalty term, the problem (P2) can be equivalently rewritten as" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 353, + 84, + 563, + 108 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 353, + 84, + 563, + 108 + ], + "spans": [ + { + "bbox": [ + 353, + 84, + 563, + 108 + ], + "type": "interline_equation", + "content": "\left(\mathrm {P} 3\right) \max _ {\mathbf {x} ^ {\mathrm {p}}, \tilde {\mathbf {w}}, \tilde {\mathbf {v}}} | \boldsymbol {\eta} ^ {H} \tilde {\mathbf {v}} | ^ {2} - \frac {1}{2 \varrho} \chi_ {1} \left(\mathbf {x} ^ {\mathrm {p}}, \tilde {\mathbf {w}}, \tilde {\mathbf {v}}\right) \tag {13a}", + "image_path": "0cd05bfc1d634920178ba31cf939c29d09b081f0ca0c717e59c54296cc2e19fc.jpg" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 389, + 109, + 563, + 124 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 389, + 109, + 563, + 124 + ], + "spans": [ + { + "bbox": [ + 389, + 109, + 563, + 124 + ], + "type": "interline_equation", + "content": "\text {s . 
t .} \\quad | \\boldsymbol {\\eta} ^ {H} \\tilde {\\mathbf {w}} | ^ {2} \\geq \\gamma_ {\\mathrm {Q o S}} \\sigma^ {2}, \\tag {13b}", + "image_path": "c3c880288547887b44931e6ff294f1da6e7e005bf6d870ca142166b8efed9c73.jpg" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 411, + 125, + 563, + 151 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 411, + 125, + 563, + 151 + ], + "spans": [ + { + "bbox": [ + 411, + 125, + 563, + 151 + ], + "type": "interline_equation", + "content": "\\left| \\tilde {\\mathbf {w}} _ {[ n ]} \\right| ^ {2} \\leq \\frac {1}{N r _ {\\operatorname* {m i n} , \\mathrm {c}} ^ {2}}, \\tag {13c}", + "image_path": "267f986c946e20400fbb29f4c31e9636b84ebfccfbdc58164104057df175ee2c.jpg" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 411, + 152, + 563, + 177 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 411, + 152, + 563, + 177 + ], + "spans": [ + { + "bbox": [ + 411, + 152, + 563, + 177 + ], + "type": "interline_equation", + "content": "\\left| \\tilde {\\mathbf {v}} _ {[ n ]} \\right| ^ {2} \\leq \\frac {1}{N r _ {\\operatorname* {m i n} , \\mathrm {s}} ^ {2}}, \\tag {13d}", + "image_path": "5b50ff7eb5c84c71768b9a110f49e45819eac8f5c5d9e671dc5c4ee71e6f973d.jpg" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 410, + 180, + 563, + 192 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 410, + 180, + 563, + 192 + ], + "spans": [ + { + "bbox": [ + 410, + 180, + 563, + 192 + ], + "type": "interline_equation", + "content": "(1 1 \\mathrm {c}), \\tag {13e}", + "image_path": "62143ca4e201778589ee99328eb4964e4a05eae5174a356be01876c847d1bbba.jpg" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 307, + 201, + 564, + 321 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 201, + 564, + 321 + ], + "spans": [ + { + "bbox": [ + 307, + 201, + 564, + 321 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 307, 
+ 201, + 564, + 321 + ], + "type": "inline_equation", + "content": "\\chi_{1}(\\mathbf{x}^{\\mathrm{p}},\\tilde{\\mathbf{w}},\\tilde{\\mathbf{v}}) = \\| \\tilde{\\mathbf{w}} -\\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})\\| +\\| \\tilde{\\mathbf{v}} -\\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})\\|" + }, + { + "bbox": [ + 307, + 201, + 564, + 321 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 307, + 201, + 564, + 321 + ], + "type": "inline_equation", + "content": "\\varrho" + }, + { + "bbox": [ + 307, + 201, + 564, + 321 + ], + "type": "text", + "content": " denotes the scaling factor of the penalty terms. Note that to avoid the infinite objective value, we introduce constraints (13c) and (13d), where " + }, + { + "bbox": [ + 307, + 201, + 564, + 321 + ], + "type": "inline_equation", + "content": "r_{\\min ,\\mathrm{c}} = \\sqrt{(r_{\\mathrm{c}}\\sin\\varphi_{\\mathrm{c}})^2 + d^2}" + }, + { + "bbox": [ + 307, + 201, + 564, + 321 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 307, + 201, + 564, + 321 + ], + "type": "inline_equation", + "content": "r_{\\min ,\\mathrm{s}} = \\sqrt{(r_{\\mathrm{s}}\\sin\\varphi_{\\mathrm{s}})^2 + d^2}" + }, + { + "bbox": [ + 307, + 201, + 564, + 321 + ], + "type": "text", + "content": " denote the lower bounds of the distances from an arbitrary pinching antenna to the communication user and target. The problem (P3) is equivalent to the problem (P1) as constraints (13c) and (13d) can be obtained from the (11c), which restricts pinching beamforming " + }, + { + "bbox": [ + 307, + 201, + 564, + 321 + ], + "type": "inline_equation", + "content": "\\{\\tilde{\\mathbf{w}},\\tilde{\\mathbf{v}}\\}" + }, + { + "bbox": [ + 307, + 201, + 564, + 321 + ], + "type": "text", + "content": " to the feasible region." 
+ } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 308, + 322, + 563, + 346 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 322, + 563, + 346 + ], + "spans": [ + { + "bbox": [ + 308, + 322, + 563, + 346 + ], + "type": "text", + "content": "To address the quadratic objective and constraints, we apply the SDR technique to rewrite the problem (P3) as follows." + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 330, + 354, + 563, + 378 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 330, + 354, + 563, + 378 + ], + "spans": [ + { + "bbox": [ + 330, + 354, + 563, + 378 + ], + "type": "interline_equation", + "content": "\left(\mathrm {P} 4\right) \max _ {\mathbf {x} ^ {\mathrm {p}}, \tilde {\mathbf {W}}, \tilde {\mathbf {V}}} \operatorname {T r} \left(\boldsymbol {\eta} \boldsymbol {\eta} ^ {H} \tilde {\mathbf {V}}\right) - \frac {1}{2 \varrho} \chi_ {2} \left(\mathbf {x} ^ {\mathrm {p}}, \tilde {\mathbf {W}}, \tilde {\mathbf {V}}\right) \tag {14a}", + "image_path": "4f6e6d66b2abc99b271998e3d21a3ed90eea2071b291a6210ea2984e101ee612.jpg" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 336, + 380, + 563, + 407 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 336, + 380, + 563, + 407 + ], + "spans": [ + { + "bbox": [ + 336, + 380, + 563, + 407 + ], + "type": "interline_equation", + "content": "\text {s . 
t .} \\quad \\tilde {\\mathbf {W}} _ {[ n, n ]} \\leq \\frac {1}{N r _ {\\min , c} ^ {2}}, \\tag {14b}", + "image_path": "10659bc1f686405d24c46ece702f4f5a22d99829ebb726e35de62e945de9e404.jpg" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 358, + 407, + 563, + 433 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 358, + 407, + 563, + 433 + ], + "spans": [ + { + "bbox": [ + 358, + 407, + 563, + 433 + ], + "type": "interline_equation", + "content": "\\tilde {\\mathbf {V}} _ {[ n, n ]} \\leq \\frac {1}{N r _ {\\min , s} ^ {2}}, \\tag {14c}", + "image_path": "dd72341bab1fc29e713d6029a7f726ce24cea94f65fa5ee7878e33299a307173.jpg" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 358, + 434, + 563, + 449 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 358, + 434, + 563, + 449 + ], + "spans": [ + { + "bbox": [ + 358, + 434, + 563, + 449 + ], + "type": "interline_equation", + "content": "\\operatorname {T r} \\left(\\boldsymbol {\\eta} \\boldsymbol {\\eta} ^ {H} \\tilde {\\mathbf {W}}\\right) \\geq \\gamma_ {\\mathrm {Q o S}} \\sigma^ {2}, \\tag {14d}", + "image_path": "d5fac8eba75225050b351775dd0085d4c4fcfd1b8d90f320f836243effb25409.jpg" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 358, + 451, + 563, + 464 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 358, + 451, + 563, + 464 + ], + "spans": [ + { + "bbox": [ + 358, + 451, + 563, + 464 + ], + "type": "interline_equation", + "content": "\\operatorname {r a n k} (\\tilde {\\mathbf {W}}) = 1, \\operatorname {r a n k} (\\tilde {\\mathbf {V}}) = 1, \\tag {14e}", + "image_path": "5351f8bfd71d7927d94f229d2bb6428e97f727ef3ba8e9185f311a349a2945cb.jpg" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 358, + 466, + 563, + 480 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 358, + 466, + 563, + 480 + ], + "spans": [ + { + "bbox": [ + 358, + 466, + 563, + 480 + ], + "type": 
"interline_equation", + "content": "\\tilde {\\mathbf {W}} \\succeq \\mathbf {0}, \\tilde {\\mathbf {V}} \\succeq \\mathbf {0}, \\tag {14f}", + "image_path": "5709b76dd976bfde151e61d84d403c4585e3940ea125b68ae4baac5d20a6e9a0.jpg" + } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 358, + 483, + 563, + 496 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 358, + 483, + 563, + 496 + ], + "spans": [ + { + "bbox": [ + 358, + 483, + 563, + 496 + ], + "type": "interline_equation", + "content": "(1 1 \\mathrm {c}), \\tag {14g}", + "image_path": "31cafb3f3f7eef6b2162456f78c353a5ffb95c9e6722cbd6595fbdb3586f675e.jpg" + } + ] + } + ], + "index": 35 + }, + { + "bbox": [ + 308, + 505, + 564, + 577 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 505, + 564, + 577 + ], + "spans": [ + { + "bbox": [ + 308, + 505, + 564, + 577 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 308, + 505, + 564, + 577 + ], + "type": "inline_equation", + "content": "\\mathbf{W}(\\mathbf{x}^{\\mathrm{p}}) = \\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})\\mathbf{w}^{H}(\\mathbf{x}^{\\mathrm{p}})" + }, + { + "bbox": [ + 308, + 505, + 564, + 577 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 308, + 505, + 564, + 577 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{W}} = \\tilde{\\mathbf{w}}\\tilde{\\mathbf{w}}^{H}" + }, + { + "bbox": [ + 308, + 505, + 564, + 577 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 308, + 505, + 564, + 577 + ], + "type": "inline_equation", + "content": "\\mathbf{V}(\\mathbf{x}^{\\mathrm{p}}) = \\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})\\mathbf{v}^{H}(\\mathbf{x}^{\\mathrm{p}})" + }, + { + "bbox": [ + 308, + 505, + 564, + 577 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 308, + 505, + 564, + 577 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{V}} = \\tilde{\\mathbf{v}}\\tilde{\\mathbf{v}}^{H}" + }, + { + "bbox": [ + 308, + 505, + 
564, + 577 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 308, + 505, + 564, + 577 + ], + "type": "inline_equation", + "content": "\\chi_{2}(\\mathbf{x}^{\\mathrm{p}},\\tilde{\\mathbf{W}},\\tilde{\\mathbf{V}}) = \\| \\tilde{\\mathbf{W}} - \\mathbf{W}(\\mathbf{x}^{\\mathrm{p}})\\|_{F} + \\| \\tilde{\\mathbf{V}} - \\mathbf{V}(\\mathbf{x}^{\\mathrm{p}})\\|_{F}" + }, + { + "bbox": [ + 308, + 505, + 564, + 577 + ], + "type": "text", + "content": ". To solve the problem (P4), we propose a penalty-based AO algorithm, which alternately optimizes " + }, + { + "bbox": [ + 308, + 505, + 564, + 577 + ], + "type": "inline_equation", + "content": "\\{\\tilde{\\mathbf{W}},\\tilde{\\mathbf{V}}\\}" + }, + { + "bbox": [ + 308, + 505, + 564, + 577 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 308, + 505, + 564, + 577 + ], + "type": "inline_equation", + "content": "\\{\\mathbf{x}^{\\mathrm{p}}\\}" + }, + { + "bbox": [ + 308, + 505, + 564, + 577 + ], + "type": "text", + "content": " in the inner layer and updates " + }, + { + "bbox": [ + 308, + 505, + 564, + 577 + ], + "type": "inline_equation", + "content": "\\varrho" + }, + { + "bbox": [ + 308, + 505, + 564, + 577 + ], + "type": "text", + "content": " in the outer layer." 
+ } + ] + } + ], + "index": 36 + }, + { + "bbox": [ + 309, + 579, + 563, + 613 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 309, + 579, + 563, + 613 + ], + "spans": [ + { + "bbox": [ + 309, + 579, + 563, + 613 + ], + "type": "text", + "content": "1) Inner layer iteration—subproblem with respect to " + }, + { + "bbox": [ + 309, + 579, + 563, + 613 + ], + "type": "inline_equation", + "content": "\\{\\tilde{\\mathbf{W}},\\tilde{\\mathbf{V}}\\}" + }, + { + "bbox": [ + 309, + 579, + 563, + 613 + ], + "type": "text", + "content": ": With the fixed " + }, + { + "bbox": [ + 309, + 579, + 563, + 613 + ], + "type": "inline_equation", + "content": "\\{\\mathbf{x}^{\\mathrm{p}}\\}" + }, + { + "bbox": [ + 309, + 579, + 563, + 613 + ], + "type": "text", + "content": ", the problem (P4) is reduced to" + } + ] + } + ], + "index": 37 + }, + { + "bbox": [ + 337, + 620, + 563, + 646 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 337, + 620, + 563, + 646 + ], + "spans": [ + { + "bbox": [ + 337, + 620, + 563, + 646 + ], + "type": "interline_equation", + "content": "\\left(\\mathrm {P} 5\\right) \\max _ {\\tilde {\\mathbf {W}}, \\tilde {\\mathbf {V}}} \\operatorname {T r} \\left(\\boldsymbol {\\eta} \\boldsymbol {\\eta} ^ {H} \\tilde {\\mathbf {V}}\\right) - \\frac {1}{2 \\varrho} \\chi_ {2} \\left(\\mathbf {x} ^ {\\mathrm {p}}, \\tilde {\\mathbf {W}}, \\tilde {\\mathbf {V}}\\right) \\tag {15a}", + "image_path": "68a4d213baad4e98e487138278ae58a9873d59c8c44a1071d8bba9cce87a2644.jpg" + } + ] + } + ], + "index": 38 + }, + { + "bbox": [ + 361, + 648, + 563, + 661 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 361, + 648, + 563, + 661 + ], + "spans": [ + { + "bbox": [ + 361, + 648, + 563, + 661 + ], + "type": "interline_equation", + "content": "\\text {s . t .} \\quad (1 4 b) - (1 4 f). 
\\tag {15b}", + "image_path": "4e5b5f680cd94e2f642a0947f3f836ae452d6c10760d19e815c406e8ea43496f.jpg" + } + ] + } + ], + "index": 39 + }, + { + "bbox": [ + 308, + 670, + 564, + 707 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 670, + 564, + 707 + ], + "spans": [ + { + "bbox": [ + 308, + 670, + 564, + 707 + ], + "type": "text", + "content": "To handle the rank-one constraint, we introduce non-negative auxiliary variables " + }, + { + "bbox": [ + 308, + 670, + 564, + 707 + ], + "type": "inline_equation", + "content": "\\{\\varpi_1,\\varpi_2\\}" + }, + { + "bbox": [ + 308, + 670, + 564, + 707 + ], + "type": "text", + "content": " and employ the difference-of-convex (DC) relaxation method [9] to rewrite the (14c) as" + } + ] + } + ], + "index": 40 + }, + { + "bbox": [ + 320, + 714, + 563, + 746 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 714, + 563, + 746 + ], + "spans": [ + { + "bbox": [ + 320, + 714, + 563, + 746 + ], + "type": "interline_equation", + "content": "\\left\\{ \\begin{array}{l} \\Re (\\operatorname {T r} (\\tilde {\\mathbf {W}} ^ {H} (\\mathbf {I} - \\tilde {\\mathbf {w}} _ {\\max } \\tilde {\\mathbf {w}} _ {\\max } ^ {H}))) \\leq \\varpi_ {1}, \\\\ \\Re (\\operatorname {T r} (\\tilde {\\mathbf {V}} ^ {H} (\\mathbf {I} - \\tilde {\\mathbf {v}} _ {\\max } \\tilde {\\mathbf {v}} _ {\\max } ^ {H}))) \\leq \\varpi_ {2}, \\end{array} \\quad i \\in \\{1, 2 \\}, \\right. 
\\tag {16}", + "image_path": "5ccbbcfdd3f9939d9ef51402ce759971169e69e8bb4ed76cb4025b955edfd91e.jpg" + } + ] + } + ], + "index": 41 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "spans": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 0 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 47, + 53, + 272, + 64 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 47, + 53, + 272, + 64 + ], + "spans": [ + { + "bbox": [ + 47, + 53, + 272, + 64 + ], + "type": "text", + "content": "Algorithm 1 Iterative algorithm for rank-one solution." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 52, + 66, + 299, + 79 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 52, + 66, + 299, + 79 + ], + "spans": [ + { + "bbox": [ + 52, + 66, + 299, + 79 + ], + "type": "text", + "content": "1: Initialize " + }, + { + "bbox": [ + 52, + 66, + 299, + 79 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{v}}_{\\mathrm{max}}" + }, + { + "bbox": [ + 52, + 66, + 299, + 79 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 52, + 66, + 299, + 79 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{w}}_{\\mathrm{max}}" + }, + { + "bbox": [ + 52, + 66, + 299, + 79 + ], + "type": "text", + "content": ". Set a convergence accuracy " + }, + { + "bbox": [ + 52, + 66, + 299, + 79 + ], + "type": "inline_equation", + "content": "\\epsilon_{1}" + }, + { + "bbox": [ + 52, + 66, + 299, + 79 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 52, + 91, + 93, + 101 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 52, + 91, + 93, + 101 + ], + "spans": [ + { + "bbox": [ + 52, + 91, + 93, + 101 + ], + "type": "text", + "content": "2: repeat" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 52, + 101, + 279, + 150 + ], + "type": "list", + "angle": 0, + "index": 8, + "blocks": [ + { + "bbox": [ + 52, + 101, + 279, + 114 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 52, + 101, + 279, + 114 + ], + "spans": [ + { + "bbox": [ + 52, + 101, + 279, + 114 + ], + "type": "text", + "content": "3: update " + }, + { + "bbox": [ + 52, + 101, + 279, + 114 + ], + "type": "inline_equation", + "content": "\\{\\tilde{\\mathbf{W}},\\tilde{\\mathbf{V}},\\varpi_i\\}" + }, + { + "bbox": [ + 52, + 101, + 279, + 114 + ], + "type": "text", + "content": " by solving the problem (P6)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 53, + 114, + 231, + 126 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 114, + 231, + 126 + ], + "spans": [ + { + "bbox": [ + 53, + 114, + 231, + 126 + ], + "type": "text", + "content": "4: update the eigenvectors " + }, + { + "bbox": [ + 53, + 114, + 231, + 126 + ], + "type": "inline_equation", + "content": "\\{\\tilde{\\mathbf{w}}_{\\mathrm{max}},\\tilde{\\mathbf{v}}_{\\mathrm{max}}\\}" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 53, + 126, + 203, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 126, + 203, + 138 + ], + "spans": [ + { + "bbox": [ + 53, + 126, + 203, + 138 + ], + "type": "text", + "content": "5: update " + }, + { + "bbox": [ + 53, + 126, + 203, + 138 + ], + "type": "inline_equation", + "content": "\\varrho_{i} = \\varrho_{i}\\bar{c}_{1}" + }, + { + "bbox": [ + 53, + 126, + 203, + 138 + ], + "type": "inline_equation", + "content": "(0 < \\bar{c}_1 < 1)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 53, + 137, + 252, 
+ 150 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 137, + 252, + 150 + ], + "spans": [ + { + "bbox": [ + 53, + 137, + 252, + 150 + ], + "type": "text", + "content": "6: until " + }, + { + "bbox": [ + 53, + 137, + 252, + 150 + ], + "type": "inline_equation", + "content": "\\sum_{i=1}^{2} \\varpi_i" + }, + { + "bbox": [ + 53, + 137, + 252, + 150 + ], + "type": "text", + "content": " falls below a threshold of " + }, + { + "bbox": [ + 53, + 137, + 252, + 150 + ], + "type": "inline_equation", + "content": "\\epsilon_1" + }, + { + "bbox": [ + 53, + 137, + 252, + 150 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 7 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 45, + 169, + 301, + 205 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 169, + 301, + 205 + ], + "spans": [ + { + "bbox": [ + 45, + 169, + 301, + 205 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 45, + 169, + 301, + 205 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{w}}_{\\mathrm{max}}" + }, + { + "bbox": [ + 45, + 169, + 301, + 205 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 45, + 169, + 301, + 205 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{v}}_{\\mathrm{max}}" + }, + { + "bbox": [ + 45, + 169, + 301, + 205 + ], + "type": "text", + "content": " represent the eigenvectors corresponding to the maximum eigenvalues of " + }, + { + "bbox": [ + 45, + 169, + 301, + 205 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{W}}" + }, + { + "bbox": [ + 45, + 169, + 301, + 205 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 45, + 169, + 301, + 205 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{V}}" + }, + { + "bbox": [ + 45, + 169, + 301, + 205 + ], + "type": "text", + "content": ", respectively. 
As a result, the problem (P5) can be transformed into" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 48, + 210, + 299, + 252 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 210, + 299, + 252 + ], + "spans": [ + { + "bbox": [ + 48, + 210, + 299, + 252 + ], + "type": "interline_equation", + "content": "\\left. \\max _ {\\tilde {\\mathbf {W}}, \\tilde {\\mathbf {V}}, \\varpi_ {i}} \\operatorname {T r} \\left(\\boldsymbol {\\eta} \\boldsymbol {\\eta} ^ {H} \\tilde {\\mathbf {V}}\\right) - \\frac {1}{2 \\varrho} \\chi_ {2} \\left(\\mathbf {x} ^ {\\mathrm {p}}, \\tilde {\\mathbf {W}}, \\tilde {\\mathbf {V}}\\right) - \\sum_ {i = 1} ^ {2} \\frac {1}{2 \\varrho_ {i}} \\varpi_ {i} \\right. \\tag {17a}", + "image_path": "7728d9717a57e81579c9d657610dd6e7a115ff27462be55f9efd10b957111971.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 90, + 255, + 299, + 269 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 90, + 255, + 299, + 269 + ], + "spans": [ + { + "bbox": [ + 90, + 255, + 299, + 269 + ], + "type": "interline_equation", + "content": "s. t. 
\\quad \\varpi_ {i} \\geq 0, i \\in \\{1, 2 \\}, \\tag {17b}", + "image_path": "125e60949eb9379d4d0a4256a223f273bd5c2502b8cd3b35781931e7a3f4910a.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 112, + 271, + 299, + 283 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 112, + 271, + 299, + 283 + ], + "spans": [ + { + "bbox": [ + 112, + 271, + 299, + 283 + ], + "type": "interline_equation", + "content": "(1 4 b) - (1 4 f), (1 6), \\tag {17c}", + "image_path": "b81ecab5ed10f5e50b0f80cbb1d090a6c5ece40f3d156fb05ea8fa3b9f0a6cd8.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 45, + 290, + 301, + 337 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 290, + 301, + 337 + ], + "spans": [ + { + "bbox": [ + 45, + 290, + 301, + 337 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 45, + 290, + 301, + 337 + ], + "type": "inline_equation", + "content": "\\varrho_{i}" + }, + { + "bbox": [ + 45, + 290, + 301, + 337 + ], + "type": "text", + "content": " denotes the scaling factor of " + }, + { + "bbox": [ + 45, + 290, + 301, + 337 + ], + "type": "inline_equation", + "content": "\\varpi_{i}" + }, + { + "bbox": [ + 45, + 290, + 301, + 337 + ], + "type": "text", + "content": ". The problem (P6) is a convex problem and can be directly solved. Thus, the rank-one solution " + }, + { + "bbox": [ + 45, + 290, + 301, + 337 + ], + "type": "inline_equation", + "content": "\\{\\tilde{\\mathbf{W}},\\tilde{\\mathbf{V}}\\}" + }, + { + "bbox": [ + 45, + 290, + 301, + 337 + ], + "type": "text", + "content": " can be obtained by carrying out the Algorithm 1." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 45, + 338, + 301, + 385 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 338, + 301, + 385 + ], + "spans": [ + { + "bbox": [ + 45, + 338, + 301, + 385 + ], + "type": "text", + "content": "2) Inner layer iteration—subproblem with respect to " + }, + { + "bbox": [ + 45, + 338, + 301, + 385 + ], + "type": "inline_equation", + "content": "\\{\\mathbf{x}^p\\}" + }, + { + "bbox": [ + 45, + 338, + 301, + 385 + ], + "type": "text", + "content": ": Note that the equality constraint " + }, + { + "bbox": [ + 45, + 338, + 301, + 385 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{W}} = \\mathbf{W}(\\mathbf{x}^{\\mathrm{p}})" + }, + { + "bbox": [ + 45, + 338, + 301, + 385 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 45, + 338, + 301, + 385 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{V}} = \\mathbf{V}(\\mathbf{x}^{\\mathrm{p}})" + }, + { + "bbox": [ + 45, + 338, + 301, + 385 + ], + "type": "text", + "content": " are equivalent to " + }, + { + "bbox": [ + 45, + 338, + 301, + 385 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{w}} = \\mathbf{w}(\\mathbf{x}^{\\mathrm{p}})" + }, + { + "bbox": [ + 45, + 338, + 301, + 385 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 45, + 338, + 301, + 385 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathbf{v}} = \\mathbf{v}(\\mathbf{x}^{\\mathrm{p}})" + }, + { + "bbox": [ + 45, + 338, + 301, + 385 + ], + "type": "text", + "content": ". 
As a result, the problem (P6) can be transformed into" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 75, + 392, + 299, + 409 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 75, + 392, + 299, + 409 + ], + "spans": [ + { + "bbox": [ + 75, + 392, + 299, + 409 + ], + "type": "interline_equation", + "content": "\\left(\\mathrm {P} 7\\right) \\min _ {\\mathbf {x} ^ {\\mathrm {p}}} \\| \\tilde {\\mathbf {w}} - \\mathbf {w} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) \\| + \\| \\tilde {\\mathbf {v}} - \\mathbf {v} \\left(\\mathbf {x} ^ {\\mathrm {p}}\\right) \\| \\tag {18a}", + "image_path": "9d78e1ec19a9f92b9b4eda7479a5b49415daeb1595852238ac94459470d4d289.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 105, + 411, + 299, + 422 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 411, + 299, + 422 + ], + "spans": [ + { + "bbox": [ + 105, + 411, + 299, + 422 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\text {s . t .} \\quad (1 1 c). 
\\end{array} \\tag {18b}", + "image_path": "fc88d6bd051c2af7cce7c4f4c8ed0497ab6d465a2608c8069bec72ec5fc0122b.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 45, + 429, + 301, + 502 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 429, + 301, + 502 + ], + "spans": [ + { + "bbox": [ + 45, + 429, + 301, + 502 + ], + "type": "text", + "content": "It is easy to notice that " + }, + { + "bbox": [ + 45, + 429, + 301, + 502 + ], + "type": "inline_equation", + "content": "x_{n}^{\\mathrm{p}}" + }, + { + "bbox": [ + 45, + 429, + 301, + 502 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 45, + 429, + 301, + 502 + ], + "type": "inline_equation", + "content": "x_{m}^{\\mathrm{p}}" + }, + { + "bbox": [ + 45, + 429, + 301, + 502 + ], + "type": "text", + "content": " (" + }, + { + "bbox": [ + 45, + 429, + 301, + 502 + ], + "type": "inline_equation", + "content": "n \\neq m" + }, + { + "bbox": [ + 45, + 429, + 301, + 502 + ], + "type": "text", + "content": ") are separated in the objective function but coupled in the constraint (11c), which motivates us to adopt the elementwise optimization framework. 
Therefore, with the fixed " + }, + { + "bbox": [ + 45, + 429, + 301, + 502 + ], + "type": "inline_equation", + "content": "\\{x_{1}^{\\mathrm{p}}, \\dots, x_{n-1}^{\\mathrm{p}}, x_{n+1}^{\\mathrm{p}}, \\dots, x_{N}^{\\mathrm{p}}\\}" + }, + { + "bbox": [ + 45, + 429, + 301, + 502 + ], + "type": "text", + "content": ", the subproblem with respect to " + }, + { + "bbox": [ + 45, + 429, + 301, + 502 + ], + "type": "inline_equation", + "content": "x_{n}^{\\mathrm{p}}" + }, + { + "bbox": [ + 45, + 429, + 301, + 502 + ], + "type": "text", + "content": " is given by" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 69, + 506, + 299, + 571 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 506, + 299, + 571 + ], + "spans": [ + { + "bbox": [ + 69, + 506, + 299, + 571 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\min _ {x _ {n} ^ {\\mathrm {p}}} \\left| \\tilde {\\mathbf {w}} _ {[ n ]} - \\frac {e ^ {- J \\left(\\frac {2 \\pi}{\\lambda} r _ {n} ^ {\\mathrm {c}} \\left(r _ {\\mathrm {c}} , \\varphi_ {\\mathrm {c}}\\right) + \\theta_ {n}\\right)}}{\\sqrt {N} r _ {n} ^ {\\mathrm {c}} \\left(r _ {\\mathrm {c}}, \\varphi_ {\\mathrm {c}}\\right)} \\right| (P8) \\\\ + \\left| \\tilde {\\mathbf {v}} _ {[ n ]} - \\frac {e ^ {- \\mathcal {I} \\left(\\frac {2 \\pi}{\\lambda} r _ {n} ^ {s} \\left(r _ {\\mathrm {s}} , \\varphi_ {\\mathrm {s}}\\right) + \\theta_ {n}\\right)}}{\\sqrt {N} r _ {n} ^ {s} \\left(r _ {\\mathrm {s}}, \\varphi_ {\\mathrm {s}}\\right)} \\right| (19a) \\\\ \\end{array}", + "image_path": "738c67dc230a6463e0aaa2231528a0df69d3281e4a812677f0af52b3e435023f.jpg" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 97, + 572, + 299, + 586 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 97, + 572, + 299, + 586 + ], + "spans": [ + { + "bbox": [ + 97, + 572, + 299, + 586 + ], + "type": "interline_equation", + "content": "s. t. 
\\quad x _ {n - 1} ^ {\\mathrm {p}} + \\Delta x \\leq x _ {n} ^ {\\mathrm {p}} \\leq x _ {n + 1} ^ {\\mathrm {p}} - \\Delta x, \\tag {19b}", + "image_path": "9a7a4e4a13983c89a192c84035e1138e19a154cabf8bcb102d82a4421ba45dc7.jpg" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 120, + 586, + 299, + 609 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 120, + 586, + 299, + 609 + ], + "spans": [ + { + "bbox": [ + 120, + 586, + 299, + 609 + ], + "type": "interline_equation", + "content": "\\frac {- L}{2} \\leq x _ {n} ^ {\\mathrm {p}} \\leq \\frac {L}{2}, \\tag {19c}", + "image_path": "d6cf670b12c569359e13553e7b36196518d22fc61de53e9ba41e12e70daae207.jpg" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 45, + 614, + 299, + 637 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 614, + 299, + 637 + ], + "spans": [ + { + "bbox": [ + 45, + 614, + 299, + 637 + ], + "type": "text", + "content": "Then, the optimal " + }, + { + "bbox": [ + 45, + 614, + 299, + 637 + ], + "type": "inline_equation", + "content": "x_{n}^{\\mathrm{p}}" + }, + { + "bbox": [ + 45, + 614, + 299, + 637 + ], + "type": "text", + "content": " can be obtained by the low-complexity one-dimensional search." 
+ } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 45, + 639, + 301, + 663 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 639, + 301, + 663 + ], + "spans": [ + { + "bbox": [ + 45, + 639, + 301, + 663 + ], + "type": "text", + "content": "3) Outer layer iteration: In the outer layer, we initialise a large " + }, + { + "bbox": [ + 45, + 639, + 301, + 663 + ], + "type": "inline_equation", + "content": "\\varrho" + }, + { + "bbox": [ + 45, + 639, + 301, + 663 + ], + "type": "text", + "content": " and update " + }, + { + "bbox": [ + 45, + 639, + 301, + 663 + ], + "type": "inline_equation", + "content": "\\varrho" + }, + { + "bbox": [ + 45, + 639, + 301, + 663 + ], + "type": "text", + "content": " at each outer iteration by" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 154, + 670, + 299, + 682 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 154, + 670, + 299, + 682 + ], + "spans": [ + { + "bbox": [ + 154, + 670, + 299, + 682 + ], + "type": "interline_equation", + "content": "\\varrho = \\varrho \\bar {c} _ {2}, \\tag {20}", + "image_path": "6e7fc8ce43c7aeb93bd7dd52f2b2f2b5d00e9a13e186aafbe76cd4095d51c4a9.jpg" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 45, + 689, + 299, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 689, + 299, + 723 + ], + "spans": [ + { + "bbox": [ + 45, + 689, + 299, + 723 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 45, + 689, + 299, + 723 + ], + "type": "inline_equation", + "content": "0 < \\bar{c}_2 < 1" + }, + { + "bbox": [ + 45, + 689, + 299, + 723 + ], + "type": "text", + "content": " is the iteration coefficient of the penalty terms. The penalty-based AO algorithm is summarized in Algorithm 2." 
+ } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 45, + 724, + 301, + 749 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 724, + 301, + 749 + ], + "spans": [ + { + "bbox": [ + 45, + 724, + 301, + 749 + ], + "type": "text", + "content": "The proposed penalty-based AO algorithm is summarized in Algorithm 2, which is assured to converge at least to a" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 310, + 53, + 487, + 64 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 310, + 53, + 487, + 64 + ], + "spans": [ + { + "bbox": [ + 310, + 53, + 487, + 64 + ], + "type": "text", + "content": "Algorithm 2 Penalty-based AO algorithm." + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 316, + 91, + 356, + 101 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 91, + 356, + 101 + ], + "spans": [ + { + "bbox": [ + 316, + 91, + 356, + 101 + ], + "type": "text", + "content": "2: repeat" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 316, + 102, + 366, + 113 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 102, + 366, + 113 + ], + "spans": [ + { + "bbox": [ + 316, + 102, + 366, + 113 + ], + "type": "text", + "content": "3: repeat" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 316, + 66, + 563, + 186 + ], + "type": "list", + "angle": 0, + "index": 35, + "blocks": [ + { + "bbox": [ + 315, + 66, + 563, + 90 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 66, + 563, + 90 + ], + "spans": [ + { + "bbox": [ + 315, + 66, + 563, + 90 + ], + "type": "text", + "content": "1: Parameter Initialization. 
Set the convergence accuracy " + }, + { + "bbox": [ + 315, + 66, + 563, + 90 + ], + "type": "inline_equation", + "content": "\\epsilon_{2}" + }, + { + "bbox": [ + 315, + 66, + 563, + 90 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 315, + 66, + 563, + 90 + ], + "type": "inline_equation", + "content": "\\epsilon_{3}" + }, + { + "bbox": [ + 315, + 66, + 563, + 90 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 316, + 114, + 533, + 125 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 114, + 533, + 125 + ], + "spans": [ + { + "bbox": [ + 316, + 114, + 533, + 125 + ], + "type": "text", + "content": "4: update " + }, + { + "bbox": [ + 316, + 114, + 533, + 125 + ], + "type": "inline_equation", + "content": "\\{\\tilde{\\mathbf{w}},\\tilde{\\mathbf{v}}\\}" + }, + { + "bbox": [ + 316, + 114, + 533, + 125 + ], + "type": "text", + "content": " by carrying out Algorithm 1." + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 316, + 126, + 533, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 126, + 533, + 138 + ], + "spans": [ + { + "bbox": [ + 316, + 126, + 533, + 138 + ], + "type": "text", + "content": "5: update " + }, + { + "bbox": [ + 316, + 126, + 533, + 138 + ], + "type": "inline_equation", + "content": "\\mathbf{x}_{\\mathrm{p}}" + }, + { + "bbox": [ + 316, + 126, + 533, + 138 + ], + "type": "text", + "content": " via the element-wise optimization." 
+ } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 316, + 138, + 563, + 161 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 138, + 563, + 161 + ], + "spans": [ + { + "bbox": [ + 316, + 138, + 563, + 161 + ], + "type": "text", + "content": "6: until the objective value converges with an accuracy of " + }, + { + "bbox": [ + 316, + 138, + 563, + 161 + ], + "type": "inline_equation", + "content": "\\epsilon_{2}" + }, + { + "bbox": [ + 316, + 138, + 563, + 161 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 316, + 162, + 459, + 173 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 162, + 459, + 173 + ], + "spans": [ + { + "bbox": [ + 316, + 162, + 459, + 173 + ], + "type": "text", + "content": "7: update " + }, + { + "bbox": [ + 316, + 162, + 459, + 173 + ], + "type": "inline_equation", + "content": "\\varrho = \\varrho \\bar{c}_2" + }, + { + "bbox": [ + 316, + 162, + 459, + 173 + ], + "type": "inline_equation", + "content": "(0 < \\bar{c}_2 < 1)" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 316, + 173, + 523, + 186 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 173, + 523, + 186 + ], + "spans": [ + { + "bbox": [ + 316, + 173, + 523, + 186 + ], + "type": "text", + "content": "8: until " + }, + { + "bbox": [ + 316, + 173, + 523, + 186 + ], + "type": "inline_equation", + "content": "\\| \\tilde{\\mathbf{W}} -\\mathbf{W}(\\mathbf{x}^{\\mathrm{p}})\\|_{F} + \\| \\tilde{\\mathbf{V}} -\\mathbf{V}(\\mathbf{x}^{\\mathrm{p}})\\|_{F}\\leq \\epsilon_{3}" + } + ] + } + ], + "index": 34 + } + ], + "sub_type": "text" + }, + { + "type": "image", + "bbox": [ + 348, + 212, + 517, + 348 + ], + "blocks": [ + { + "bbox": [ + 348, + 212, + 517, + 348 + ], + "lines": [ + { + "bbox": [ + 348, + 212, + 517, + 348 + ], + "spans": [ + { + "bbox": [ + 348, + 212, + 517, + 348 + ], + "type": "image", + "image_path": 
"61356ba48d70041d60a1887554e812cf3b19b73e0c0702746c81209c9ff71552.jpg" + } + ] + } + ], + "index": 36, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 309, + 357, + 538, + 367 + ], + "lines": [ + { + "bbox": [ + 309, + 357, + 538, + 367 + ], + "spans": [ + { + "bbox": [ + 309, + 357, + 538, + 367 + ], + "type": "text", + "content": "Fig. 2. The illumination power versus the transmit power at the BS." + } + ] + } + ], + "index": 37, + "angle": 0, + "type": "image_caption" + } + ], + "index": 36 + }, + { + "bbox": [ + 308, + 380, + 564, + 454 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 380, + 564, + 454 + ], + "spans": [ + { + "bbox": [ + 308, + 380, + 564, + 454 + ], + "type": "text", + "content": "stationary point solution. The computational complexity of Algorithm 2 mainly depends on solving the SDP problems (P6) and the one-dimensional exhaustive search. It is given by " + }, + { + "bbox": [ + 308, + 380, + 564, + 454 + ], + "type": "inline_equation", + "content": "\\mathcal{O}\\Big(\\log (\\frac{1}{\\epsilon_3})\\log (\\frac{1}{\\epsilon_2})\\big[\\log (\\frac{1}{\\epsilon_1})N^{3.5} + N\\bar{Q}\\big]\\Big)" + }, + { + "bbox": [ + 308, + 380, + 564, + 454 + ], + "type": "text", + "content": " [10], where " + }, + { + "bbox": [ + 308, + 380, + 564, + 454 + ], + "type": "inline_equation", + "content": "\\bar{Q}" + }, + { + "bbox": [ + 308, + 380, + 564, + 454 + ], + "type": "text", + "content": " represents the number of the quantization bits during the one-dimensional exhaustive search." + } + ] + } + ], + "index": 38 + }, + { + "bbox": [ + 378, + 475, + 493, + 485 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 378, + 475, + 493, + 485 + ], + "spans": [ + { + "bbox": [ + 378, + 475, + 493, + 485 + ], + "type": "text", + "content": "IV. 
NUMERICAL RESULTS" + } + ] + } + ], + "index": 39 + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "spans": [ + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "text", + "content": "This section evaluates the performance of the proposed PASS-ISAC framework. A 3D topological network setup is considered, where the dielectric waveguide is located in the x-o-z plane with a height of " + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "inline_equation", + "content": "d" + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "text", + "content": " and a length of " + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "inline_equation", + "content": "50\\mathrm{m}" + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "text", + "content": ". The communicating user and the sensing target are located in a square region centered at the origin in the x-o-y plane. 
Unless otherwise specified, the default simulation parameters are set as: " + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "inline_equation", + "content": "\\sigma^2 = -105" + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "text", + "content": " dBm, " + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "inline_equation", + "content": "f = 28" + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "text", + "content": " GHz, " + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "inline_equation", + "content": "d = 10\\mathrm{m}" + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "inline_equation", + "content": "r_{\\mathrm{s}} = 30\\mathrm{m}" + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "inline_equation", + "content": "\\varphi_{\\mathrm{s}} = \\frac{\\pi}{3}" + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "inline_equation", + "content": "r_{\\mathrm{c}} = 15\\sqrt{2}\\mathrm{m}" + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "inline_equation", + "content": "\\varphi_{\\mathrm{c}} = \\frac{5\\pi}{4}" + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "inline_equation", + "content": "N = 16" + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "inline_equation", + "content": "\\eta_{\\mathrm{eff}} = 1.4" + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "text", + "content": ", " + }, + { 
+ "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "inline_equation", + "content": "R_{\\mathrm{QoS}} = 10" + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "text", + "content": " bps/Hz, " + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "inline_equation", + "content": "\\epsilon_1 = \\epsilon_2 = \\epsilon_3 = \\epsilon_4 = 10^{-3}" + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "inline_equation", + "content": "\\alpha_{\\mathrm{s}} = 1" + }, + { + "bbox": [ + 308, + 493, + 564, + 635 + ], + "type": "text", + "content": ". The other network parameters are shown in the captions of the figures." + } + ] + } + ], + "index": 40 + }, + { + "bbox": [ + 308, + 637, + 563, + 662 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 637, + 563, + 662 + ], + "spans": [ + { + "bbox": [ + 308, + 637, + 563, + 662 + ], + "type": "text", + "content": "To validate the performance of the proposed scheme, the following baseline schemes are considered in this paper:" + } + ] + } + ], + "index": 41 + }, + { + "bbox": [ + 318, + 665, + 564, + 747 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 318, + 665, + 564, + 747 + ], + "spans": [ + { + "bbox": [ + 318, + 665, + 564, + 747 + ], + "type": "text", + "content": "- Conventional antenna: In this scheme, we deploy " + }, + { + "bbox": [ + 318, + 665, + 564, + 747 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 318, + 665, + 564, + 747 + ], + "type": "text", + "content": " conventional uniform linear array (ULA) at the BS as the transmitting antenna with an antenna spacing of " + }, + { + "bbox": [ + 318, + 665, + 564, + 747 + ], + "type": "inline_equation", + "content": "\\frac{\\lambda}{2}" + }, + { + "bbox": [ + 318, + 665, + 564, + 747 + ], + "type": "text", + "content": ". 
For a fair comparison, the transmitting antennas are connected to one RF chain and each antenna is associated with an analog phase shifter, which can be varied from 0 to " + }, + { + "bbox": [ + 318, + 665, + 564, + 747 + ], + "type": "inline_equation", + "content": "2\\pi" + }, + { + "bbox": [ + 318, + 665, + 564, + 747 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 42 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "spans": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 0 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 85, + 63, + 253, + 198 + ], + "blocks": [ + { + "bbox": [ + 85, + 63, + 253, + 198 + ], + "lines": [ + { + "bbox": [ + 85, + 63, + 253, + 198 + ], + "spans": [ + { + "bbox": [ + 85, + 63, + 253, + 198 + ], + "type": "image", + "image_path": "4eb38f1e6d1bd663e4b738b0db05d681352bb5024fd01a86624fecd3fbd0f774.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 45, + 207, + 299, + 227 + ], + "lines": [ + { + "bbox": [ + 45, + 207, + 299, + 227 + ], + "spans": [ + { + "bbox": [ + 45, + 207, + 299, + 227 + ], + "type": "text", + "content": "Fig. 3. The illumination power versus the rotation angle of the dielectric waveguide, where " + }, + { + "bbox": [ + 45, + 207, + 299, + 227 + ], + "type": "inline_equation", + "content": "P_{\\mathrm{T}} = 70" + }, + { + "bbox": [ + 45, + 207, + 299, + 227 + ], + "type": "text", + "content": " dBm."
+ } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 232, + 301, + 387 + ], + "type": "list", + "angle": 0, + "index": 5, + "blocks": [ + { + "bbox": [ + 56, + 232, + 301, + 290 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 232, + 301, + 290 + ], + "spans": [ + { + "bbox": [ + 56, + 232, + 301, + 290 + ], + "type": "text", + "content": "- Fixed pinching antenna: In this scheme, " + }, + { + "bbox": [ + 56, + 232, + 301, + 290 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 56, + 232, + 301, + 290 + ], + "type": "text", + "content": " pinching antennas are uniformly spread along the dielectric waveguide, where the in-waveguide and free-space channels are determined by the fixed positions of the pinching antennas." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 292, + 301, + 387 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 292, + 301, + 387 + ], + "spans": [ + { + "bbox": [ + 56, + 292, + 301, + 387 + ], + "type": "text", + "content": "- Semi-continuous activation: In the semi-continuous activation scheme, we assume there are " + }, + { + "bbox": [ + 56, + 292, + 301, + 387 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 56, + 292, + 301, + 387 + ], + "type": "text", + "content": " pinching antennas uniformly distributed along the dielectric waveguide, which are predetermined and cannot be changed. However, the pinching antennas are allowed to be adjusted in a small-scale range to alter the phase-shift response of the pinching beamforming, which has a negligible impact on the large-scale path loss." 
+ } ] } ], + "index": 4 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 45, + 389, + 300, + 556 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 389, + 300, + 556 + ], + "spans": [ + { + "bbox": [ + 45, + 389, + 300, + 556 + ], + "type": "text", + "content": "In Fig. 2, we can observe that the pinching antenna achieves the highest illumination power compared to the other baseline schemes. This result can be expected because, compared with the baseline schemes, pinching antennas can be flexibly repositioned to attenuate the large-scale path loss between the pinching antennas and the receiving ends. Thus, more spatial degrees-of-freedom (DoFs) are provided to favor the communication and sensing performance. On the other hand, although the semi-continuous activation scheme cannot reduce the path loss by adjusting the antenna position over a wide range, it exhibits superior performance to the conventional antenna scheme because pinching antennas are spread over the entire communication/sensing area, which are, on average, closer to the receiving ends." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 45, + 557, + 301, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 557, + 301, + 723 + ], + "spans": [ + { + "bbox": [ + 45, + 557, + 301, + 723 + ], + "type": "text", + "content": "Fig. 3 depicts the relationship between the illumination power and the number of activated pinching antennas, with a comparison of the proportional power allocation model. For a fair comparison, " + }, + { + "bbox": [ + 45, + 557, + 301, + 723 + ], + "type": "inline_equation", + "content": "\\alpha_{\\mathrm{s}} = 0.9" + }, + { + "bbox": [ + 45, + 557, + 301, + 723 + ], + "type": "text", + "content": " for both power allocation models. 
As can be observed, the illumination power increases as the number of pinching antennas increases, which is because an increasing number of pinching antennas can improve the beam resolution and reduce the power leakage in irrelevant regions, thereby raising the illumination power at the target. It is also observed that the proportional power allocation is slightly inferior to the equal power allocation model, which verifies the effectiveness of the pinching antennas based on proportional power allocation model in reconfiguring signal propagation." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 45, + 724, + 301, + 750 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 724, + 301, + 750 + ], + "spans": [ + { + "bbox": [ + 45, + 724, + 301, + 750 + ], + "type": "text", + "content": "Fig. 4 investigates the impact of the rotation angle of the dielectric waveguide on illumination power at the target." + } + ] + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 348, + 63, + 518, + 199 + ], + "blocks": [ + { + "bbox": [ + 348, + 63, + 518, + 199 + ], + "lines": [ + { + "bbox": [ + 348, + 63, + 518, + 199 + ], + "spans": [ + { + "bbox": [ + 348, + 63, + 518, + 199 + ], + "type": "image", + "image_path": "2b04b8b5996b5c98797a156b763cfc8aee51bdb00db00b59b8d2b415c188a3e0.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 308, + 207, + 563, + 227 + ], + "lines": [ + { + "bbox": [ + 308, + 207, + 563, + 227 + ], + "spans": [ + { + "bbox": [ + 308, + 207, + 563, + 227 + ], + "type": "text", + "content": "Fig. 4. The illumination power versus the rotation angle of the dielectric waveguide, where " + }, + { + "bbox": [ + 308, + 207, + 563, + 227 + ], + "type": "inline_equation", + "content": "P_{\\mathrm{T}} = 70" + }, + { + "bbox": [ + 308, + 207, + 563, + 227 + ], + "type": "text", + "content": " dBm." 
+ } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "bbox": [ + 307, + 232, + 564, + 399 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 232, + 564, + 399 + ], + "spans": [ + { + "bbox": [ + 307, + 232, + 564, + 399 + ], + "type": "text", + "content": "Here, we assume the dielectric waveguide can be rotated in a clockwise direction parallel to the x-o-y plane, where the rotation angle is defined as the angle entwined by the dielectric waveguide and the x-axis. From Fig. 4, it is shown that the illumination power first increases and then decreases as the rotation angle grows. This is due to the fact that when the rotation angle is " + }, + { + "bbox": [ + 307, + 232, + 564, + 399 + ], + "type": "inline_equation", + "content": "60^{\\circ}" + }, + { + "bbox": [ + 307, + 232, + 564, + 399 + ], + "type": "text", + "content": ", the target is located underneath the dielectric waveguide, and it receives the maximal illumination power. As the rotation angle further rises, the distance between the target and the pinching antenna becomes large, so the illumination power gradually decreases. In addition, raising the height of the dielectric waveguide increases the average distance from the pinching antennas to the user and target, thus, the illumination power decreases as " + }, + { + "bbox": [ + 307, + 232, + 564, + 399 + ], + "type": "inline_equation", + "content": "d" + }, + { + "bbox": [ + 307, + 232, + 564, + 399 + ], + "type": "text", + "content": " increases." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 399, + 416, + 473, + 427 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 399, + 416, + 473, + 427 + ], + "spans": [ + { + "bbox": [ + 399, + 416, + 473, + 427 + ], + "type": "text", + "content": "V. 
CONCLUSION" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 308, + 433, + 564, + 540 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 433, + 564, + 540 + ], + "spans": [ + { + "bbox": [ + 308, + 433, + 564, + 540 + ], + "type": "text", + "content": "A novel PASS-ISAC framework has been proposed, where the pinching beamforming was exploited to realize the simultaneous C&S transmission. A separated ISAC design was proposed for the two-waveguide PASS. A penalty-based AO algorithm was proposed to maximize the illumination power at the target while guaranteeing the QoS requirement of the communication user. Simulation results were provided to verify the superiority of the proposed PASS-ISAC framework over the other baseline schemes." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 408, + 548, + 468, + 559 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 408, + 548, + 468, + 559 + ], + "spans": [ + { + "bbox": [ + 408, + 548, + 468, + 559 + ], + "type": "text", + "content": "REFERENCES" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 314, + 567, + 564, + 747 + ], + "type": "list", + "angle": 0, + "index": 22, + "blocks": [ + { + "bbox": [ + 315, + 567, + 564, + 594 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 567, + 564, + 594 + ], + "spans": [ + { + "bbox": [ + 315, + 567, + 564, + 594 + ], + "type": "text", + "content": "[1] L. Zhu, W. Ma, and R. Zhang, \"Movable antennas for wireless communication: Opportunities and challenges,\" IEEE Commun. Mag., vol. 62, no. 6, pp. 114-120, Jun. 2024." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 315, + 594, + 564, + 620 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 594, + 564, + 620 + ], + "spans": [ + { + "bbox": [ + 315, + 594, + 564, + 620 + ], + "type": "text", + "content": "[2] W. K. New, K.-K. 
Wong et al., \"A tutorial on fluid antenna system for 6G networks: Encompassing communication theory, optimization methods and hardware designs,\" IEEE Commun. Surv. Tut., pp. 1-1, 2024." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 314, + 621, + 564, + 647 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 621, + 564, + 647 + ], + "spans": [ + { + "bbox": [ + 314, + 621, + 564, + 647 + ], + "type": "text", + "content": "[3] A. Fukuda, H. Yamamoto, H. Okazaki, Y. Suzuki, and K. Kawai, \"Pinching antenna: Using a dielectric waveguide as an antenna,\" NTT DOCOMO Technical J., vol. 23, no. 3, pp. 5-12, Jan. 2022." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 314, + 647, + 563, + 666 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 647, + 563, + 666 + ], + "spans": [ + { + "bbox": [ + 314, + 647, + 563, + 666 + ], + "type": "text", + "content": "[4] Z. Ding, R. Schober, and H. Vincent Poor, \"Flexible-antenna systems: A pinching-antenna perspective,\" IEEE Trans. Commun., pp. 1-1, 2025." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 314, + 666, + 563, + 693 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 666, + 563, + 693 + ], + "spans": [ + { + "bbox": [ + 314, + 666, + 563, + 693 + ], + "type": "text", + "content": "[5] F. Liu, Y. Cui et al., \"Integrated sensing and communications: Toward dual-functional wireless networks for 6G and beyond,\" IEEE J. Sel. Areas Commun., vol. 40, no. 6, pp. 1728-1767, Jun. 2022." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 314, + 693, + 563, + 720 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 693, + 563, + 720 + ], + "spans": [ + { + "bbox": [ + 314, + 693, + 563, + 720 + ], + "type": "text", + "content": "[6] Y. Liu, Z. Wang, X. Mu, C. Ouyang, X. Xu, and Z. 
Ding, “Pinching antenna systems (PASS): Architecture designs, opportunities, and outlook,” arXiv preprint arXiv:2501.18409, 2025." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 314, + 720, + 563, + 747 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 720, + 563, + 747 + ], + "spans": [ + { + "bbox": [ + 314, + 720, + 563, + 747 + ], + "type": "text", + "content": "[7] Z. Wang, C. Ouyang, X. Mu, Y. Liu, and Z. Ding, \"Modeling and beamforming optimization for pinching-antenna systems,\" arXiv preprint arXiv:2502.05917, 2025." + } + ] + } + ], + "index": 21 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "spans": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 0 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 48, + 57, + 300, + 138 + ], + "type": "list", + "angle": 0, + "index": 4, + "blocks": [ + { + "bbox": [ + 50, + 57, + 299, + 83 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 50, + 57, + 299, + 83 + ], + "spans": [ + { + "bbox": [ + 50, + 57, + 299, + 83 + ], + "type": "text", + "content": "[8] W. Hao, H. Shi et al., \"Joint beamforming design for active RIS-aided THz ISAC systems with delay alignment modulation,\" IEEE Wireless Communications Letters, vol. 12, no. 10, pp. 1816-1820, Oct. 2023." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 50, + 84, + 300, + 111 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 50, + 84, + 300, + 111 + ], + "spans": [ + { + "bbox": [ + 50, + 84, + 300, + 111 + ], + "type": "text", + "content": "[9] T. Jiang and Y. Shi, \"Over-the-air computation via intelligent reflecting surfaces,\" in Proc. IEEE Global Commun. Conf. (GLOBECOM), Waikoloa, HI, USA. Dec. 
2019, pp. 1-6." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 48, + 111, + 299, + 138 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 111, + 299, + 138 + ], + "spans": [ + { + "bbox": [ + 48, + 111, + 299, + 138 + ], + "type": "text", + "content": "[10] Z.-Q. Luo, W.-K. Ma, A. M.-C. So, Y. Ye, and S. Zhang, “Semidefinite relaxation of quadratic optimization problems,” IEEE Signal Process. Mag., vol. 27, no. 3, pp. 20-34, May. 2010." + } + ] + } + ], + "index": 3 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 558, + 25, + 563, + 31 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 558, + 25, + 563, + 31 + ], + "spans": [ + { + "bbox": [ + 558, + 25, + 563, + 31 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 0 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07745/dc771de3-3dba-4b91-9d66-c6d31ae45ee8_content_list.json b/data/2025/2504_07xxx/2504.07745/dc771de3-3dba-4b91-9d66-c6d31ae45ee8_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..af76f527b1e8b87b947795ffb7a0aed14b1ac86e --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/dc771de3-3dba-4b91-9d66-c6d31ae45ee8_content_list.json @@ -0,0 +1,1820 @@ +[ + { + "type": "text", + "text": "$\\mathbf{SF}^2 \\mathbf{T}$ : Self-supervised Fragment Finetuning of Video-LLMs for Fine-Grained Understanding", + "text_level": 1, + "bbox": [ + 109, + 128, + 885, + 175 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Yangliu Hu $^{1}$ , Zikai Song $^{1\\dagger}$ , Na Feng $^{1}$ , Yawei Luo $^{2}$ , Junqing Yu $^{1}$ , Yi-Ping Phoebe Chen $^{3}$ , Wei Yang $^{1\\dagger}$ $^{1}$ Huazhong University of Science and Technology $^{2}$ Zhejiang University $^{3}$ La Trobe University", + "bbox": [ + 101, + 202, + 890, + 243 + ], + 
"page_idx": 0 + }, + { + "type": "text", + "text": "{huyangliu,skyesong,fengna,yjqing,weiyangcs}@hust.edu.cn", + "bbox": [ + 251, + 244, + 746, + 260 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "yaweiluo@zju.edu.cn phoebe.chen@latrobe.edu.au", + "bbox": [ + 285, + 263, + 705, + 277 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 246, + 311, + 326, + 328 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Video-based Large Language Models (Video-LLMs) have witnessed substantial advancements in recent years, propelled by advances in multi-modal LLMs. Although these models have demonstrated proficiency in providing the overall description of videos, they struggle with fine-grained understanding, particularly in aspects such as visual dynamics and video details inquiries. To tackle these shortcomings, we find that fine-tuning Video-LLMs on self-supervised fragment tasks greatly improves their fine-grained video understanding abilities. Hence we propose two key contributions: (1) Self-Supervised Fragment Fine-Tuning $(SF^2 T)$, a novel effortless fine-tuning method, employs the rich inherent characteristics of videos for training, while unlocking more fine-grained understanding ability of Video-LLMs. Moreover, it relieves researchers from labor-intensive annotations and smartly circumvents the limitations of natural language, which often fails to capture the complex spatiotemporal variations in videos; (2) A novel benchmark dataset, namely FineVidBench, for rigorously assessing Video-LLMs' performance at both the scene and fragment levels, offering a comprehensive evaluation of their capabilities. We assessed multiple models and validated the effectiveness of $SF^2 T$ on them. 
Experimental results reveal that our approach improves their ability to capture and interpret spatiotemporal details.", + "bbox": [ + 88, + 359, + 485, + 736 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 89, + 763, + 220, + 779 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Large Language Models (LLMs) have showcased significant emergent capabilities, such as in-context learning [19], instruction-following [23], and chain-of-thought reasoning [30], driven by expansive datasets and advanced model architectures. Extending these advancements, Video-LLMs through mechanisms like pooling or query aggregation", + "bbox": [ + 89, + 787, + 483, + 878 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/5181414074d914e281c8b31ab29fd933ae170f5b1996bf40ca9a481d714dd227.jpg", + "image_caption": [ + "Figure 1. Performance w/ and w/o $\\mathbf{SF}^2\\mathbf{T}$ . We evaluated four advanced Video-LLMs w/ and w/o $\\mathrm{SF}^2\\mathrm{T}$ on our proposed FineVidBench with two baselines: (1) Base: performance without any fine-tuning (blue dashed), and (2) Base (SFT): performance with supervised fine-tuning (red dashed). After applying $\\mathrm{SF}^2\\mathrm{T}$ , all models showed significant improvements (solid blue and red), underscoring its broad effectiveness." + ], + "image_footnote": [], + "bbox": [ + 535, + 311, + 883, + 542 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "across numerous visual tokens, have broadened the scope of LLMs to encompass video information processing [11, 14, 35]. 
This evolution markedly advances their potential for in-depth real-world comprehension, opening applications in intelligent surveillance, virtual reality, and autonomous driving, further enriching the landscape of video analytics and interpretation.", + "bbox": [ + 511, + 684, + 906, + 790 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Various Video-LLMs, exemplified by GPT4-V, VideoLLaMA 2 [4], MiniCPM-V [34], and Qwen2-VL [28], have been crafted by leading corporations and research institutions, demonstrating proficiency in capturing the overarching content of videos. When adapting to new videos and tasks, they predominantly rely on Supervised Fine-Tuning (SFT) [26] or Reinforcement Learning from Hu", + "bbox": [ + 511, + 794, + 908, + 902 + ], + "page_idx": 0 + }, + { + "type": "aside_text", + "text": "arXiv:2504.07745v1 [cs.CV] 10 Apr 2025", + "bbox": [ + 22, + 262, + 60, + 705 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "† Corresponding authors", + "bbox": [ + 114, + 887, + 246, + 900 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "
man Feedback (RLHF) [39], both of which are heavily contingent upon extensive manual annotation. This dependence poses several key problems: (1) it necessitates substantial human resources, particularly highly trained annotators; (2) the inherent complexity of video content and task demands frequently introduces inconsistencies and subjectivity, rendering the maintenance of high-quality annotations particularly arduous; and (3) subtle temporal variations across video frames are challenging to articulate with precision, often yielding generalized descriptions that constrain the Video-LLMs' potential. Consequently, existing Video-LLMs struggle with fine-grained video understanding tasks, particularly in aspects such as visual dynamics (e.g., motion patterns, object interactions) and video details inquiries (e.g., positional changes, detail variations).", + "bbox": [ + 89, + 90, + 480, + 316 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To address these challenges, we observe that finetuning Video-LLMs with self-supervised fragment tasks (by \"fragment\" we mean temporal frame-level specifications of the video) could improve the model's sensitivity to spatiotemporal scene-level details (related to video contents). Driven by this, we introduce the Self-supervised Fragment Fine-Tuning $(\mathrm{SF}^2\mathrm{T})$, an effortless fine-tuning strategy for Video-LLMs that helps to improve fine-grained video understanding. $\mathrm{SF}^2\mathrm{T}$ consists of five fragment-level tasks—Counting, Consistency Verification, Localization, Disorder Detection and Rearrangement—that automatically generate labels from various spatiotemporal perspectives. This approach maximizes the use of frame-level information while minimizing reliance on complex human instructions and annotations.", + "bbox": [ + 89, + 316, + 482, + 542 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Moreover, to evaluate the fine-grained visual dynamic perception of Video-LLMs and fully demonstrate the effectiveness of our $\mathrm{SF}^2\mathrm{T}$, we present FineVidBench, a novel benchmark. FineVidBench comprises 910 videos and 22,718 question-answer pairs, with videos sourced from diverse public datasets, including Something-Something V2 (SSv2) [6], Moments in Time (MiT) [21], etc. The question-answer pairs are auto-generated in single-choice format, incorporating distractors to increase testing difficulty. 
We evaluated several notable Video-LLMs developed in recent years, and find that they generally fail to understand the execution sequence of actions and struggle to grasp fine-grained spatiotemporal information. After fine-tuning with $\mathrm{SF}^2\mathrm{T}$, however, the Video-LLMs better recognize spatiotemporal details, leading to a holistic and marked improvement in fine-grained understanding.", + "bbox": [ + 89, + 544, + 482, + 785 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related Work", + "text_level": 1, + "bbox": [ + 89, + 799, + 232, + 814 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Video-LLMs Finetuning Video-LLMs are primarily finetuned by adjusting the parameters of small, trainable adapters for task adaptation, without changing the entire model, saving resources and enhancing efficiency. The connective adapter (e.g., MLP/Linear Layer [15], Q", + "bbox": [ + 89, + 824, + 480, + 901 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "former [10]) links the Video Embedder and LLM, aligning video embeddings with LLM input tokens, while insertive adapters (e.g., LoRA [8]) are directly integrated into the LLM to modify its behavior. Most Video-LLMs combine both types of adapters and typically use multi-stage finetuning [4, 11, 13, 24, 35]. First, the model learns to establish relationships between images, videos, and text using large-scale multimodal datasets [1, 2, 29, 31]. In the second stage, the model is fine-tuned with a curated instruction-following dataset [11, 17, 18]. Besides, some approaches use full finetuning, which updates all LLM parameters with a lower learning rate [25, 33], while zero-shot models transform the video task into a text task, typically relying on a powerful LLM [32]. 
However, annotating video data remains a labor-intensive and time-consuming task, particularly for long videos or those involving complex actions.", + "bbox": [ + 511, + 90, + 903, + 330 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Benchmarks on Video-LLMs Currently, many studies [3, 5, 38] focus on evaluating the temporal perception capabilities of Video-LLMs. MVBench [12] designs 20 tasks from temporal and spatial perspectives, and Tempcompass [16] introduces 5 temporal aspects and 4 task formats. VN-Bench [36] decouples video content from the QA pairs by inserting irrelevant images or text \"needles\" into the original video. Moment-10M [22] has constructed a large-scale dataset on temporal localization tasks. However, as illustrated in Table 1, these studies often focus on gathering diverse videos or evaluating the models' performance with long videos, while somewhat neglecting the models' ability to perform fine-grained perception of temporal details. To address this gap, FineVidBench breaks videos into multiple sets of frames and generates annotations from diverse spatiotemporal perspectives, introducing novel evaluation methods for fine-grained understanding.", + "bbox": [ + 511, + 333, + 903, + 589 + ], + "page_idx": 1 + }, + { + "type": "table", + "img_path": "images/fa2958404e4aaaeb3d53d7c99de2d0fe6a0724dd0390a75cfc19c30ba10f8531.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
BenchmarksVideo num.QA num.Input ChangeTemporal DiversityFine-Grained EvaluationHierarchical Test
Video-MME9002700XXXX
TempCompass4107540XX
VN bench-1350XX
Moment-10M64.9k10.4MXXXX
AutoEval-Video327327XXXX
MV bench36414000XXX
MLVU13342593XXXX
FineVidBench91022,718
", + "bbox": [ + 516, + 599, + 903, + 752 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Table 1. Comparison with related benchmarks. Our approach offers significant advantages in input formats, evaluation methods, granularity, and temporal diversity.", + "bbox": [ + 511, + 762, + 903, + 805 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "3. FineVidBench Benchmark", + "text_level": 1, + "bbox": [ + 511, + 829, + 759, + 845 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "It is broadly recognized that Video-LLMs struggle with fine-grained video understanding tasks, yet no comprehensive benchmarks exist to thoroughly investigate this issue.", + "bbox": [ + 511, + 854, + 903, + 900 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To address this gap, we introduce FineVidBench, a multidimensional, fine-grained evaluation framework specifically designed to assess and improve the overall capabilities of Video-LLMs.", + "bbox": [ + 89, + 90, + 483, + 151 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Construction", + "text_level": 1, + "bbox": [ + 89, + 162, + 230, + 178 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Data collection We selected videos from various public datasets, including SS-v2 [6], MiT [21], and Ego4D [7], with a particular emphasis on temporally-sensitive content, to focus the model on the entire video sequence rather than individual frames.", + "bbox": [ + 89, + 185, + 482, + 260 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Action categorization As shown in Figure 2, we compiled 52 actions, categorizing them into 3 types based on intraclass variance. The distribution varies significantly: \"Distinctive Actions\" $(39\\%)$ are easily recognizable, encompassing a total of 36 actions. \"Non-typical Actions\" $(57\\%)$ refer to flexible actions with no clear defining characteristics, spanning 14 types. 
The broad diversity and complexity in this category require more extensive video coverage to adequately capture the range of expressions and variations. \"Slight Movements\" $(4\\%)$ represent subtle actions, such as \"hold\" and \"show\", which are difficult to detect with the naked eye and constitute a small proportion.", + "bbox": [ + 89, + 261, + 482, + 443 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Data augmentation The original videos were augmented using frame interpolation and skipping techniques for speed transformation, along with a motion-salient area sampling algorithm to capture dynamic motion. This process generated speed-varied versions and multiple sets of keyframes for each video.", + "bbox": [ + 89, + 443, + 482, + 532 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Statistics With our augmentation strategy, FineVidBench includes 910 videos, 1,820 speed-variant videos, and 2,670 sets of keyframes enriched with dynamic visual information. Building on this, we generated 22,718 QA pairs from the video content through a combination of automated processes and manual review. The quality assurance process involved rigorous cross-verification, where reviewers checked each QA pair for accuracy and contextual relevance, making corrections to ensure high quality.", + "bbox": [ + 89, + 534, + 482, + 672 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2. Benchmarking Dimensions", + "text_level": 1, + "bbox": [ + 89, + 681, + 334, + 698 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "As shown in Figure 3, FineVidBench encompasses both scene-level and fragment-level evaluations. The scene-level evaluation assesses both original and speed-adjusted videos across three dimensions: (1) Action, which evaluates the model's holistic understanding of video content. 
To increase difficulty, \"Visual Synonyms\" are added as distractors, requiring Video-LLMs to distinguish visually similar actions with subtle differences, a challenge common in real-world scenarios. (2) Effect, which focuses on the model's comprehension of the visual changes resulting from actions. This understanding is essential for revealing object properties and interpreting complex dynamic scenes, and could significantly enhance the reasoning capabilities of Video-", + "bbox": [ + 89, + 704, + 482, + 902 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/93dc7847bb0b3fc5fb2f4ee5b4155ff76f57da3f8eb0b56e88bc6dc5ef2f1340.jpg", + "image_caption": [ + "Figure 2. We show the action semantics and their respective proportions in FineVidBench. Distinctive Action: easily recognizable actions. Non-typical Action: flexible actions with no clear characteristics, like \"put\" and \"move.\" Slight Movement: subtle actions, such as \"hold\" and \"show,\" difficult to detect with the naked eye." + ], + "image_footnote": [], + "bbox": [ + 547, + 95, + 875, + 354 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "LLMs and LLM-aided agents. (3) Speed, which tests the model's sensitivity to changes in video speed and its capability to maintain consistent understanding across varying speeds, with slow motion revealing hidden details and fast motion obscuring them. This capability is crucial for optimizing the model's performance across diverse scenarios.", + "bbox": [ + 511, + 462, + 906, + 551 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "For fragment-level evaluation, we designed a structured evaluation format for video dynamic keyframes, employing a step-by-step inquiry framework: (1) Frame Count: Models are queried on the number of frames in sequences using dynamically refined keyframes to assess counting accuracy. 
(2) Meaning of Order: Understanding of sequence order is tested by asking about the first or last frames the targets appear in, or the frames in which they are present, e.g., \"At which frame does the target object first appear?\" (3) Frame Comparison: Two frames are randomly selected from the sequence for visual comparison, with differences varying in size but generally staying within human visual comfort limits. (4) Adjust-or-Not and Rearrangement: These two tasks involve a shuffled sequence of keyframes, and the model is asked to determine whether the order needs adjustment and, if so, how to correct it. They evaluate the model's ability to understand and restore the video's temporal sequence.", + "bbox": [ + 511, + 553, + 908, + 825 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.3. Benchmark Results", + "text_level": 1, + "bbox": [ + 511, + 833, + 702, + 848 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We evaluated six of the most advanced open-source models: LLaVA-NeXT-Video[9], MiniCPM-V 2.6[34], VideoLLaMA 2.1[4], Qwen2-VL[28], ShareGPT4Video [2] and", + "bbox": [ + 511, + 854, + 906, + 902 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/b3fdb1142169dfd3731bb1039d8390b91cbe26fcacef82e674e2d0655fa3f0b9.jpg", + "image_caption": [ + "※ Fragment-Level Tests ※" + ], + "image_footnote": [], + "bbox": [ + 114, + 88, + 426, + 303 + ], + "page_idx": 3 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "① How many frames?", + "A. 2 B. 3 C. 4 D. 5", + "② Which frames show the cup?", + "A. 3,4 B. 2,3,4 C. 2,3 D. 1,2,3", + "③ Are the two frames the same?", + "A. Yes, they are exactly the same", + "B. No, they are different", + "④ Should I adjust them?", + "A. Yes, they need adjustment", + "B. No, they are in the correct order", + "⑤ Which shows the correct order?", + "A. 1234 B. 2314 C. 3142 D. 
4321" + ], + "bbox": [ + 436, + 112, + 656, + 294 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/e3c884b85391f03768d80cd1d13ec65d55a292c4da3b34fb5cfd15b2051d709f.jpg", + "image_caption": [ + "Figure 3. FineVidBench evaluates videos augmented with speed variations and fragments. Scene-level tests include the following: Action: Tests recognition accuracy amidst distractors like \"Visual Synonyms\". Effect: Assesses the model's ability to identify pre- and post-action changes. Speed: Measures the model's sensitivity to changes in video speed. Fragment-level tests, employing a step-by-step inquiry framework, focus on challenges such as Frame Count, Meaning of Order, Frame Comparison, Adjust-or-Not and Rearrangement." + ], + "image_footnote": [], + "bbox": [ + 666, + 114, + 769, + 295 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/ad9426eb0e0740c744fe63a8d4e4c7810ffbfbeb6acfc863b32655e01fed85c8.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 784, + 117, + 883, + 294 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Video-CCAM [27], each employing different architectures and training strategies. Table 3 summarizes the results across the eight tasks. We discuss the results from scene-level and fragment-level.", + "bbox": [ + 89, + 396, + 483, + 457 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "- Scene-level Results and Analysis", + "text_level": 1, + "bbox": [ + 89, + 469, + 333, + 484 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Action The scores for this task varied significantly, with models trained in relevant video data—such as Video-CCAM, Qwen2-VL, and VideoLLaMA 2.1—achieving notably higher performance. 
However, as shown on the left side of Table 2, interference from \"Visual Synonyms\" prevented these models from achieving their full potential, resulting in declines of varying degrees and indicating difficulties in distinguishing visually similar actions.", + "bbox": [ + 89, + 489, + 483, + 609 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Effect All models exhibited average performance on this task, indicating a superficial understanding of aspects such as object attributes, object relationships, and action properties. This task tests the model's ability to grasp how actions affect objects, focusing on causal relationships and temporal reasoning—particularly for actions like \"push\" and \"pull\", which share similar execution flows. The model must distinguish them based on dynamic effects, such as changes in direction and speed, but most models perform moderately in this regard.", + "bbox": [ + 89, + 612, + 483, + 763 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Speed The results show that all models are insensitive to speed variations, likely because they were not adequately exposed to speed changes during training. Figure 4 shows that models are more sensitive to slow motion than fast playback, and struggled with identifying \"normal speed\" and \"no speed\", except for VideoLLaMA 2.1. This may be due to the loss of coherence in fast-moving video content, while slow-motion videos highlight more distinct details, aiding the model in making accurate judgments.", + "bbox": [ + 89, + 763, + 483, + 901 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/64fc26c33c1d9ab6da9e7af66481e5d25957d2ee4812fc419ceac98a3dd71b5c.jpg", + "image_caption": [ + "Figure 4. Accuracy across different video speeds. All models are more sensitive to slow-speed videos and struggle to understand \"normal speed\" and \"no speed\", except for VideoLLaMA 2.1." 
+ ], + "image_footnote": [], + "bbox": [ + 545, + 396, + 875, + 583 + ], + "page_idx": 3 + }, + { + "type": "table", + "img_path": "images/d1cdef868477757ac87ddd2dcf9068ab8d5ac5713f613471b1f47720544113eb.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table><tr><td rowspan=\"2\">Video-LLMs</td><td colspan=\"2\">Action</td><td colspan=\"4\">Frame Number</td></tr>
<tr><td>w/o VS</td><td>w/ VS</td><td>Avg.</td><td>3</td><td>4</td><td>5</td></tr>
<tr><td>LLaVA-NeXT-Video</td><td>37.31</td><td>35.04</td><td>19.37</td><td>20.33</td><td>19.77</td><td>17.98</td></tr>
<tr><td>MiniCPM-V 2.6</td><td>43.37</td><td>40.15</td><td>90.32</td><td>93.82</td><td>90.66</td><td>86.44</td></tr>
<tr><td>Video-LLaMA 2.1</td><td>63.26</td><td>53.98</td><td>30.17</td><td>42.86</td><td>39.89</td><td>7.45</td></tr>
<tr><td>Qwen2-VL</td><td>68.18</td><td>56.62</td><td>96.65</td><td>97.25</td><td>96.63</td><td>96.05</td></tr>
<tr><td>ShareGPT4Video</td><td>46.90</td><td>30.84</td><td>26.33</td><td>60.99</td><td>16.78</td><td>0.00</td></tr>
<tr><td>Video-CCAM</td><td>73.10</td><td>60.23</td><td>23.45</td><td>14.18</td><td>8.96</td><td>47.61</td></tr></table>
", + "bbox": [ + 514, + 646, + 916, + 801 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Table 2. Left: Accuracy of the Action task with or without \"Visual Synonyms\". It is obvious that the \"Visual Synonyms\" have significantly impacted the model's judgment. Right: Accuracy of the counting task across different frame counts. Except for Video-CCAM, all other models exhibited a decline in performance as the number of frames increased.", + "bbox": [ + 511, + 811, + 906, + 893 + ], + "page_idx": 3 + }, + { + "type": "table", + "img_path": "images/73ded6200c277082dc2f10323cd0e1a1f5fb0713d2236a09f24da8bb6447951b.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table><tr><td rowspan=\"2\">Video-LLMs</td><td rowspan=\"2\">Params.</td><td colspan=\"3\">Scene-Level</td><td colspan=\"5\">Fragment-Level</td><td rowspan=\"2\">S-Avg.</td><td rowspan=\"2\">FG-Avg.</td><td rowspan=\"2\">A-Avg.</td></tr>
<tr><td>Action</td><td>Effect</td><td>Speed</td><td>FCnt</td><td>MoO</td><td>FCmp</td><td>AoN</td><td>Rearr</td></tr>
<tr><td>(Random)</td><td>-</td><td>25.00</td><td>25.00</td><td>25.00</td><td>25.00</td><td>25.00</td><td>33.33</td><td>33.33</td><td>25.00</td><td>25.00</td><td>28.33</td><td>27.08</td></tr>
<tr><td>LLaVA-NeXT-Video</td><td>7B</td><td>37.31</td><td>42.67</td><td>22.35</td><td>19.37</td><td>24.02</td><td>53.75</td><td>75.45</td><td>20.67</td><td>34.11</td><td>38.65</td><td>36.95</td></tr>
<tr><td>MiniCPM-V 2.6</td><td>8B</td><td>43.37</td><td>52.56</td><td>19.13</td><td>90.32</td><td>56.42</td><td>75.66</td><td>76.49</td><td>18.09</td><td>38.35</td><td>63.40</td><td>54.01</td></tr>
<tr><td>Video-LLaMA 2.1</td><td>7B</td><td>63.26</td><td>50.92</td><td>19.89</td><td>30.17</td><td>42.27</td><td>76.01</td><td>89.92</td><td>26.87</td><td>44.69</td><td>53.05</td><td>49.91</td></tr>
<tr><td>Qwen2-VL</td><td>7B</td><td>68.18</td><td>57.14</td><td>24.62</td><td>96.65</td><td>33.33</td><td>74.53</td><td>90.70</td><td>22.48</td><td>49.98</td><td>63.54</td><td>58.45</td></tr>
<tr><td>ShareGPT4Video</td><td>8B</td><td>46.90</td><td>43.88</td><td>31.76</td><td>26.33</td><td>61.05</td><td>88.44</td><td>84.80</td><td>23.36</td><td>40.85</td><td>57.11</td><td>50.82</td></tr>
<tr><td>Video-CCAM</td><td>9B</td><td>73.10</td><td>55.90</td><td>31.65</td><td>23.45</td><td>45.66</td><td>64.95</td><td>90.27</td><td>22.72</td><td>53.55</td><td>48.47</td><td>50.96</td></tr></table>
", + "bbox": [ + 106, + 88, + 890, + 303 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Table 3. The overall performances of notable Video-LLMs on FineVidBench. FCnt: Frame Count. MoO: Meaning of Order. Fcmp: Frame Comparison. AoN: Adjust or Not. Rearr: Rearrangement. S-Avg.: the average performance of scene-level tasks; FG-Avg.: the average performance of fragment-level tasks. A-Avg.: the average performance of all tasks.", + "bbox": [ + 89, + 313, + 906, + 357 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "- Fragment-level Results and Analysis", + "text_level": 1, + "bbox": [ + 89, + 382, + 364, + 398 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "(1) Frame-count accuracy varied significantly across models, with the lower-performing models likely lacking targeted training. The trend shown in the right side of Table 2, where accuracy decreases as frame count increases, highlights the models' insufficient temporal reasoning on longer sequences. (2) ShareGPT4Video and MiniCPM-V 2.6 showed better comprehension in the Meaning-of-Order task, while other models lagged, suggesting a lack of explicit focus on \"order\". (3) Most models excelled in frame comparison due to image-text alignment training. ShareGPT4Video achieved the best performance, owing to its Differential Sliding-Window Captioning (DiffSW) strategy, which emphasizes capturing the changes between frames when generating video descriptions. This also improved its Meaning-of-Order performance. (4) In the sorting task, models generally succeeded in the \"Adjust or Not\" response but performed poorly in the more complex \"Rearrangement\" task, indicating they can detect, but not correct, sequence errors.", + "bbox": [ + 88, + 402, + 485, + 691 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4. 
Self-supervised Fragment Finetuning", + "text_level": 1, + "bbox": [ + 89, + 707, + 428, + 726 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The above benchmark results show that existing Video-LLMs generally fail to tackle fine-grained video understanding tasks. Videos often contain subtle, complex changes that natural language alone fails to fully capture. The core component of Video-LLMs, the LLM, is a generalized pattern recognizer that offers a promising solution. LLMs have the potential to detect and interpret intricate spatiotemporal dynamics that were previously difficult to represent. Given that these changes cannot be directly annotated, self-supervised learning naturally becomes the solution, bypassing the bottleneck of manual annotation and significantly re", + "bbox": [ + 89, + 734, + 485, + 902 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "ducing labeling costs. Given these factors, we propose $\\mathrm{SF^2T}$ to fine-tune Video-LLMs. We do not expect $\\mathrm{SF^2T}$ to replace supervised fine-tuning; rather, it is an effortless complement to SFT. $\\mathrm{SF^2T}$ and SFT primarily differ in data construction and content focus level, with each method aligned with distinct training objectives, as shown in Figure 5.", + "bbox": [ + 511, + 382, + 908, + 491 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.1. SFT Tasks", + "text_level": 1, + "bbox": [ + 511, + 498, + 632, + 513 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We first review the common SFT tasks to set a baseline for comparing our $\\mathrm{SF^2T}$.", + "bbox": [ + 511, + 520, + 906, + 550 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "General QA on Video Content This method focuses on understanding the main events and context of a video by directly asking questions about its content. 
While effective for grasping the video's key moments, it lacks finer spatiotemporal details and requires significant human effort to create standardized but constrained answers.", + "bbox": [ + 511, + 551, + 906, + 641 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Frame Description Integration This method typically samples video frames evenly, generates detailed descriptions for each, and integrates them into a cohesive but lengthy summary. While it enhances the model's understanding of continuity and micro-dynamics, it often proves incapable of capturing complex or subtle details that are beyond natural language's scope. Moreover, although frame descriptions can be generated using powerful multimodal LLMs like GPT-4o, significant human effort is still required to review the quality of the generated responses.", + "bbox": [ + 511, + 642, + 908, + 794 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.2. Fragment-level Tasks of $\\mathbf{SF}^2\\mathbf{T}$", + "text_level": 1, + "bbox": [ + 511, + 801, + 779, + 819 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "SFT tasks require manual annotations, and even automated annotation is labor-intensive and error-prone. To address this, we introduce $\\mathrm{SF}^2\\mathrm{T}$, which generates accurate fragment-level labels automatically. 
$\\mathrm{SF}^2\\mathrm{T}$ comprises five tasks—Counting, Consistency Verification, Localization, Disorder Detection", + "bbox": [ + 511, + 825, + 908, + 902 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/19248d16a197d1b96fda68b99bbd8e7350e03c4bad689ecb47f3df3e3a40504b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 99, + 94, + 477, + 148 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "What is the main content of the video?", + "bbox": [ + 122, + 161, + 383, + 175 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The video shows a person bowling, including their four-step approach, the smooth release of the ball down the lane, its path toward the pins, and...", + "bbox": [ + 147, + 180, + 423, + 212 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/fac7fe9dc31112f7b5a655a9f855c7785110c3e012266174e150ffa294dd3dfb.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 99, + 231, + 475, + 295 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "What is the main content of the video?", + "bbox": [ + 122, + 306, + 385, + 319 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The video shows a person bowling: (Frame 1) The scene shows a bowling alley... (Frame 2) The player swings the bowling ball... (Frame 4) The bowling ball approaches the pins... (Frame 6) The bowling ball strikes the pins... 
(Frame 8) All the pins are down.", + "bbox": [ + 150, + 324, + 424, + 377 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/e51df2bd69d0ad08c8ab3c601cc8ecf9fc84a3cdd0f5275bcb937207f223a3b2.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 429, + 319, + 447, + 335 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/b06227fb2753f649b91d143258466e2abef9e9f856bc50b35884caefb010def2.jpg", + "image_caption": [ + "Scene-Level Tasks" + ], + "image_footnote": [], + "bbox": [ + 127, + 419, + 444, + 492 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/467e71fa5605af2d18a37b665be66f536c52432df6cb57a8237fb077a9b6d1d8.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 122, + 506, + 143, + 522 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "How many frames?", + "bbox": [ + 151, + 510, + 264, + 523 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "On which frames?", + "bbox": [ + 151, + 530, + 263, + 541 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Same frames?", + "bbox": [ + 151, + 549, + 241, + 560 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Adjust or not?", + "bbox": [ + 151, + 580, + 236, + 592 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Rearrange it.", + "bbox": [ + 151, + 599, + 230, + 609 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/28b766dc0d29f6273e26a9aed75c3de5a772a124ae3c9ab068cf1b8c96d348cf.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 285, + 507, + 370, + 523 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/b3a2c4ac17a94b2b36d9372be305b746a5effe392fb7d2a064b2cde37a70c8cf.jpg", + "image_caption": [ + "Figure 5. Comparison between $\\mathrm{SF}^2\\mathrm{T}$ and SFT. 
SFT depends on manual and model-driven design to generate QA pairs for scene-level video understanding; $\\mathrm{SF}^2\\mathrm{T}$, in contrast, automatically constructs training data based on pre-defined rules that cover various temporal and spatial aspects of the video. $\\mathrm{SF}^2\\mathrm{T}$ enables the model to focus on fine-grained content analysis, offering insights that supervised labels cannot achieve." + ], + "image_footnote": [], + "bbox": [ + 287, + 529, + 366, + 541 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/5e842eed2a0e603ad9c20ae9db677b6cf056a6564fb67f982bd3e1a5900ebe1c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 289, + 547, + 366, + 560 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/0a58625ba85cf5cfb4fb31336be2120b7989cbc33f398f387001052658cf027f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 287, + 568, + 366, + 595 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/b27d6f514bd85f5074b842d69558d79d7428a8753afed274de875ac74caa6f02.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 289, + 601, + 366, + 618 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/429857c0df66bdaab0c9ae373e86e7fa148de34892ef91f2fc5df09ad7c95d16.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 410, + 508, + 447, + 523 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "2nd", + "bbox": [ + 395, + 530, + 419, + 541 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "No", + "bbox": [ + 400, + 550, + 419, + 560 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Yes", + "bbox": [ + 388, + 580, + 419, + 590 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3412", + "bbox": [ + 387, + 599, + 419, + 609 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Fragment-Level Tasks", + "text_level": 1, + "bbox": [ + 202, + 638, + 372, + 652 + ], + "page_idx": 5 + }, + { + 
"type": "text", + "text": "and Rearrangement—designed to train the model to rearrange a set of out-of-order frames into their original sequence. This is a robust indicator of a model's mastery over the visual dynamics of an action, requiring the model to detect subtle frame changes and understand the overall coherence and temporal trends. Mastery of these tasks enables the model to recognize frames and their temporal", + "bbox": [ + 89, + 794, + 482, + 900 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "relationships, enhancing its ability to predict and reconstruct action sequences and improving performance on more complex video tasks. Our method first extracts multiple sets of dynamic keyframes from each video. These fragments capture the key dynamic information from multiple temporal perspectives, offering a more efficient representation of redundant video data. It then applies pseudo-labeling, distinguishing it from traditional video-level labeling. By designing proxy tasks that leverage intrinsic information rather than predefined prior knowledge, it smartly circumvents the annotation bottleneck, enabling a deeper temporal understanding and offering insights that traditional video-level labeling cannot achieve.", + "bbox": [ + 511, + 90, + 903, + 286 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Counting We input N frames into the Video-LLM and ask it to count them. Although this task seems straightforward, it proves challenging for current Video-LLMs, particularly as the number of frames increases, revealing a decline in accuracy. The model's inability to perform basic quantitative tasks points to broader limitations in understanding the overall sequence integrity.", + "bbox": [ + 511, + 289, + 903, + 393 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Consistency Verification Video-LLMs are tasked with determining whether two frames sampled from the same video are identical, since they may show subtle differences. 
This task sharpens the model's sensitivity to visual details by encouraging a thorough analysis and comparison of the images, countering its tendency to focus on primary subjects while neglecting the background and other subtle features.", + "bbox": [ + 511, + 396, + 903, + 501 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Localization Video-LLMs must accurately locate a specified target (from video metadata) within a sequence of frames, identifying the frames in which it appears, disappears, or persists. This natural human ability is a significant challenge for these models, as they often struggle to perceive sequential relationships between frames and face additional obstacles, such as occlusion, interference from similar objects, lighting variations, and memory limitations.", + "bbox": [ + 511, + 503, + 905, + 625 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Disorder Detection and Rearrangement Video-LLMs must determine whether and how to adjust the order of a given frame sequence. When frames are randomized, the loss of spatiotemporal coherence and logical continuity makes it exceptionally challenging to reconstruct their original sequence, especially as interactions within frames become more complex [20]. This task is evaluated in two ways: the yes/no task tests the model's sensitivity to temporal consistency, while the sorting task, which leverages capabilities from the other four tasks, requires advanced reasoning and adjustments.", + "bbox": [ + 511, + 627, + 905, + 792 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5. 
Experiments", + "text_level": 1, + "bbox": [ + 511, + 813, + 643, + 829 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In this section, we fine-tuned four of the most advanced open-source Video-LLMs using the $\\mathrm{SF}^2\\mathrm{T}$ method to evaluate its effectiveness, alongside ablation studies and interpretability analyses to explore the underlying mechanisms.", + "bbox": [ + 511, + 839, + 903, + 900 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/217b75d5da1feb710205a3ea17f34a12c93a21948b855474060681bc48f62589.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table><tr><td rowspan=\"2\">Methods</td><td colspan=\"3\">LLaVA-NeXT-Video</td><td colspan=\"3\">MiniCPM-V 2.6</td><td colspan=\"3\">VideoLLaMA 2.1</td><td colspan=\"3\">Qwen2-VL</td></tr>
<tr><td>Action</td><td>Effect</td><td>Speed</td><td>Action</td><td>Effect</td><td>Speed</td><td>Action</td><td>Effect</td><td>Speed</td><td>Action</td><td>Effect</td><td>Speed</td></tr>
<tr><td>Base</td><td>37.31</td><td>42.67</td><td>22.35</td><td>43.37</td><td>52.56</td><td>19.13</td><td>63.26</td><td>50.92</td><td>19.89</td><td>68.18</td><td>57.14</td><td>24.62</td></tr>
<tr><td>Base+SF2T</td><td>48.67</td><td>43.77</td><td>24.83</td><td>65.91</td><td>60.62</td><td>28.60</td><td>67.42</td><td>57.33</td><td>31.63</td><td>73.86</td><td>63.37</td><td>31.92</td></tr>
<tr><td>Base(SFT)</td><td>62.69</td><td>44.63</td><td>22.35</td><td>77.65</td><td>75.09</td><td>70.83</td><td>77.65</td><td>65.94</td><td>29.73</td><td>78.60</td><td>66.30</td><td>30.87</td></tr>
<tr><td>Base(SFT)+SF2T</td><td>63.07</td><td>45.24</td><td>32.01</td><td>81.63</td><td>76.92</td><td>86.74</td><td>79.73</td><td>68.68</td><td>31.82</td><td>81.25</td><td>73.26</td><td>32.38</td></tr></table>
", + "bbox": [ + 133, + 88, + 867, + 239 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/887a2fb3f507996fa3a3946a0c447d7b2e1125f8a8a144abe6869e78185ea560.jpg", + "table_caption": [ + "Table 4. Performance on FineVidBench. We tested on two baselines: (1) Base: Results without any fine-tuning. (2) Base(SFT): Results after fine-tuning in supervised way. After $\\mathrm{SF}^2\\mathrm{T}$ , all models improved in all three tasks, highlighting its broad effectiveness and the value of fragment-level tasks in enhancing scene-level comprehension. Notably, $\\mathrm{SF}^2\\mathrm{T}$ outperformed SFT in the Speed task (except MiniCPM-V 2.6), highlighting the key role of fine-grained temporal understanding in distinguishing video speeds." + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Methods</td><td>LLaVA-NeXT-Video</td><td>MiniCPM-V 2.6</td><td>VideoLLaMA 2.1</td><td>Qwen2-VL</td></tr>
<tr><td colspan=\"5\">MVBench</td></tr>
<tr><td>Base</td><td>36.84</td><td>40.23</td><td>54.18</td><td>55.97</td></tr>
<tr><td>Base+SF2T</td><td>42.92</td><td>56.02</td><td>57.97</td><td>63.76</td></tr>
<tr><td colspan=\"5\">Video-MME (no subtitle)</td></tr>
<tr><td>Base</td><td>29.76</td><td>43.17</td><td>49.02</td><td>43.77</td></tr>
<tr><td>Base+SF2T</td><td>34.84</td><td>53.19</td><td>51.88</td><td>53.60</td></tr>
<tr><td colspan=\"5\">MLVU</td></tr>
<tr><td>Base</td><td>36.32</td><td>41.58</td><td>52.32</td><td>42.81</td></tr>
<tr><td>Base+SF2T</td><td>41.91</td><td>55.32</td><td>56.11</td><td>54.67</td></tr></table>
", + "bbox": [ + 102, + 318, + 473, + 506 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/d8920c8e1b5e723a0872e8b610991792488d583beccb64d01b2c1a9bfb280fac.jpg", + "table_caption": [ + "Table 5. Performance on public benchmarks. $\\mathrm{SF}^2\\mathrm{T}$ consistently enhances performance across all three benchmarks, reaffirming its effectiveness as a spatiotemporal enhancer." + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Methods</td><td>random</td><td>uniform</td><td>keyframe</td><td>motion-salient</td></tr>
<tr><td>SF2T</td><td>70.31</td><td>71.67</td><td>72.11</td><td>73.86</td></tr></table>
", + "bbox": [ + 540, + 323, + 877, + 375 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/6aaa13d9fd9f9561511b88091ada04ad2bb2f209dcbe5e7e1c29745f1c9b4178.jpg", + "table_caption": [ + "Table 6. Impact of sampling. As shown, motion-salient area sampling outperforms others by better capturing motion fluidity and temporal details, while the other methods fail to fully utilize their potential, leading to suboptimal performance." + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Methods</td><td>long</td><td>short</td><td>random</td></tr>
<tr><td>SF2T</td><td>69.38</td><td>71.40</td><td>73.86</td></tr></table>
", + "bbox": [ + 604, + 449, + 815, + 501 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 7. Impact of temporal span. Both long- and short-range temporal modeling reduced $\\mathrm{SF}^2\\mathrm{T}$ 's performance, emphasizing the importance of multi-scale temporal modeling.", + "bbox": [ + 509, + 512, + 906, + 555 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.1. Implementation Details", + "text_level": 1, + "bbox": [ + 89, + 584, + 307, + 599 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "To ensure fairness, experiments were conducted on LoRA-compatible models, including LLaVA-NeXT-Video[9], MiniCPM-V 2.6[34], VideoLLaMA 2.1[4] and Qwen2-VL[28], using their default or recommended settings, with all models trained for one epoch. All experiments were performed under identical hardware conditions, utilizing NVIDIA A100 40GB GPU for computation. It should be emphasized that our goal is to validate the effectiveness of $\\mathrm{SF}^2\\mathrm{T}$ , not to optimize models for maximum performance.", + "bbox": [ + 89, + 609, + 483, + 744 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We randomly sampled videos from SSv2 and MiT for training, ensuring no overlap with the FineVidBench dataset. MGSampler [37] was used to extract N sets of M-frame sequences from each video, capturing dynamic changes while preserving overall characteristics. M is chosen based on the video's characteristics to capture content flow, while N is determined by content complexity, with more complex content requiring a larger N to cover more temporal perspectives. In this study, we set $\\mathrm{N} = 3$ and M between 3 and 5, though these values may vary for other", + "bbox": [ + 89, + 750, + 483, + 901 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "datasets. We then generated QA pairs for each frame sequence based on the five tasks defined in $\\mathrm{SF}^2\\mathrm{T}$ for training. 
Evaluations were performed on FineVidBench's scene-level tasks, including Action, Effect, and Speed. To compare with traditional SFT, we also generated and manually reviewed QA pairs for these videos in a supervised setting.", + "bbox": [ + 511, + 584, + 906, + 676 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.2. Comparisons", + "text_level": 1, + "bbox": [ + 511, + 694, + 653, + 710 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 4 summarizes the results of the scene-level tasks. After $\\mathrm{SF}^2\\mathrm{T}$ training, all models showed significant improvement, emphasizing that fragment-level tasks can notably enhance scene-level comprehension. Integrating $\\mathrm{SF}^2\\mathrm{T}$ with SFT also leads to performance gains, demonstrating that fragment-level training positively impacts SFT and enhances its effectiveness. Surprisingly, in the Speed task, many base models outperformed SFT after applying $\\mathrm{SF}^2\\mathrm{T}$ , highlighting the importance of fine-grained temporal understanding in distinguishing video speeds. This improvement likely stems from $\\mathrm{SF}^2\\mathrm{T}$ 's ability to enhance the model's sensitivity to temporal cues, such as the loss or enhancement of", + "bbox": [ + 511, + 719, + 908, + 902 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "information during acceleration or deceleration, as well as content coherence—all crucial for speed judgment. As expected, $\\mathrm{SF}^2\\mathrm{T}$ currently lags behind SFT, since its training objective is not fully aligned with scene-level tasks. 
However, we do not expect $\\mathrm{SF}^2\\mathrm{T}$ to replace supervised finetuning; rather, our experiments suggest that it can serve as an effortless and effective complement to SFT.", + "bbox": [ + 89, + 90, + 480, + 196 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In addition to FineVidBench, we evaluated $\\mathrm{SF}^2\\mathrm{T}$ on three public video understanding benchmarks (Table 5). The results demonstrate consistent improvements across various video tasks, validating $\\mathrm{SF}^2\\mathrm{T}$ as an effective spatiotemporal enhancer for a wide range of video understanding tasks. All models were tested with an 8-frame input.", + "bbox": [ + 89, + 198, + 480, + 289 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5.3. Ablation and Interpretability Analyses", + "text_level": 1, + "bbox": [ + 89, + 301, + 421, + 316 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We evaluated the impact of frame sampling strategies on $\\mathrm{SF}^2\\mathrm{T}$ , as each method provides a unique \"temporal information perspective\" that influences video understanding performance. As shown in Table 6, we assessed four strategies on Qwen2-VL in the Action task: random, uniform interval, keyframe, and motion-salient area sampling [37]. Motion-salient area sampling performed best, likely due to its ability to capture continuous motion dynamics, thereby enhancing the model's understanding of action fluidity and temporal detail. In comparison, the other methods had limitations: keyframe sampling misses intermediate action phases, fixed-interval sampling may overlook critical moments, and random sampling lacks temporal consistency. Notably, different datasets may favor different strategies. 
For example, some datasets may perform better with uniform interval sampling, or their motion features may align better with the model's specific capabilities.", + "bbox": [ + 89, + 324, + 482, + 580 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We examined the effects of long- and short-range temporal modeling on $\\mathrm{SF}^2\\mathrm{T}$ . In the Consistency Verification task, we constrained the random selection of frame pairs to adjacent frames for local continuity or non-adjacent frames to capture long-range dependencies. As shown in Table 7, both settings decreased $\\mathrm{SF}^2\\mathrm{T}$ 's performance on the Action task of Qwen2-VL, indicating that an overemphasis on either long- or short-range information leads to temporal imbalance and incomplete dynamics. This underscores the importance of combining both approaches to leverage their broader temporal span and frame variations for a more comprehensive feature representation.", + "bbox": [ + 89, + 582, + 482, + 763 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We analyzed the attention map of Qwen2-VL on the Action task, particularly in cases where the model's predictions were corrected after $\\mathrm{SF}^2\\mathrm{T}$ . As shown in Figure 6, we found that $\\mathrm{SF}^2\\mathrm{T}$ enhances the model's ability to capture fine-grained spatial changes and temporal dynamics. (1) Spatial Aspects. After $\\mathrm{SF}^2\\mathrm{T}$ , the model shows increased attention to action execution areas, particularly the hands and objects they interact with. It shows better sensitivity to small targets, likely due to the Consistency Verification", + "bbox": [ + 89, + 763, + 482, + 900 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/3d46cc1d8054d539ff26f9bae25b46b21842a0dd60dcb241845cd705e00f23fd.jpg", + "image_caption": [ + "Figure 6. Two exemplary visualizations of the attention map on Qwen2-VL. 
For each example: top - Original frames; middle - Base (SFT); bottom - $\\mathrm{SF^2T}$ applied. As shown by the red boxes, after applying $\\mathrm{SF^2T}$ , the model better focuses on action execution areas and interacting objects. The $\\mathrm{SF^2T}$ fine-tuned model has the ability to predict the direction of motion, as seen in the trajectories of the red bottle and Cheerios." + ], + "image_footnote": [], + "bbox": [ + 535, + 88, + 883, + 450 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "task, which enhances spatial perception by refining sensitivity to subtle image differences. (2) Temporal Aspects. After $\\mathrm{SF}^2\\mathrm{T}$ , we observed that the model can predict object movement trajectories in certain actions, indicating an advanced level of temporal understanding. This ability likely stems from the sorting task, which strengthens the model's comprehension of action flows and movement patterns.", + "bbox": [ + 511, + 595, + 906, + 703 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6. Conclusion", + "text_level": 1, + "bbox": [ + 513, + 733, + 633, + 750 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this work, we propose $\\mathrm{SF}^2\\mathrm{T}$ to overcome the limitations of Video-LLMs in fine-grained video understanding. $\\mathrm{SF}^2\\mathrm{T}$ is an innovative fine-tuning method that eliminates the need for labor-intensive annotations and effectively bypasses the constraints of natural language descriptions. Additionally, we introduce FineVidBench, a benchmark for evaluating Video-LLMs at both scene and fragment levels. 
In the future, we plan to expand our dataset with larger videos and more tasks to increase its impact.", + "bbox": [ + 511, + 763, + 906, + 900 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgments", + "text_level": 1, + "bbox": [ + 91, + 90, + 250, + 107 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "This work is supported by the National Key Research and Development Program of China (No.2020YBF2901202), National Natural Science Foundation of China (NSFC No. 62272184 and No. 62402189), the China Postdoctoral Science Foundation under Grant Number GZC20230894, the China Postdoctoral Science Foundation (Certificate Number: 2024M751012), and the Postdoctor Project of Hubei Province under Grant Number 2024HBBHCXB014, and the \"Pioneer\" and \"Leading Goose\" R&D Program of Zhejiang (No. 2024C01161). The computation is completed in the HPC Platform of Huazhong University of Science and Technology.", + "bbox": [ + 89, + 114, + 485, + 297 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 310, + 187, + 325 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] FirstName Alpher. Frobnication. IEEE TPAMI, 12(1):234-778, 2002. 2", + "[2] Lin Chen, Xilin Wei, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Bin Lin, Zhenyu Tang, et al. Sharegpt4video: Improving video understanding and generation with better captions. arXiv preprint arXiv:2406.04325, 2024. 2, 3", + "[3] Xiuyuan Chen, Yuan Lin, Yuchen Zhang, and Weiran Huang. Autoeval-video: An automatic benchmark for assessing large vision language models in open-ended video question answering. arXiv preprint arXiv:2311.14906, 2023. 2", + "[4] Zesen Cheng, Sicong Leng, Hang Zhang, Yifei Xin, Xin Li, Guanzheng Chen, Yongxin Zhu, Wenqi Zhang, Ziyang Luo, Deli Zhao, et al. Videollama 2: Advancing spatial-temporal modeling and audio understanding in video-llms. 
arXiv preprint arXiv:2406.07476, 2024. 1, 2, 3, 7", + "[5] Chaoyou Fu, Yuhan Dai, Yondong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, et al. Video-mme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis. arXiv preprint arXiv:2405.21075, 2024. 2", + "[6] Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna Materzynska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fruend, Peter Yianilos, Moritz Mueller-Freitag, et al. The “something something” video database for learning and evaluating visual common sense. In Proceedings of the IEEE international conference on computer vision, pages 5842-5850, 2017. 2, 3", + "[7] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18995-19012, 2022. 3", + "[8] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. 2", + "[9] Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, and Chunyuan Li. Llava-next-interleave:" + ], + "bbox": [ + 99, + 335, + 483, + 900 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Tackling multi-image, video, and 3d in large multimodal models. arXiv preprint arXiv:2407.07895, 2024. 3, 7", + "[10] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pages 19730-19742. PMLR, 2023. 2", + "[11] KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and Yu Qiao. 
Videochat: Chat-centric video understanding. arXiv preprint arXiv:2305.06355, 2023. 1, 2", + "[12] Kunchang Li, Yali Wang, Yinan He, Yizhuo Li, Yi Wang, Yi Liu, Zun Wang, Jilan Xu, Guo Chen, Ping Luo, et al. Mvbench: A comprehensive multi-modal video understanding benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22195-22206, 2024. 2", + "[13] Yanwei Li, Chengyao Wang, and Jiaya Jia. Llama-vid: An image is worth 2 tokens in large language models. arXiv preprint arXiv:2311.17043, 2023. 2", + "[14] Yanwei Li, Chengyao Wang, and Jiaya Jia. Llama-vid: An image is worth 2 tokens in large language models. In European Conference on Computer Vision, pages 323–340. Springer, 2025. 1", + "[15] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36, 2024. 2", + "[16] Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, Lei Li, Sishuo Chen, Xu Sun, and Lu Hou. Tempcompass: Do video llms really understand videos? arXiv preprint arXiv:2403.00476, 2024. 2", + "[17] Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, and Zhaopeng Tu. Macaw-llm: Multi-modal language modeling with image, audio, video, and text integration. arXiv preprint arXiv:2306.09093, 2023. 2", + "[18] Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. Video-chatgpt: Towards detailed video understanding via large vision and language models. arXiv preprint arXiv:2306.05424, 2023. 2", + "[19] Ben Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, A Neelakantan, P Shyam, G Sastry, A Askell, S Agarwal, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 1, 2020. 1", + "[20] Ishan Misra, C Lawrence Zitnick, and Martial Hebert. Shuffle and learn: unsupervised learning using temporal order verification. 
In Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part I 14, pages 527-544. Springer, 2016. 6", + "[21] Mathew Monfort, Alex Andonian, Bolei Zhou, Kandan Ramakrishnan, Sarah Adel Bargal, Tom Yan, Lisa Brown, Quanfu Fan, Dan Gutfreund, Carl Vondrick, et al. Moments in time dataset: one million videos for event understanding. IEEE transactions on pattern analysis and machine intelligence, 42(2):502-508, 2019. 2, 3" + ], + "bbox": [ + 516, + 92, + 905, + 900 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[22] Long Qian, Juncheng Li, Yu Wu, Yaobo Ye, Hao Fei, TatSeng Chua, Yueting Zhuang, and Siliang Tang. Momentor: Advancing video large language model with fine-grained temporal reasoning. arXiv preprint arXiv:2402.11435, 2024. 2", + "[23] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140):1-67, 2020. 1", + "[24] Shuhuai Ren, Linli Yao, Shicheng Li, Xu Sun, and Lu Hou. Timechat: A time-sensitive multimodal large language model for long video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14313-14323, 2024. 2", + "[25] Fangxun Shu, Lei Zhang, Hao Jiang, and Cihang Xie. Audio-visual llm for video understanding. arXiv preprint arXiv:2312.06720, 2023. 2", + "[26] Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. Videobert: A joint model for video and language representation learning. In Proceedings of the IEEE/CVF international conference on computer vision, pages 7464-7473, 2019. 1", + "[27] TencentQQ Multimedia Research Team. Video-cam: Advancing video-language understanding with causal cross-attention masks. 
https://github.com/QQ-MM/Video-CCAM, 2024. 4", + "[28] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024. 1, 3, 7", + "[29] Yi Wang, Yinan He, Yizhuo Li, Kunchang Li, Jiashuo Yu, Xin Ma, Xinhao Li, Guo Chen, Xinyuan Chen, Yaohui Wang, et al. Internvid: A large-scale video-text dataset for multimodal understanding and generation. arXiv preprint arXiv:2307.06942, 2023. 2", + "[30] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837, 2022. 1", + "[31] Haiyang Xu, Qinghao Ye, Xuan Wu, Ming Yan, Yuan Miao, Jiabo Ye, Guohai Xu, Anwen Hu, Yaya Shi, Guangwei Xu, et al. Youku-mplug: A 10 million large-scale chinese video-language dataset for pre-training and benchmarks. arXiv preprint arXiv:2306.04362, 2023. 2", + "[32] Mingze Xu, Mingfei Gao, Zhe Gan, Hong-You Chen, Zhengfeng Lai, Haiming Gang, Kai Kang, and Afshin Dehghan. Slowfast-llava: A strong training-free baseline for video large language models. arXiv preprint arXiv:2407.15841, 2024. 2", + "[33] Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, and Cordelia Schmid. Vid2seq: Large-scale pretraining of a visual language model for dense video captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10714-10726, 2023. 2" + ], + "bbox": [ + 91, + 92, + 482, + 900 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[34] Yuan Yao, Tianyu Yu, Ao Zhang, Chongyi Wang, Junbo Cui, Hongji Zhu, Tianchi Cai, Haoyu Li, Weilin Zhao, Zhihui He, et al. Minicpm-v: A gpt-4v level mllm on your phone. 
arXiv preprint arXiv:2408.01800, 2024. 1, 3, 7", + "[35] Hang Zhang, Xin Li, and Lidong Bing. Video-llama: An instruction-tuned audio-visual language model for video understanding. arXiv preprint arXiv:2306.02858, 2023. 1, 2", + "[36] Zijia Zhao, Haoyu Lu, Yuqi Huo, Yifan Du, Tongtian Yue, Longteng Guo, Bingning Wang, Weipeng Chen, and Jing Liu. Needle in a video haystack: A scalable synthetic framework for benchmarking video mllms. arXiv preprint arXiv:2406.09367, 2024. 2", + "[37] Yuan Zhi, Zhan Tong, Limin Wang, and Gangshan Wu. Mgsampler: An explainable sampling strategy for video action recognition. In Proceedings of the IEEE/CVF International conference on Computer Vision, pages 1513-1522, 2021. 7, 8", + "[38] Junjie Zhou, Yan Shu, Bo Zhao, Boya Wu, Shitao Xiao, Xi Yang, Yongping Xiong, Bo Zhang, Tiejun Huang, and Zheng Liu. Mlvu: A comprehensive benchmark for multi-task long video understanding. arXiv preprint arXiv:2406.04264, 2024. 2", + "[39] Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019. 2" + ], + "bbox": [ + 516, + 92, + 903, + 458 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "$\\mathbf{SF^{2}T}$ : Self-supervised Fragment Finetuning of Video-LLMs for Fine-Grained Understanding", + "text_level": 1, + "bbox": [ + 109, + 85, + 885, + 132 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Supplementary Material", + "bbox": [ + 382, + 143, + 612, + 165 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "In this supplementary material, Section A presents $\\mathrm{SF^2T}$ 's performance on video caption tasks and additional exemplary visualizations of the attention map, while Section B provides more details about FineVidBench.", + "bbox": [ + 89, + 181, + 482, + 244 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "A. 
More Results and Cases", + "text_level": 1, + "bbox": [ + 89, + 255, + 320, + 272 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "In addition to FineVidBench and public video understanding benchmarks, we also evaluated the video caption task (Table 1) using GPT-4o mini, assessing fluency, relevance, informativeness, and correctness, with a maximum score of 40. The results show that incorporating $\\mathrm{SF^2T}$ improves performance, highlighting that fine-grained understanding also benefits video captioning. However, after fine-tuning, MiniCPM-V 2.6 produced shorter responses, leading to a decrease in its informativeness score.", + "bbox": [ + 89, + 281, + 482, + 417 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/e649ab7b72444c37363694726d639ac3bbdb25a6eedefd741ef6f75f8da50a71.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
MethodsLLaVA-NeXT -VideoMiniCPM-V 2.6VideoLLaMA 2.1Qwen2 -VL
Base33.2032.6122.5329.76
Base+SF2T33.2929.73 ↓30.9930.05
Base(SFT)27.6229.6027.1929.66
Base(SFT)+SF2T30.5031.3128.9431.04
", + "bbox": [ + 91, + 425, + 486, + 512 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Table 1. Performance on video caption task. The results show that incorporating $\\mathrm{SF^2T}$ yields higher scores (except MiniCPM-V 2.6), likely due to its enhanced temporal sensitivity and understanding.", + "bbox": [ + 89, + 523, + 482, + 566 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "As shown in Figure 1, we present more attention maps for Qwen2-VL on the Action task, focusing on cases where the model's predictions were corrected after applying $\\mathrm{SF}^2\\mathrm{T}$ .", + "bbox": [ + 89, + 584, + 482, + 630 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "B. Details of FinevidBench", + "text_level": 1, + "bbox": [ + 89, + 642, + 316, + 657 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "B.1. Question-Answer Templates", + "text_level": 1, + "bbox": [ + 89, + 667, + 346, + 683 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Table 2 delineates the question templates for each task. For the answers, Scene-level tasks include Action task, which are composed of the \"visual synonyms\" and other verbs; Effect task, which are scripted by researchers based on video content; and Speed task, which offer fixed options: fast, slow, normal, and no speed. Fragment-level tasks encompass Frame Count, with answers ranging from 2 to 6; Meaning of Order, using ordinal numbers as responses; Frame Comparison and Adjust or Not, with responses of Yes, No, and Not sure; and Rearrangement, where the answer is a permutation of N numbers, with N representing the number of input frames. The Question-Answer database is generated through a process of template creation followed by iterative refinement using GPT-4. For Action and Effect tasks,", + "bbox": [ + 89, + 688, + 482, + 900 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "each original video is queried three times using different question formulations. 
For Speed tasks, one query is conducted for both the original and the speed-altered versions of the video. For Fragment-Level tasks, all five questions are posed for each unique frame count.", + "bbox": [ + 511, + 181, + 906, + 258 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "B.2. Detailed Results", + "text_level": 1, + "bbox": [ + 511, + 266, + 678, + 282 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "- Scene Level", + "text_level": 1, + "bbox": [ + 511, + 289, + 612, + 303 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Table 3 illustrates the types of action effects and examples in the Effect tasks. For the affected objects, common physical attributes and quantities of objects are considered; notably, the positional relationship, spatial distance, and similarity between two objects are examined. Regarding action attributes, the intensity and completeness of the action are evaluated. Special actions include slight movement, multiple-object movements where several affected objects undergo motion, and compound movements involving two or more atomic actions linked in time. Additionally, camera movements and the inclination of the surface on which objects move are assessed. Table 4 presents the results categorized under the Effect classification. Overall, models performed well in Physical Attributes and Action Intensity, likely due to the ability to infer such information by comparing images before and after the action occurs. However, models exhibited subpar performance in Action Completion and Camera Motion. The former suggests a lack of understanding regarding the distinction between completed and incomplete actions in terms of their effects, while the latter is attributable to the inherent variability and complexity of camera movements. 
For other tasks, the majority of models exhibited moderate performance.", + "bbox": [ + 511, + 306, + 906, + 655 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "- Fragment Level", + "text_level": 1, + "bbox": [ + 511, + 662, + 640, + 678 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Table 5 presents the results for all tasks in the fragment level under varying input frame counts. From the results, we can observe that except for Video-CCAM, the models' ability to count frames significantly declines as the frame count increases. Regarding the understanding of order concepts, most models show a clear upward trend, except for ShareGPT4Video. Models generally perform well on the frame comparison task, likely due to extensive training with image-text pairs. Since the input consistently involves two frames, the results show no significant variation, as expected. For Rearrangement, all results hover around random values, suggesting that while models recognize incorrect sequence orders, they cannot correct them, indicating a failure to grasp the dynamic processes of videos truly.", + "bbox": [ + 511, + 681, + 906, + 893 + ], + "page_idx": 10 + }, + { + "type": "image", + "img_path": "images/23cdbc4c335792960b7d2d8a1e4e2928f978a8c323016f3aa1f2b2984b02bfc5.jpg", + "image_caption": [ + "Figure 1. Four exemplary visualizations of the attention map on Qwen2-VL. For each example: top - Original frames; middle - Base (SFT); bottom - $\\mathrm{SF^2T}$ applied. As highlighted by the red boxes, applying $\\mathrm{SF^2T}$ enables the model to better focus on action execution areas and interacting objects, while also predicting the direction of motion." + ], + "image_footnote": [], + "bbox": [ + 133, + 90, + 862, + 440 + ], + "page_idx": 11 + }, + { + "type": "table", + "img_path": "images/cd470606075ce8039139134a6a30f3dfda262ecce420c30962c766eb0017936c.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
TasksQuestion
Scene LevelActionWhich activity can be seen in the video?
EffectAfter the action takes place, what changes occur to the object?
During the process of the action, what changes occur to the object?
After the action takes place, what changes occur in the field of vision?
SpeedWhat is the rate of movement in the video?
Fragment LevelFrame CountCould you please tell me how many frames I have inputted?
Meaning of OrderIn the sequence of frames provided, on which frame does the object first appear?
In the sequence of frames provided, on which frame does the object last appear?
In the sequence of frames provided, in which frames does the object exist?
Frame ComparisonAre the two frames I provided exactly the same?
Adjust or NotThese frames are all from the same video and capture the dynamic process of an action. The order of these frames may have been mixed up. Do we need to rearrange them to match the normal execution sequence of the action?
RearrangementThese frames are all from the same video and depict the dynamic process of an action. The order of these frames may have been mixed up. Based on the connections between the image frames, which of the following options represents the most appropriate sequence?
", + "bbox": [ + 91, + 505, + 916, + 857 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Table 2. Question templates authored by researchers undergo revision by GPT-4o, which rephrases them to maintain the original intent while introducing varied sentence structures and vocabulary.", + "bbox": [ + 89, + 867, + 906, + 895 + ], + "page_idx": 11 + }, + { + "type": "table", + "img_path": "images/0319bb62a6c7c45a00954b86bf7d0f3bcf0e06eb20112b16245d801ed8821d52.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Effect TypeExamples
Object PropertiesPhysical PropertiesWhat modifications occur to the wafer stick as a result of the action? \nA. Not sure B. Nothing happened C. It broke D. It deformed
QuantityOnce the action occurs, what changes are made to the mugs? \nA. There are about 5 or 6 mugs here B. There are about 1 or 2 mugs here \nC. There are about 3 or 4 mugs here D. Not sure
Object RelationshipsPositionWhat adjustments take place in the egg following the action? \nA. An object appeared on top of it B. An object appeared in front of it \nC. An object appeared inside it D. An object appeared behind it
DistanceWhat changes happen to the chili and the cucumber after the action is performed? \nA. They grew more distant B. It's unclear \nC. They came nearer D. Their separation remained consistent
SimilarityWhat adjustments take place in the box following the action? \nA. One thing appeared above it \nB. Several things appeared above it, and they looked different from each other \nC. Not sure \nD. Several things appeared above it, and they looked similar to each other
Action PropertiesIntensityWhat alterations are observed in the paper cups after the action is taken? \nA. Not sure B. It collapsed C. It broke D. It remained standing
CompletionAfter the action is done, what modifications occur to the onion? \nA. It appears unchanged from how it was initially \nB. Something was visible at the back of it \nC. An item appeared on its surface \nD. Something was detected below it
Special ActionsSlight MovementWhat adjustments take place in the shower pouf during the action? \nA. I'm uncertain B. It dropped to the ground C. It was nearly at rest D. It ascended
Multiple-ObjectWhat happens to the two chargers while the action is executed? \nA. They crossed paths B. They impacted each other \nC. They proceeded in the same direction D. It's unclear
CompoundDuring the process of action, what modifications are observed in the plate? \nA. It fell after leaving the hand and did not come back \nB. It was continuously held without any separation \nC. It was detached from the hand but later reattached \nD. Unclear
OthersCamera movementWhat alterations are evident in the flower while the action is carried out? \nA. It appeared to move to the right in view B. It appeared to ascend in view \nC. It appeared to move to the left in view D. I can't determine
Surface InclinationAfter the action is taken, what changes are noticed in the cup? \nA. It was stationary on a tilted surface B. It was stationary on a horizontal surface \nC. Not sure D. It rolled down a sloped surface
", + "bbox": [ + 99, + 87, + 901, + 886 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Table 3. Types of Effect Task", + "bbox": [ + 410, + 897, + 586, + 910 + ], + "page_idx": 12 + }, + { + "type": "table", + "img_path": "images/1c017519b4dd297ab87f91be6c92044ca1ad34f27731bb71dfa53be4193d82a8.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Effect Type (Random: 25.00)LLaVA-NeXT-VideoMiniCPM-V 2.6VideoLLaMA 2.1Qwen2-VLShareGPT4-VideoVideo-CCAMAvg.
Object PropertiesPhysical Properties44.2049.2852.1760.8747.5463.4852.92
Quantity33.3347.6256.1958.1041.9060.9549.68
Object RelationshipsPosition41.0351.2849.2354.3640.3150.3647.76
Distance39.5646.6740.8940.4440.4448.4442.74
Similarity42.8649.5247.6252.3838.1059.0548.25
Action PropertiesIntensity40.2750.6753.3361.3352.5362.1353.38
Completion39.3143.6838.8535.6348.0534.0239.92
Special ActionsSlight Movement47.9243.7541.6772.9235.4254.5849.38
Multiple-Object50.0060.6776.6766.6740.6758.6758.89
Compound48.1544.4451.1152.5935.5653.3347.53
OthersCamera Movement33.3322.2228.8926.6732.2228.8928.70
Surface Inclination28.5749.5258.5760.4841.4351.4348.33
", + "bbox": [ + 107, + 112, + 890, + 420 + ], + "page_idx": 13 + }, + { + "type": "table", + "img_path": "images/62e0cea972697992ac6e19803b67f81b78c8f447611fcd93268e33d68991c90f.jpg", + "table_caption": [ + "Table 4. The results of the Effect task, dissected into more granular categories. Overall, Qwen2-VL achieved the best results, with Video-CCAM closely following. Notably, models exhibit suboptimal performance in distinguishing completed from incomplete actions, indicating a lack of ability to associate actions with the resulting state changes of objects." + ], + "table_footnote": [], + "table_body": "
Input(Random)LLaVA-NeXT-VideoMiniCPM-V 2.6VideoLLaMA 2.1Qwen2-VLShareGPT4VideoVideo-CCAM
3q125.0020.3393.8242.8697.2560.9914.18
q225.0019.2348.9035.7129.1276.1538.35
q333.3346.9680.6671.2771.8288.4166.34
q433.3369.2365.3881.5480.0075.5580.06
q525.0023.8523.0833.0827.6923.6823.36
4q125.0019.7790.6639.8996.6316.788.96
q225.0024.1660.6741.0133.1565.4243.65
q333.3358.7678.5376.8477.4087.2363.63
q433.3374.4279.8593.8095.3587.5094.46
q525.0019.3814.7324.8120.9323.1022.94
5q125.0017.9886.447.4596.050.0047.61
q225.0028.8159.8950.2837.8541.0055.24
q333.3355.6867.6180.1174.4389.6964.83
q433.3382.8184.3894.5396.8891.5596.49
q525.0018.7516.4122.6618.7523.2923.92
", + "bbox": [ + 96, + 531, + 901, + 830 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Table 5. The results of all tasks in Fragment-Level under varying input frame counts. Questions q1 through q5 correspond to Frame Count, Meaning of Order, Frame Comparison, Adjust or Not, and Rearrangement, respectively.", + "bbox": [ + 89, + 844, + 906, + 873 + ], + "page_idx": 13 + } +] \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07745/dc771de3-3dba-4b91-9d66-c6d31ae45ee8_model.json b/data/2025/2504_07xxx/2504.07745/dc771de3-3dba-4b91-9d66-c6d31ae45ee8_model.json new file mode 100644 index 0000000000000000000000000000000000000000..90d2f4c3deabf6a709d09efb89ca5b81354847dc --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/dc771de3-3dba-4b91-9d66-c6d31ae45ee8_model.json @@ -0,0 +1,2373 @@ +[ + [ + { + "type": "aside_text", + "bbox": [ + 0.023, + 0.263, + 0.061, + 0.706 + ], + "angle": 270, + "content": "arXiv:2504.07745v1 [cs.CV] 10 Apr 2025" + }, + { + "type": "title", + "bbox": [ + 0.11, + 0.13, + 0.887, + 0.176 + ], + "angle": 0, + "content": "\\(\\mathbf{SF}^2 \\mathbf{T}\\): Self-supervised Fragment Finetuning of Video-LLMs for Fine-Grained Understanding" + }, + { + "type": "text", + "bbox": [ + 0.102, + 0.203, + 0.892, + 0.244 + ], + "angle": 0, + "content": "Yangliu Hu\\(^{1}\\), Zikai Song\\(^{1\\dagger}\\), Na Feng\\(^{1}\\), Yawei Luo\\(^{2}\\), Junqing Yu\\(^{1}\\), Yi-Ping Phoebe Chen\\(^{3}\\), Wei Yang\\(^{1\\dagger}\\) \n\\(^{1}\\)Huazhong University of Science and Technology \\(^{2}\\)Zhejiang University \\(^{3}\\)La Trobe University" + }, + { + "type": "text", + "bbox": [ + 0.253, + 0.246, + 0.747, + 0.261 + ], + "angle": 0, + "content": "{huyangliu,skyesong,fengna,yjqing,weiyangcs}@hust.edu.cn" + }, + { + "type": "text", + "bbox": [ + 0.286, + 0.264, + 0.707, + 0.278 + ], + "angle": 0, + "content": "yaweiluo@zju.edu.cn phoebe.chen@latrobe.edu.au" + }, + { + "type": "title", + "bbox": [ + 0.248, + 0.313, + 0.327, + 0.329 + ], 
+ "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.36, + 0.486, + 0.737 + ], + "angle": 0, + "content": "Video-based Large Language Models (Video-LLMs) have witnessed substantial advancements in recent years, propelled by advances in multi-modal LLMs. Although these models have demonstrated proficiency in providing the overall description of videos, they struggle with fine-grained understanding, particularly in aspects such as visual dynamics and video details inquiries. To tackle these shortcomings, we find that fine-tuning Video-LLMs on self-supervised fragment tasks greatly improves their fine-grained video understanding abilities. Hence we propose two key contributions: (1) Self-Supervised Fragment Fine-Tuning \\((SF^2 T)\\), a novel effortless fine-tuning method, employs the rich inherent characteristics of videos for training, while unlocking more fine-grained understanding ability of Video-LLMs. Moreover, it relieves researchers from labor-intensive annotations and smartly circumvents the limitations of natural language, which often fails to capture the complex spatiotemporal variations in videos; (2) A novel benchmark dataset, namely FineVidBench, for rigorously assessing Video-LLMs' performance at both the scene and fragment levels, offering a comprehensive evaluation of their capabilities. We assessed multiple models and validated the effectiveness of \\(SF^2 T\\) on them. Experimental results reveal that our approach improves their ability to capture and interpret spatiotemporal details." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.764, + 0.222, + 0.78 + ], 
Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.789, + 0.484, + 0.88 + ], + "angle": 0, + "content": "Large Language Models (LLMs) have showcased significant emergent capabilities, such as in-context learning [19], instruction-following [23], and chain-of-thought reasoning [30], driven by expansive datasets and advanced model architectures. Extending these advancements, Video-LLMs through mechanisms like pooling or query aggregation" + }, + { + "type": "image", + "bbox": [ + 0.537, + 0.313, + 0.885, + 0.543 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.512, + 0.553, + 0.908, + 0.651 + ], + "angle": 0, + "content": "Figure 1. Performance w/ and w/o \\(\\mathbf{SF}^2\\mathbf{T}\\). We evaluated four advanced Video-LLMs w/ and w/o \\(\\mathrm{SF}^2\\mathrm{T}\\) on our proposed FineVidBench with two baselines: (1) Base: performance without any fine-tuning (blue dashed), and (2) Base (SFT): performance with supervised fine-tuning (red dashed). After applying \\(\\mathrm{SF}^2\\mathrm{T}\\), all models showed significant improvements (solid blue and red), underscoring its broad effectiveness." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.685, + 0.907, + 0.791 + ], + "angle": 0, + "content": "across numerous visual tokens, have broadened the scope of LLMs to encompass video information processing [11, 14, 35]. This evolution markedly advances their potential for in-depth real-world comprehension, opening applications in intelligent surveillance, virtual reality, and autonomous driving, further enriching the landscape of video analytics and interpretation." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.795, + 0.909, + 0.903 + ], + "angle": 0, + "content": "Various Video-LLMs, exemplified by GPT4-V, VideoLLaMA 2 [4], MiniCPM-V [34], and Qwen2-VL [28], have been crafted by leading corporations and research institutions, demonstrating proficiency in capturing the overarching content of videos. 
When adapting to new videos and tasks, they predominantly rely on Supervised Fine-Tuning (SFT) [26] or Reinforcement Learning from Hu" + }, + { + "type": "page_footnote", + "bbox": [ + 0.115, + 0.888, + 0.248, + 0.901 + ], + "angle": 0, + "content": "† Corresponding authors" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.482, + 0.318 + ], + "angle": 0, + "content": "man Feedback (RLHF) [39], both of which are heavily contingent upon extensive manual annotation. This dependence poses several key problems: (1) it necessitates substantial human resources, particularly highly trained annotators; (2) the inherent complexity of video content and task demands frequently introduces inconsistencies and subjectivity, rendering the maintenance of high-quality annotations particularly arduous; and (3) subtle temporal variations across video frames are challenging to articulate with precision, often yielding generalized descriptions that constrain the Video-LLMs' potential. Consequently, existing Video-LLMs struggle with fine-grained video understanding tasks, particularly in aspects such as visual dynamics (e.g., motion patterns, object interactions) and video details inquiries (e.g., positional changes, detail variations)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.318, + 0.483, + 0.544 + ], + "angle": 0, + "content": "To address these challenges, we observe that fine-tuning Video-LLMs with self-supervised fragment tasks (by \"fragment\" we mean temporal, frame-level specifications of the video) could improve the model's sensitivity to spatiotemporal scene-level details (related to video contents). Driven by this, we introduce the Self-supervised Fragment Fine-Tuning \\((\\mathrm{SF}^2\\mathrm{T})\\), an effortless fine-tuning strategy for Video-LLMs that helps improve fine-grained video understanding. 
\\(\\mathrm{SF}^2\\mathrm{T}\\) consists of five fragment-level tasks—Counting, Consistency Verification, Localization, Disorder Detection and Rearrangement—that automatically generate labels from various spatiotemporal perspectives. This approach maximizes the use of frame-level information while minimizing reliance on complex human instructions and annotations." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.545, + 0.483, + 0.786 + ], + "angle": 0, + "content": "Moreover, to evaluate the fine-grained visual dynamic perception of Video-LLMs and fully demonstrate the effectiveness of our \\(\\mathrm{SF}^2\\mathrm{T}\\), we present FineVidBench, a novel benchmark. FineVidBench comprises 910 videos and 22,718 question-answer pairs, with videos sourced from diverse public datasets, including Something-Something V2 (SSv2) [6], Moments in Time (MiT) [21], etc. The question-answer pairs are auto-generated in single-choice format, incorporating distractors to increase testing difficulty. We evaluated several notable Video-LLMs developed in recent years, and find that they generally fail to understand the execution sequence of actions and struggle to grasp fine-grained spatiotemporal information. After fine-tuning with \\(\\mathrm{SF}^2\\mathrm{T}\\), however, the Video-LLMs better recognize spatiotemporal details, leading to a holistic and marked improvement in fine-grained understanding." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.8, + 0.233, + 0.815 + ], + "angle": 0, + "content": "2. Related Work" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.825, + 0.482, + 0.902 + ], + "angle": 0, + "content": "Video-LLMs Finetuning Video-LLMs are primarily fine-tuned by adjusting the parameters of small, trainable adapters for task adaptation, without changing the entire model, saving resources and enhancing efficiency. 
The connective adapter (e.g., MLP/Linear Layer [15], Q" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.905, + 0.332 + ], + "angle": 0, + "content": "former [10]) links the Video Embedder and LLM, aligning video embeddings with LLM input tokens, while insertive adapters (e.g., LoRA [8]) are directly integrated into the LLM to modify its behavior. Most Video-LLMs combine both types of adapters and typically use multi-stage finetuning [4, 11, 13, 24, 35]. First, the model learns to establish relationships between images, videos, and text using large-scale multimodal datasets [1, 2, 29, 31]. In the second stage, the model is fine-tuned with an curated instruction-following dataset [11, 17, 18]. Besides, there are full finetuning, which updates all LLM parameters with a lower learning rate [25, 33], and zero-shot models, which transforms the video task into a text task, typically relying on a powerful LLM [32]. However, annotating video data remains a labor-intensive and time-consuming task, particularly for long videos or those involving complex actions." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.334, + 0.905, + 0.59 + ], + "angle": 0, + "content": "Benchmarks on Video-LLMs Currently, many studies [3, 5, 38] focus on evaluating the temporal perception capabilities of Video-LLMs. MVBench [12] designs 20 tasks from temporal and spatial perspectives, and Tempcompass [16] introduces 5 temporal aspects and 4 task formats. VN-Bench [36] decouples video content from the QA pairs by inserting irrelevant images or text \"needles\" into the original video. Moment-10M [22] has constructed a large-scale dataset on temporal localization tasks. However, as illustrated in Table 1, these studies often focus on gathering diverse videos or evaluating the models' performance with long videos, while somewhat neglecting the models' ability to perform fine-grained perception of temporal details. 
To address this gap, FineVidBench breaks videos into multiple sets of frames and generates annotations from diverse spatiotemporal perspectives, introducing novel evaluation methods for fine-grained understanding." + }, + { + "type": "table", + "bbox": [ + 0.517, + 0.6, + 0.905, + 0.753 + ], + "angle": 0, + "content": "
Benchmarks | Video num. | QA num. | Input Change | Temporal Diversity | Fine-Grained Evaluation | Hierarchical Test
Video-MME | 900 | 2700 | XXXX
TempCompass | 410 | 7540 | XX
VN-Bench | - | 1350 | XX
Moment-10M | 64.9k | 10.4M | XXXX
AutoEval-Video | 327 | 327 | XXXX
MVBench | 3641 | 4000 | XXX
MLVU | 1334 | 2593 | XXXX
FineVidBench | 910 | 22,718
" + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.763, + 0.905, + 0.806 + ], + "angle": 0, + "content": "Table 1. Comparison with related benchmarks. Our approach offers significant advantages in input formats, evaluation methods, granularity, and temporal diversity." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.83, + 0.761, + 0.846 + ], + "angle": 0, + "content": "3. FineVidBench Benchmark" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.856, + 0.905, + 0.901 + ], + "angle": 0, + "content": "It is broadly recognized that Video-LLMs struggle with fine-grained video understanding tasks, yet no comprehensive benchmarks exist to thoroughly investigate this issue." + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.485, + 0.152 + ], + "angle": 0, + "content": "To address this gap, we introduce FineVidBench, a multidimensional, fine-grained evaluation framework specifically designed to assess and improve the overall capabilities of Video-LLMs." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.164, + 0.231, + 0.179 + ], + "angle": 0, + "content": "3.1. Construction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.186, + 0.483, + 0.261 + ], + "angle": 0, + "content": "Data collection We selected videos from various public datasets, including SS-v2 [6], MiT [21], and Ego4D [7], with a particular emphasis on temporally-sensitive content, to focus the model on the entire video sequence rather than individual frames." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.262, + 0.483, + 0.444 + ], + "angle": 0, + "content": "Action categorization As shown in Figure 2, we compiled 52 actions, categorizing them into 3 types based on intraclass variance. The distribution varies significantly: \"Distinctive Actions\" \\((39\\%)\\) are easily recognizable, encompassing a total of 36 actions. \"Non-typical Actions\" \\((57\\%)\\) refer to flexible actions with no clear defining characteristics, spanning 14 types. 
The broad diversity and complexity in this category require more extensive video coverage to adequately capture the range of expressions and variations. \"Slight Movements\" \\((4\\%)\\) represent subtle actions, such as \"hold\" and \"show\", which are difficult to detect with the naked eye and constitute a small proportion." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.444, + 0.483, + 0.534 + ], + "angle": 0, + "content": "Data augmentation The original videos were augmented using frame interpolation and skipping techniques for speed transformation, along with a motion-salient area sampling algorithm to capture dynamic motion. This process generated speed-varied versions and multiple sets of keyframes for each video." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.535, + 0.483, + 0.673 + ], + "angle": 0, + "content": "Statistics With our augmentation strategy, FineVidBench includes 910 videos, 1,820 speed-variant videos, and 2,670 sets of keyframes enriched with dynamic visual information. Building on this, we generated 22,718 QA pairs from the video content through a combination of automated processes and manual review. The quality assurance process involved rigorous cross-verification, where reviewers checked each QA pair for accuracy and contextual relevance, making corrections to ensure high quality." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.682, + 0.336, + 0.699 + ], + "angle": 0, + "content": "3.2. Benchmarking Dimensions" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.705, + 0.483, + 0.903 + ], + "angle": 0, + "content": "As shown in Figure 3, FineVidBench encompasses both scene-level and fragment-level evaluations. The scene-level evaluation assesses both original and speed-adjusted videos across three dimensions: (1) Action, which evaluates the model's holistic understanding of video content. 
To increase difficulty, \"Visual Synonyms\" are added as distractors, requiring Video-LLMs to distinguish visually similar actions with subtle differences, a challenge common in real-world scenarios. (2) Effect, which focuses on the model's comprehension of the visual changes resulting from actions. This understanding is essential for revealing object properties and interpreting complex dynamic scenes, and could significantly enhance the reasoning capabilities of Video-" + }, + { + "type": "image", + "bbox": [ + 0.548, + 0.096, + 0.876, + 0.355 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.368, + 0.908, + 0.44 + ], + "angle": 0, + "content": "Figure 2. We show the action semantics and their respective proportions in FineVidBench. Distinctive Action: easily recognizable actions. Non-typical Action: flexible actions with no clear characteristics, like \"put\" and \"move.\" Slight Movement: subtle actions, such as \"hold\" and \"show,\" difficult to detect with the naked eye." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.463, + 0.907, + 0.553 + ], + "angle": 0, + "content": "LLMs and LLM-aided agents. (3) Speed, which tests the model's sensitivity to changes in video speed and its capability to maintain consistent understanding across varying speeds, with slow motion revealing hidden details and fast motion obscuring them. This capability is crucial for optimizing the model's performance across diverse scenarios." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.554, + 0.909, + 0.827 + ], + "angle": 0, + "content": "For fragment-level evaluation, we designed a structured evaluation format for video dynamic keyframes, employing a step-by-step inquiry framework: (1) Frame Count: Models are queried on the number of frames in sequences using dynamically refined keyframes to assess counting accuracy. 
(2) Meaning of Order: Understanding of sequence order is tested by asking about the first or last frames in which the targets appear, or the frames in which they are present, e.g., "At which frame does the target object first appear?". (3) Frame Comparison: Two frames are randomly selected from the sequence for visual comparison, with differences varying in size but generally staying within human visual comfort limits. (4) Adjust-or-Not and Rearrangement: These two tasks involve a shuffled sequence of keyframes, and the model is asked to determine whether the order needs adjustment and, if so, how to correct it. They evaluate the model's ability to understand and restore the video's temporal sequence." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.834, + 0.704, + 0.849 + ], + "angle": 0, + "content": "3.3. Benchmark Results" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.856, + 0.908, + 0.903 + ], + "angle": 0, + "content": "We evaluated six of the most advanced open-source models: LLaVA-NeXT-Video [9], MiniCPM-V 2.6 [34], VideoLLaMA 2.1 [4], Qwen2-VL [28], ShareGPT4Video [2] and" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.115, + 0.089, + 0.427, + 0.304 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.437, + 0.089, + 0.651, + 0.105 + ], + "angle": 0, + "content": "※ Fragment-Level Tests ※" + }, + { + "type": "text", + "bbox": [ + 0.437, + 0.113, + 0.582, + 0.125 + ], + "angle": 0, + "content": "① How many frames?" + }, + { + "type": "text", + "bbox": [ + 0.437, + 0.129, + 0.564, + 0.139 + ], + "angle": 0, + "content": "A. 2 B. 3 C. 4 D. 5" + }, + { + "type": "text", + "bbox": [ + 0.437, + 0.146, + 0.64, + 0.159 + ], + "angle": 0, + "content": "(2) Which frames show the cup?" + }, + { + "type": "text", + "bbox": [ + 0.437, + 0.163, + 0.622, + 0.175 + ], + "angle": 0, + "content": "A. 3,4 B. 2,3,4 C. 2,3 D. 
1,2,3" + }, + { + "type": "text", + "bbox": [ + 0.437, + 0.18, + 0.645, + 0.192 + ], + "angle": 0, + "content": "(3) Are the two frames the same?" + }, + { + "type": "text", + "bbox": [ + 0.437, + 0.195, + 0.613, + 0.206 + ], + "angle": 0, + "content": "A. Yes, they are exactly the same" + }, + { + "type": "text", + "bbox": [ + 0.437, + 0.209, + 0.572, + 0.221 + ], + "angle": 0, + "content": "B. No, they are different" + }, + { + "type": "text", + "bbox": [ + 0.437, + 0.226, + 0.596, + 0.239 + ], + "angle": 0, + "content": "④ Should I adjust them?" + }, + { + "type": "text", + "bbox": [ + 0.437, + 0.241, + 0.602, + 0.252 + ], + "angle": 0, + "content": "A. Yes, they need adjustment" + }, + { + "type": "text", + "bbox": [ + 0.437, + 0.254, + 0.619, + 0.264 + ], + "angle": 0, + "content": "B. No, they are in the correct order" + }, + { + "type": "text", + "bbox": [ + 0.437, + 0.269, + 0.657, + 0.282 + ], + "angle": 0, + "content": "⑤ Which shows the correct order?" + }, + { + "type": "text", + "bbox": [ + 0.437, + 0.285, + 0.627, + 0.295 + ], + "angle": 0, + "content": "A. 1234 B. 2314 C. 3142 D. 4321" + }, + { + "type": "list", + "bbox": [ + 0.437, + 0.113, + 0.657, + 0.295 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.668, + 0.115, + 0.77, + 0.296 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.785, + 0.118, + 0.885, + 0.295 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.314, + 0.907, + 0.371 + ], + "angle": 0, + "content": "Figure 3. FineVidBench evaluates videos augmented with speed variations and fragments. Scene-level tests include the following: Action: Tests recognition accuracy amidst distractors like \"Visual Synonyms\". Effect: Assesses the model's ability to identify pre- and post-action changes. Speed: Measures the model's sensitivity to changes in video speed. 
Fragment-level tests, employing a step-by-step inquiry framework, focus on challenges such as Frame Count, Meaning of Order, Frame Comparison, Adjust-or-Not and Rearrangement." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.397, + 0.484, + 0.458 + ], + "angle": 0, + "content": "Video-CCAM [27], each employing different architectures and training strategies. Table 3 summarizes the results across the eight tasks. We discuss the results at the scene level and the fragment level." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.47, + 0.334, + 0.485 + ], + "angle": 0, + "content": "- Scene-level Results and Analysis" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.49, + 0.484, + 0.611 + ], + "angle": 0, + "content": "Action The scores for this task varied significantly, with models trained on relevant video data—such as Video-CCAM, Qwen2-VL, and VideoLLaMA 2.1—achieving notably higher performance. However, as shown on the left side of Table 2, interference from "Visual Synonyms" prevented these models from achieving their full potential, resulting in declines of varying degrees and indicating difficulties in distinguishing visually similar actions." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.613, + 0.484, + 0.764 + ], + "angle": 0, + "content": "Effect All models exhibited average performance on this task, indicating a superficial understanding of aspects such as object attributes, object relationships, and action properties. This task tests the model's ability to grasp how actions affect objects, focusing on causal relationships and temporal reasoning—particularly for actions like "push" and "pull", which share similar execution flows. The model must distinguish them based on dynamic effects, such as changes in direction and speed, but most models perform moderately in this regard." 
+ }, + { + "type": "text", + "bbox": [ + 0.09, + 0.765, + 0.484, + 0.902 + ], + "angle": 0, + "content": "Speed The results show that all models are insensitive to speed variations, likely because they were not adequately exposed to speed changes during training. Figure 4 shows that models are more sensitive to slow motion than fast playback, and struggled with identifying \"normal speed\" and \"no speed\", except for VideoLLaMA 2.1. This may be due to the loss of coherence in fast-moving video content, while slow-motion videos highlight more distinct details, aiding the model in making accurate judgments." + }, + { + "type": "image", + "bbox": [ + 0.546, + 0.397, + 0.877, + 0.584 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.594, + 0.907, + 0.637 + ], + "angle": 0, + "content": "Figure 4. Accuracy across different video speeds. All models are more sensitive to slow-speed videos and struggle to understand \"normal speed\" and \"no speed\", except for VideoLLaMA 2.1." + }, + { + "type": "table", + "bbox": [ + 0.515, + 0.647, + 0.918, + 0.802 + ], + "angle": 0, + "content": "
Video-LLMs | Action | Frame Number
 | w/o VS | w/ VS | Avg. | 3 | 4 | 5
LLaVA-NeXT-Video | 37.31 | 35.04 | 19.37 | 20.33 | 19.77 | 17.98
MiniCPM-V 2.6 | 43.37 | 40.15 | 90.32 | 93.82 | 90.66 | 86.44
Video-LLaMA 2.1 | 63.26 | 53.98 | 30.17 | 42.86 | 39.89 | 7.45
Qwen2-VL | 68.18 | 56.62 | 96.65 | 97.25 | 96.63 | 96.05
ShareGPT4Video | 46.90 | 30.84 | 26.33 | 60.99 | 16.78 | 0.00
Video-CCAM | 73.10 | 60.23 | 23.45 | 14.18 | 8.96 | 47.61
" + }, + { + "type": "table_caption", + "bbox": [ + 0.512, + 0.812, + 0.907, + 0.895 + ], + "angle": 0, + "content": "Table 2. Left: Accuracy of the Action task with or without \"Visual Synonyms\". It is obvious that the \"Visual Synonyms\" have significantly impacted the model's judgment. Right: Accuracy of the counting task across different frame counts. Except for Video-CCAM, all other models exhibited a decline in performance as the number of frames increased." + } + ], + [ + { + "type": "table", + "bbox": [ + 0.107, + 0.089, + 0.891, + 0.304 + ], + "angle": 0, + "content": "
Video-LLMs | Params. | Scene-Level | Fragment-Level | S-Avg. | FG-Avg. | A-Avg.
 | | Action | Effect | Speed | FCnt | MoO | FCmp | AoN | Rearr
(Random) | - | 25.00 | 25.00 | 25.00 | 25.00 | 25.00 | 33.33 | 33.33 | 25.00 | 25.00 | 28.33 | 27.08
LLaVA-NeXT-Video | 7B | 37.31 | 42.67 | 22.35 | 19.37 | 24.02 | 53.75 | 75.45 | 20.67 | 34.11 | 38.65 | 36.95
MiniCPM-V 2.6 | 8B | 43.37 | 52.56 | 19.13 | 90.32 | 56.42 | 75.66 | 76.49 | 18.09 | 38.35 | 63.40 | 54.01
Video-LLaMA 2.1 | 7B | 63.26 | 50.92 | 19.89 | 30.17 | 42.27 | 76.01 | 89.92 | 26.87 | 44.69 | 53.05 | 49.91
Qwen2-VL | 7B | 68.18 | 57.14 | 24.62 | 96.65 | 33.33 | 74.53 | 90.70 | 22.48 | 49.98 | 63.54 | 58.45
ShareGPT4Video | 8B | 46.90 | 43.88 | 31.76 | 26.33 | 61.05 | 88.44 | 84.80 | 23.36 | 40.85 | 57.11 | 50.82
Video-CCAM | 9B | 73.10 | 55.90 | 31.65 | 23.45 | 45.66 | 64.95 | 90.27 | 22.72 | 53.55 | 48.47 | 50.96
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.314, + 0.908, + 0.358 + ], + "angle": 0, + "content": "Table 3. The overall performances of notable Video-LLMs on FineVidBench. FCnt: Frame Count. MoO: Meaning of Order. Fcmp: Frame Comparison. AoN: Adjust or Not. Rearr: Rearrangement. S-Avg.: the average performance of scene-level tasks; FG-Avg.: the average performance of fragment-level tasks. A-Avg.: the average performance of all tasks." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.383, + 0.365, + 0.399 + ], + "angle": 0, + "content": "- Fragment-level Results and Analysis" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.403, + 0.486, + 0.692 + ], + "angle": 0, + "content": "(1) Frame-count accuracy varied significantly across models, with the lower-performing models likely lacking targeted training. The trend shown in the right side of Table 2, where accuracy decreases as frame count increases, highlights the models' insufficient temporal reasoning on longer sequences. (2) ShareGPT4Video and MiniCPM-V 2.6 showed better comprehension in the Meaning-of-Order task, while other models lagged, suggesting a lack of explicit focus on \"order\". (3) Most models excelled in frame comparison due to image-text alignment training. ShareGPT4Video achieved the best performance, owing to its Differential Sliding-Window Captioning (DiffSW) strategy, which emphasizes capturing the changes between frames when generating video descriptions. This also improved its Meaning-of-Order performance. (4) In the sorting task, models generally succeeded in the \"Adjust or Not\" response but performed poorly in the more complex \"Rearrangement\" task, indicating they can detect, but not correct, sequence errors." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.708, + 0.429, + 0.727 + ], + "angle": 0, + "content": "4. 
Self-supervised Fragment Fine-Tuning" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.735, + 0.486, + 0.903 + ], + "angle": 0, + "content": "The above benchmark results show that existing Video-LLMs generally fail to tackle fine-grained video understanding tasks. Videos often contain subtle, complex changes that natural language alone fails to fully capture. The core component of Video-LLMs, the LLM, as a generalized pattern recognizer, offers a promising solution. LLMs have the potential to detect and interpret intricate spatiotemporal dynamics that were previously difficult to represent. Given that these changes cannot be directly annotated, using self-supervised learning naturally becomes the solution, bypassing the bottleneck of manual annotation and significantly re" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.383, + 0.909, + 0.492 + ], + "angle": 0, + "content": "ducing labeling costs. Given these factors, we propose \\(\\mathrm{SF^2T}\\) to fine-tune Video-LLMs. We do not expect \\(\\mathrm{SF^2T}\\) to replace supervised fine-tuning; instead, it is an effortless complement to SFT. Comparing \\(\\mathrm{SF^2T}\\) with SFT, they primarily differ in data construction and content focus level, with each method aligned with distinct training objectives as shown in Figure 5." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.499, + 0.633, + 0.515 + ], + "angle": 0, + "content": "4.1. SFT Tasks" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.521, + 0.907, + 0.551 + ], + "angle": 0, + "content": "We first review the common SFT tasks to set a baseline for comparing our \\(\\mathrm{SF^2T}\\)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.552, + 0.907, + 0.642 + ], + "angle": 0, + "content": "General QA on Video Content This method focuses on understanding the main events and context of a video by directly asking questions about its content. 
While effective for grasping the video's key moments, it lacks finer spatiotemporal details and requires significant human effort to create standardized but constrained answers." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.643, + 0.909, + 0.795 + ], + "angle": 0, + "content": "Frame Description Integration This method typically samples video frames evenly, generates detailed descriptions for each, and integrates them into a cohesive but lengthy summary. While it enhances the model's understanding of continuity and micro-dynamics, it often proves incapable of capturing complex or subtle details that are beyond natural language's scope. Moreover, although frame descriptions can be generated using powerful multi-modal LLMs like GPT-4o, significant human effort is still required to review the quality of the generated responses." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.803, + 0.78, + 0.82 + ], + "angle": 0, + "content": "4.2. Fragment-level Tasks of \\(\\mathbf{SF}^2\\mathbf{T}\\)" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.826, + 0.909, + 0.903 + ], + "angle": 0, + "content": "SFT tasks require manual annotations, and even automated annotation is labor-intensive and error-prone. To address this, we introduce \\(\\mathrm{SF}^2\\mathrm{T}\\), which automatically generates accurate fragment-level labels. \\(\\mathrm{SF}^2\\mathrm{T}\\) comprises five tasks—Counting, Consistency Verification, Localization, Disorder Detection" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.1, + 0.095, + 0.478, + 0.15 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.123, + 0.162, + 0.384, + 0.176 + ], + "angle": 0, + "content": "What is the main content of the video?" + }, + { + "type": "text", + "bbox": [ + 0.148, + 0.181, + 0.424, + 0.213 + ], + "angle": 0, + "content": "The video shows a person bowling, including their four-step approach, the smooth release of the ball down the lane, its path toward the pins, and..." 
+ }, + { + "type": "image", + "bbox": [ + 0.101, + 0.232, + 0.476, + 0.296 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.123, + 0.307, + 0.386, + 0.32 + ], + "angle": 0, + "content": "What is the main content of the video?" + }, + { + "type": "text", + "bbox": [ + 0.151, + 0.325, + 0.425, + 0.378 + ], + "angle": 0, + "content": "The video shows a person bowling: (Frame 1) The scene shows a bowling alley... (Frame 2) The player swing the bowling ball... (Frame 4) The bowling ball approaches the pins... (Frame 6) The bowling ball strikes the pins... (Frame 8) All the pins are down." + }, + { + "type": "image", + "bbox": [ + 0.43, + 0.32, + 0.449, + 0.337 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.215, + 0.395, + 0.361, + 0.408 + ], + "angle": 0, + "content": "Scene-Level Tasks" + }, + { + "type": "image", + "bbox": [ + 0.128, + 0.42, + 0.445, + 0.493 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.124, + 0.507, + 0.144, + 0.523 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.152, + 0.511, + 0.266, + 0.524 + ], + "angle": 0, + "content": "How many frames?" + }, + { + "type": "text", + "bbox": [ + 0.152, + 0.531, + 0.264, + 0.542 + ], + "angle": 0, + "content": "On which frames?" + }, + { + "type": "text", + "bbox": [ + 0.152, + 0.55, + 0.242, + 0.561 + ], + "angle": 0, + "content": "Same frames?" + }, + { + "type": "text", + "bbox": [ + 0.152, + 0.581, + 0.237, + 0.593 + ], + "angle": 0, + "content": "Adjust or not?" + }, + { + "type": "text", + "bbox": [ + 0.152, + 0.6, + 0.231, + 0.611 + ], + "angle": 0, + "content": "Rearrange it." 
+ }, + { + "type": "image", + "bbox": [ + 0.286, + 0.508, + 0.371, + 0.524 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.289, + 0.53, + 0.367, + 0.542 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.29, + 0.549, + 0.367, + 0.561 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.289, + 0.569, + 0.367, + 0.596 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.29, + 0.602, + 0.367, + 0.619 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.411, + 0.51, + 0.449, + 0.524 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.396, + 0.531, + 0.421, + 0.542 + ], + "angle": 0, + "content": "2nd" + }, + { + "type": "text", + "bbox": [ + 0.401, + 0.551, + 0.421, + 0.561 + ], + "angle": 0, + "content": "No" + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.581, + 0.421, + 0.592 + ], + "angle": 0, + "content": "Yes" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.6, + 0.421, + 0.61 + ], + "angle": 0, + "content": "3412" + }, + { + "type": "title", + "bbox": [ + 0.203, + 0.64, + 0.373, + 0.653 + ], + "angle": 0, + "content": "Fragment-Level Tasks" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.664, + 0.483, + 0.761 + ], + "angle": 0, + "content": "Figure 5. Comparison between \\(\\mathrm{SF}^2\\mathrm{T}\\) and SFT. SFT depends on manual and model-driven design to generate QA pairs for scene-level video understanding; \\(\\mathrm{SF}^2\\mathrm{T}\\), in contrast, automatically constructs training data based on pre-defined rules that cover various temporal and spatial aspects of the video. \\(\\mathrm{SF}^2\\mathrm{T}\\) enables the model to focus on fine-grained content analysis, offering insights that supervised labels cannot achieve." 
+ }, + { + "type": "text", + "bbox": [ + 0.09, + 0.795, + 0.483, + 0.901 + ], + "angle": 0, + "content": "and Rearrangement—designed to train the model to rearrange a set of out-of-order frames into their original sequence. This is a robust indicator of a model's mastery over the visual dynamics of an action, requiring the model to detect subtle frame changes and understand the overall coherence and temporal trends. Mastery of these tasks enables the model to recognize frames and their temporal" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.905, + 0.287 + ], + "angle": 0, + "content": "relationships, enhancing its ability to predict and reconstruct action sequences and improving performance on more complex video tasks. Our method first extracts multiple sets of dynamic keyframes from each video. These fragments capture the key dynamic information from multiple temporal perspectives, offering a more efficient representation of redundant video data. It then applies pseudo-labeling, distinguishing it from traditional video-level labeling. By designing proxy tasks that leverage intrinsic information rather than predefined prior knowledge, it smartly circumvents the annotation bottleneck, enabling a deeper temporal understanding and offering insights that traditional video-level labeling cannot achieve." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.29, + 0.905, + 0.395 + ], + "angle": 0, + "content": "Counting We input N frames into the Video-LLM and ask it to count them. Although this task seems straightforward, it proves challenging for current Video-LLMs, particularly as the number of frames increases, revealing a decline in accuracy. The model's inability to perform basic quantitative tasks points to a broader limitation in understanding the overall sequence integrity." 
+ }, + { + "type": "text", + "bbox": [ + 0.512, + 0.397, + 0.905, + 0.502 + ], + "angle": 0, + "content": "Consistency Verification Video-LLMs are tasked with comparing two frames sampled from the same video, which may show subtle differences. This task sharpens the model's sensitivity to visual details by encouraging a thorough analysis and comparison of the images, countering its tendency to focus on primary subjects while neglecting the background and other subtle features." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.505, + 0.906, + 0.626 + ], + "angle": 0, + "content": "Localization Video-LLMs must accurately locate a specified target (from video metadata) within a sequence of frames, identifying the frames in which it appears, disappears, or persists. This natural human ability poses a significant challenge for these models, as they often struggle to perceive sequential relationships between frames and face additional obstacles, such as occlusion, interference from similar objects, lighting variations, and memory limitations." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.628, + 0.906, + 0.794 + ], + "angle": 0, + "content": "Disorder Detection and Rearrangement Video-LLMs must determine whether and how to adjust the order of a given frame sequence. When frames are randomized, the loss of spatiotemporal coherence and logical continuity makes it exceptionally challenging to reconstruct their original sequence, especially as interactions within frames become more complex [20]. This task is evaluated in two ways: the yes/no task tests the model's sensitivity to temporal consistency, while the sorting task, which leverages capabilities from the other four tasks, requires advanced reasoning and adjustments." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.814, + 0.645, + 0.83 + ], + "angle": 0, + "content": "5. 
Experiments" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.84, + 0.905, + 0.901 + ], + "angle": 0, + "content": "In this section, we fine-tuned four of the most advanced open-source Video-LLMs using the \\(\\mathrm{SF}^2\\mathrm{T}\\) method to evaluate its effectiveness, alongside ablation studies and interpretability analyses to explore the underlying mechanisms." + } + ], + [ + { + "type": "table", + "bbox": [ + 0.134, + 0.089, + 0.868, + 0.24 + ], + "angle": 0, + "content": "
Methods | LLaVA-NEXT-Video | MiniCPM-V 2.6 | VideoLLaMA 2.1 | Qwen2-VL
 | Action | Effect | Speed | Action | Effect | Speed | Action | Effect | Speed | Action | Effect | Speed
Base | 37.31 | 42.67 | 22.35 | 43.37 | 52.56 | 19.13 | 63.26 | 50.92 | 19.89 | 68.18 | 57.14 | 24.62
Base+SF2T | 48.67 | 43.77 | 24.83 | 65.91 | 60.62 | 28.60 | 67.42 | 57.33 | 31.63 | 73.86 | 63.37 | 31.92
Base(SFT) | 62.69 | 44.63 | 22.35 | 77.65 | 75.09 | 70.83 | 77.65 | 65.94 | 29.73 | 78.60 | 66.30 | 30.87
Base(SFT)+SF2T | 63.07 | 45.24 | 32.01 | 81.63 | 76.92 | 86.74 | 79.73 | 68.68 | 31.82 | 81.25 | 73.26 | 32.38
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.25, + 0.908, + 0.307 + ], + "angle": 0, + "content": "Table 4. Performance on FineVidBench. We tested on two baselines: (1) Base: Results without any fine-tuning. (2) Base(SFT): Results after fine-tuning in supervised way. After \\(\\mathrm{SF}^2\\mathrm{T}\\), all models improved in all three tasks, highlighting its broad effectiveness and the value of fragment-level tasks in enhancing scene-level comprehension. Notably, \\(\\mathrm{SF}^2\\mathrm{T}\\) outperformed SFT in the Speed task (except MiniCPM-V 2.6), highlighting the key role of fine-grained temporal understanding in distinguishing video speeds." + }, + { + "type": "table", + "bbox": [ + 0.104, + 0.319, + 0.475, + 0.507 + ], + "angle": 0, + "content": "
Methods | LLaVA-NeXT-Video | MiniCPM-V 2.6 | VideoLLaMA 2.1 | Qwen2-VL
MVBench
Base | 36.84 | 40.23 | 54.18 | 55.97
Base+SF2T | 42.92 | 56.02 | 57.97 | 63.76
Video-MME (no subtitle)
Base | 29.76 | 43.17 | 49.02 | 43.77
Base+SF2T | 34.84 | 53.19 | 51.88 | 53.60
MLVU
Base | 36.32 | 41.58 | 52.32 | 42.81
Base+SF2T | 41.91 | 55.32 | 56.11 | 54.67
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.516, + 0.486, + 0.56 + ], + "angle": 0, + "content": "Table 5. Performance on public benchmarks. \\(\\mathrm{SF}^2\\mathrm{T}\\) consistently enhances performance across all three benchmarks, reaffirming its effectiveness as a spatiotemporal enhancer." + }, + { + "type": "table", + "bbox": [ + 0.542, + 0.324, + 0.878, + 0.376 + ], + "angle": 0, + "content": "
Methods | random | uniform | keyframe | motion-salient
SF2T | 70.31 | 71.67 | 72.11 | 73.86
" + }, + { + "type": "table_caption", + "bbox": [ + 0.51, + 0.387, + 0.907, + 0.443 + ], + "angle": 0, + "content": "Table 6. Impact of sampling. As shown, motion-salient area sampling outperforms others by better capturing motion fluidity and temporal details, while the other methods fail to fully utilize their potential, leading to suboptimal performance." + }, + { + "type": "table", + "bbox": [ + 0.606, + 0.45, + 0.816, + 0.502 + ], + "angle": 0, + "content": "
Methods | long | short | random
SF2T | 69.38 | 71.40 | 73.86
" + }, + { + "type": "table_caption", + "bbox": [ + 0.51, + 0.513, + 0.907, + 0.556 + ], + "angle": 0, + "content": "Table 7. Impact of temporal span. Both long- and short-range temporal modeling reduced \\(\\mathrm{SF}^2\\mathrm{T}\\) 's performance, emphasizing the importance of multi-scale temporal modeling." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.585, + 0.308, + 0.601 + ], + "angle": 0, + "content": "5.1. Implementation Details" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.61, + 0.484, + 0.746 + ], + "angle": 0, + "content": "To ensure fairness, experiments were conducted on LoRA-compatible models, including LLaVA-NeXT-Video[9], MiniCPM-V 2.6[34], VideoLLaMA 2.1[4] and Qwen2-VL[28], using their default or recommended settings, with all models trained for one epoch. All experiments were performed under identical hardware conditions, utilizing NVIDIA A100 40GB GPU for computation. It should be emphasized that our goal is to validate the effectiveness of \\(\\mathrm{SF}^2\\mathrm{T}\\), not to optimize models for maximum performance." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.75, + 0.484, + 0.902 + ], + "angle": 0, + "content": "We randomly sampled videos from SSv2 and MiT for training, ensuring no overlap with the FineVidBench dataset. MGSampler [37] was used to extract N sets of M-frame sequences from each video, capturing dynamic changes while preserving overall characteristics. M is chosen based on the video's characteristics to capture content flow, while N is determined by content complexity, with more complex content requiring a larger N to cover more temporal perspectives. In this study, we set \\( \\mathrm{N} = 3 \\) and M between 3 and 5, though these values may vary for other" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.585, + 0.907, + 0.677 + ], + "angle": 0, + "content": "datasets. We then generated QA pairs for each frame sequence based on the five tasks defined in \\(\\mathrm{SF}^2\\mathrm{T}\\) for training. 
Evaluations were performed on FineVidBench's scene-level tasks, including Action, Effect, and Speed. To compare with traditional SFT, we also generated and manually reviewed QA pairs for these videos in a supervised setting." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.695, + 0.655, + 0.712 + ], + "angle": 0, + "content": "5.2. Comparisons" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.72, + 0.909, + 0.903 + ], + "angle": 0, + "content": "Table 4 summarizes the results of the scene-level tasks. After \\(\\mathrm{SF}^2\\mathrm{T}\\) training, all models showed significant improvement, emphasizing that fragment-level tasks can notably enhance scene-level comprehension. Integrating \\(\\mathrm{SF}^2\\mathrm{T}\\) with SFT also leads to performance gains, demonstrating that fragment-level training positively impacts SFT and enhances its effectiveness. Surprisingly, in the Speed task, many base models outperformed SFT after applying \\(\\mathrm{SF}^2\\mathrm{T}\\), highlighting the importance of fine-grained temporal understanding in distinguishing video speeds. This improvement likely stems from \\(\\mathrm{SF}^2\\mathrm{T}\\)'s ability to enhance the model's sensitivity to temporal cues, such as the loss or enhancement of" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.482, + 0.198 + ], + "angle": 0, + "content": "information during acceleration or deceleration, as well as content coherence—all crucial for speed judgment. As expected, \\(\\mathrm{SF}^2\\mathrm{T}\\) currently lags behind SFT, since its training objective is not fully aligned with scene-level tasks. However, we do not expect \\(\\mathrm{SF}^2\\mathrm{T}\\) to replace supervised fine-tuning; rather, our experiments suggest that it can serve as an effortless and effective complement to SFT."
+ }, + { + "type": "text", + "bbox": [ + 0.09, + 0.199, + 0.482, + 0.29 + ], + "angle": 0, + "content": "In addition to FineVidBench, we evaluated \\(\\mathrm{SF}^2\\mathrm{T}\\) on three public video understanding benchmarks (Table 5). The results demonstrate consistent improvements across various video tasks, validating \\(\\mathrm{SF}^2\\mathrm{T}\\) as an effective spatiotemporal enhancer for a wide range of video understanding tasks. All models were tested with an 8-frame input." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.302, + 0.423, + 0.318 + ], + "angle": 0, + "content": "5.3. Ablation and Interpretability Analyses" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.325, + 0.483, + 0.581 + ], + "angle": 0, + "content": "We evaluated the impact of frame sampling strategies on \\(\\mathrm{SF}^2\\mathrm{T}\\), as each method provides a unique \"temporal information perspective\" that influencing video understanding performance. As shown in Table 6, we assessed four strategies on Qwen2-VL in the Action task: random, uniform interval, keyframe, and motion-salient area sampling [37]. Motion-salient area sampling performed best, likely due to its ability to capture continuous motion dynamics, thereby enhancing the model's understanding of action fluidity and temporal detail. In comparison, the other methods had limitations: keyframe sampling misses intermediate action phases, fixed-interval sampling may overlook critical moments, and random sampling lacks temporal consistency. Notably, different datasets may favor different strategies. For example, some datasets may perform better with uniform interval sampling, or their motion features may align better with the model's specific capabilities." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.583, + 0.483, + 0.764 + ], + "angle": 0, + "content": "We examined the effects of long- and short-range temporal modeling on \\(\\mathrm{SF}^2\\mathrm{T}\\). 
In the Consistency Verification task, we constrained the random selection of frame pairs to adjacent frames for local continuity or non-adjacent frames to capture long-range dependencies. As shown in Table 7, both settings decreased \\(\\mathrm{SF}^2\\mathrm{T}\\)'s performance on the Action task of Qwen2-VL, indicating that an overemphasis on either long- or short-range information leads to temporal imbalance and incomplete dynamics. This underscores the importance of combining both approaches to leverage their broader temporal span and frame variations for a more comprehensive feature representation." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.765, + 0.483, + 0.901 + ], + "angle": 0, + "content": "We analyzed the attention map of Qwen2-VL on the Action task, particularly in cases where the model's predictions were corrected after \\(\\mathrm{SF}^2\\mathrm{T}\\). As shown in Figure 6, we found that \\(\\mathrm{SF}^2\\mathrm{T}\\) enhances the model's ability to capture fine-grained spatial changes and temporal dynamics. (1) Spatial Aspects. After \\(\\mathrm{SF}^2\\mathrm{T}\\), the model shows increased attention to action execution areas, particularly the hands and objects they interact with. It shows better sensitivity to small targets, likely due to the Consistency Verification" + }, + { + "type": "image", + "bbox": [ + 0.536, + 0.089, + 0.885, + 0.451 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.513, + 0.461, + 0.907, + 0.557 + ], + "angle": 0, + "content": "Figure 6. Two exemplary visualizations of the attention map on Qwen2-VL. For each example: top - Original frames; middle - Base (SFT); bottom - \\(\\mathrm{SF^2T}\\) applied. As shown by the red boxes, after applying \\(\\mathrm{SF^2T}\\), the model better focuses on action execution areas and interacting objects. 
The \\(\\mathrm{SF^2T}\\) fine-tuned model has the ability to predict the direction of motion, as seen in the trajectories of the red bottle and Cheerios." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.597, + 0.907, + 0.704 + ], + "angle": 0, + "content": "task, which enhances spatial perception by refining sensitivity to subtle image differences. (2) Temporal Aspects. After \\(\\mathrm{SF}^2\\mathrm{T}\\), we observed that the model can predict object movement trajectories in certain actions, indicating an advanced level of temporal understanding. This ability likely stems from the sorting task, which strengthens the model's comprehension of action flows and movement patterns." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.734, + 0.634, + 0.75 + ], + "angle": 0, + "content": "6. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.765, + 0.907, + 0.901 + ], + "angle": 0, + "content": "In this work, we propose \\(\\mathrm{SF}^2\\mathrm{T}\\) to overcome the limitations of Video-LLMs in fine-grained video understanding. \\(\\mathrm{SF}^2\\mathrm{T}\\) is an innovative fine-tuning method that eliminates the need for labor-intensive annotations and effectively bypasses the constraints of natural language descriptions. Additionally, we introduce FineVidBench, a benchmark for evaluating Video-LLMs at both scene and fragment levels. In the future, we plan to expand our dataset with more videos and tasks to increase its impact." + } + ], + [ + { + "type": "title", + "bbox": [ + 0.092, + 0.091, + 0.251, + 0.108 + ], + "angle": 0, + "content": "Acknowledgments" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.115, + 0.486, + 0.298 + ], + "angle": 0, + "content": "This work is supported by the National Key Research and Development Program of China (No. 2020YBF2901202), National Natural Science Foundation of China (NSFC No. 62272184 and No. 
62402189), the China Postdoctoral Science Foundation under Grant Number GZC20230894, the China Postdoctoral Science Foundation (Certificate Number: 2024M751012), the Postdoctor Project of Hubei Province under Grant Number 2024HBBHCXB014, and the \"Pioneer\" and \"Leading Goose\" R&D Program of Zhejiang (No. 2024C01161). The computation is completed in the HPC Platform of Huazhong University of Science and Technology." + }, + { + "type": "title", + "bbox": [ + 0.093, + 0.311, + 0.188, + 0.327 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.336, + 0.484, + 0.363 + ], + "angle": 0, + "content": "[1] FirstName Alpher. Frobnication. IEEE TPAMI, 12(1):234-778, 2002. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.366, + 0.484, + 0.434 + ], + "angle": 0, + "content": "[2] Lin Chen, Xilin Wei, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Bin Lin, Zhenyu Tang, et al. Sharegpt4video: Improving video understanding and generation with better captions. arXiv preprint arXiv:2406.04325, 2024. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.436, + 0.484, + 0.492 + ], + "angle": 0, + "content": "[3] Xiuyuan Chen, Yuan Lin, Yuchen Zhang, and Weiran Huang. Autoeval-video: An automatic benchmark for assessing large vision language models in open-ended video question answering. arXiv preprint arXiv:2311.14906, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.493, + 0.484, + 0.561 + ], + "angle": 0, + "content": "[4] Zesen Cheng, Sicong Leng, Hang Zhang, Yifei Xin, Xin Li, Guanzheng Chen, Yongxin Zhu, Wenqi Zhang, Ziyang Luo, Deli Zhao, et al. Videollama 2: Advancing spatial-temporal modeling and audio understanding in video-llms. arXiv preprint arXiv:2406.07476, 2024. 
1, 2, 3, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.564, + 0.484, + 0.632 + ], + "angle": 0, + "content": "[5] Chaoyou Fu, Yuhan Dai, Yondong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, et al. Video-mme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis. arXiv preprint arXiv:2405.21075, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.634, + 0.484, + 0.731 + ], + "angle": 0, + "content": "[6] Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna Materzynska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fruend, Peter Yianilos, Moritz Mueller-Freitag, et al. The \"something something\" video database for learning and evaluating visual common sense. In Proceedings of the IEEE international conference on computer vision, pages 5842-5850, 2017. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.733, + 0.484, + 0.815 + ], + "angle": 0, + "content": "[7] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18995-19012, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.817, + 0.484, + 0.872 + ], + "angle": 0, + "content": "[8] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.874, + 0.484, + 0.901 + ], + "angle": 0, + "content": "[9] Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, and Chunyuan Li. 
Llava-next-interleave:" + }, + { + "type": "list", + "bbox": [ + 0.101, + 0.336, + 0.484, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.545, + 0.093, + 0.906, + 0.121 + ], + "angle": 0, + "content": "Tackling multi-image, video, and 3d in large multimodal models. arXiv preprint arXiv:2407.07895, 2024. 3, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.124, + 0.906, + 0.191 + ], + "angle": 0, + "content": "[10] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pages 19730-19742. PMLR, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.195, + 0.906, + 0.25 + ], + "angle": 0, + "content": "[11] KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and Yu Qiao. Videochat: Chat-centric video understanding. arXiv preprint arXiv:2305.06355, 2023. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.254, + 0.906, + 0.335 + ], + "angle": 0, + "content": "[12] Kunchang Li, Yali Wang, Yinan He, Yizhuo Li, Yi Wang, Yi Liu, Zun Wang, Jilan Xu, Guo Chen, Ping Luo, et al. Mvbench: A comprehensive multi-modal video understanding benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22195-22206, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.339, + 0.906, + 0.38 + ], + "angle": 0, + "content": "[13] Yanwei Li, Chengyao Wang, and Jiaya Jia. Llama-vid: An image is worth 2 tokens in large language models. arXiv preprint arXiv:2311.17043, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.384, + 0.906, + 0.438 + ], + "angle": 0, + "content": "[14] Yanwei Li, Chengyao Wang, and Jiaya Jia. Llama-vid: An image is worth 2 tokens in large language models. In European Conference on Computer Vision, pages 323–340. Springer, 2025. 
1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.442, + 0.906, + 0.483 + ], + "angle": 0, + "content": "[15] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.486, + 0.906, + 0.541 + ], + "angle": 0, + "content": "[16] Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, Lei Li, Sishuo Chen, Xu Sun, and Lu Hou. Tempcompass: Do video llms really understand videos? arXiv preprint arXiv:2403.00476, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.544, + 0.906, + 0.612 + ], + "angle": 0, + "content": "[17] Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, and Zhaopeng Tu. Macaw-llm: Multi-modal language modeling with image, audio, video, and text integration. arXiv preprint arXiv:2306.09093, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.616, + 0.906, + 0.671 + ], + "angle": 0, + "content": "[18] Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. Video-chatgpt: Towards detailed video understanding via large vision and language models. arXiv preprint arXiv:2306.05424, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.674, + 0.906, + 0.73 + ], + "angle": 0, + "content": "[19] Ben Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, A Neelakantan, P Shyam, G Sastry, A Askell, S Agarwal, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 1, 2020. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.733, + 0.906, + 0.813 + ], + "angle": 0, + "content": "[20] Ishan Misra, C Lawrence Zitnick, and Martial Hebert. Shuffle and learn: unsupervised learning using temporal order verification. In Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part I 14, pages 527-544. Springer, 2016. 
6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.818, + 0.906, + 0.901 + ], + "angle": 0, + "content": "[21] Mathew Monfort, Alex Andonian, Bolei Zhou, Kandan Ramakrishnan, Sarah Adel Bargal, Tom Yan, Lisa Brown, Quanfu Fan, Dan Gutfreund, Carl Vondrick, et al. Moments in time dataset: one million videos for event understanding. IEEE transactions on pattern analysis and machine intelligence, 42(2):502-508, 2019. 2, 3" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.906, + 0.901 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.093, + 0.482, + 0.16 + ], + "angle": 0, + "content": "[22] Long Qian, Juncheng Li, Yu Wu, Yaobo Ye, Hao Fei, TatSeng Chua, Yueting Zhuang, and Siliang Tang. Momentor: Advancing video large language model with fine-grained temporal reasoning. arXiv preprint arXiv:2402.11435, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.163, + 0.482, + 0.232 + ], + "angle": 0, + "content": "[23] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140):1-67, 2020. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.233, + 0.482, + 0.302 + ], + "angle": 0, + "content": "[24] Shuhuai Ren, Linli Yao, Shicheng Li, Xu Sun, and Lu Hou. Timechat: A time-sensitive multimodal large language model for long video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14313-14323, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.303, + 0.482, + 0.342 + ], + "angle": 0, + "content": "[25] Fangxun Shu, Lei Zhang, Hao Jiang, and Cihang Xie. Audio-visual llm for video understanding. arXiv preprint arXiv:2312.06720, 2023. 
2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.344, + 0.483, + 0.413 + ], + "angle": 0, + "content": "[26] Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. Videobert: A joint model for video and language representation learning. In Proceedings of the IEEE/CVF international conference on computer vision, pages 7464-7473, 2019. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.414, + 0.482, + 0.468 + ], + "angle": 0, + "content": "[27] TencentQQ Multimedia Research Team. Video-cam: Advancing video-language understanding with causal cross-attention masks. https://github.com/QQ-MM/Video-CCAM, 2024.4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.47, + 0.482, + 0.538 + ], + "angle": 0, + "content": "[28] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024. 1, 3, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.54, + 0.482, + 0.607 + ], + "angle": 0, + "content": "[29] Yi Wang, Yinan He, Yizhuo Li, Kunchang Li, Jiashuo Yu, Xin Ma, Xinhao Li, Guo Chen, Xinyuan Chen, Yaohui Wang, et al. Intervid: A large-scale video-text dataset for multimodal understanding and generation. arXiv preprint arXiv:2307.06942, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.609, + 0.482, + 0.678 + ], + "angle": 0, + "content": "[30] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837, 2022. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.679, + 0.482, + 0.748 + ], + "angle": 0, + "content": "[31] Haiyang Xu, Qinghao Ye, Xuan Wu, Ming Yan, Yuan Miao, Jiabo Ye, Guohai Xu, Anwen Hu, Yaya Shi, Guangwei Xu, et al. 
Youku-mplug: A 10 million large-scale chinese video-language dataset for pre-training and benchmarks. arXiv preprint arXiv:2306.04362, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.749, + 0.482, + 0.816 + ], + "angle": 0, + "content": "[32] Mingze Xu, Mingfei Gao, Zhe Gan, Hong-You Chen, Zhengfeng Lai, Haiming Gang, Kai Kang, and Afshin Dehghan. Slowfast-llava: A strong training-free baseline for video large language models. arXiv preprint arXiv:2407.15841, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.818, + 0.482, + 0.901 + ], + "angle": 0, + "content": "[33] Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, and Cordelia Schmid. Vid2seq: Large-scale pretraining of a visual language model for dense video captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10714-10726, 2023. 2" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.093, + 0.483, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.148 + ], + "angle": 0, + "content": "[34] Yuan Yao, Tianyu Yu, Ao Zhang, Chongyi Wang, Junbo Cui, Hongji Zhu, Tianchi Cai, Haoyu Li, Weilin Zhao, Zhihui He, et al. Minicpm-v: A gpt-4v level mllm on your phone. arXiv preprint arXiv:2408.01800, 2024. 1, 3, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.15, + 0.905, + 0.19 + ], + "angle": 0, + "content": "[35] Hang Zhang, Xin Li, and Lidong Bing. Video-llama: An instruction-tuned audio-visual language model for video understanding. arXiv preprint arXiv:2306.02858, 2023. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.192, + 0.905, + 0.26 + ], + "angle": 0, + "content": "[36] Zijia Zhao, Haoyu Lu, Yuqi Huo, Yifan Du, Tongtian Yue, Longteng Guo, Bingning Wang, Weipeng Chen, and Jing Liu. Needle in a video haystack: A scalable synthetic framework for benchmarking video mllms. 
arXiv preprint arXiv:2406.09367, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.263, + 0.905, + 0.33 + ], + "angle": 0, + "content": "[37] Yuan Zhi, Zhan Tong, Limin Wang, and Gangshan Wu. Mgsampler: An explainable sampling strategy for video action recognition. In Proceedings of the IEEE/CVF International conference on Computer Vision, pages 1513-1522, 2021. 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.333, + 0.905, + 0.401 + ], + "angle": 0, + "content": "[38] Junjie Zhou, Yan Shu, Bo Zhao, Boya Wu, Shitao Xiao, Xi Yang, Yongping Xiong, Bo Zhang, Tiejun Huang, and Zheng Liu. Mlvu: A comprehensive benchmark for multi-task long video understanding. arXiv preprint arXiv:2406.04264, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.403, + 0.905, + 0.459 + ], + "angle": 0, + "content": "[39] Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019. 2" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.459 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "title", + "bbox": [ + 0.11, + 0.086, + 0.887, + 0.133 + ], + "angle": 0, + "content": "\\(\\mathbf{SF^{2}T}\\): Self-supervised Fragment Finetuning of Video-LLMs for Fine-Grained Understanding" + }, + { + "type": "text", + "bbox": [ + 0.383, + 0.144, + 0.613, + 0.166 + ], + "angle": 0, + "content": "Supplementary Material" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.183, + 0.483, + 0.245 + ], + "angle": 0, + "content": "In this supplementary material, Section A presents \\(\\mathrm{SF^2T}\\)'s performance on video caption tasks and additional exemplary visualizations of the attention map, while Section B provides more details about FineVidBench." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.256, + 0.321, + 0.273 + ], + "angle": 0, + "content": "A. 
More Results and Cases" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.282, + 0.483, + 0.418 + ], + "angle": 0, + "content": "In addition to FineVidBench and public video understanding benchmarks, we also evaluated the video caption task (Table 1) using GPT-4o mini, assessing fluency, relevance, informativeness, and correctness, with a maximum score of 40. The results show that incorporating \\(\\mathrm{SF^2T}\\) improves performance, highlighting that fine-grained understanding also benefits video captioning. However, after fine-tuning, MiniCPM-V 2.6 produced shorter responses, leading to a decrease in its informativeness score." + }, + { + "type": "table", + "bbox": [ + 0.092, + 0.426, + 0.487, + 0.513 + ], + "angle": 0, + "content": "
Methods | LLaVA-NeXT-Video | MiniCPM-V 2.6 | VideoLLaMA 2.1 | Qwen2-VL
Base | 33.20 | 32.61 | 22.53 | 29.76
Base+SF2T | 33.29 | 29.73 ↓ | 30.99 | 30.05
Base(SFT) | 27.62 | 29.60 | 27.19 | 29.66
Base(SFT)+SF2T | 30.50 | 31.31 | 28.94 | 31.04
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.524, + 0.483, + 0.568 + ], + "angle": 0, + "content": "Table 1. Performance on video caption task. The results show that incorporating \\(\\mathrm{SF^2T}\\) yields higher scores (except MiniCPM-V 2.6), likely due to its enhanced temporal sensitivity and understanding." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.585, + 0.483, + 0.631 + ], + "angle": 0, + "content": "As shown in Figure 1, we present more attention maps for Qwen2-VL on the Action task, focusing on cases where the model's predictions were corrected after applying \\(\\mathrm{SF}^2\\mathrm{T}\\)." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.643, + 0.318, + 0.659 + ], + "angle": 0, + "content": "B. Details of FinevidBench" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.668, + 0.347, + 0.684 + ], + "angle": 0, + "content": "B.1. Question-Answer Templates" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.689, + 0.483, + 0.901 + ], + "angle": 0, + "content": "Table 2 delineates the question templates for each task. For the answers, Scene-level tasks include Action task, which are composed of the \"visual synonyms\" and other verbs; Effect task, which are scripted by researchers based on video content; and Speed task, which offer fixed options: fast, slow, normal, and no speed. Fragment-level tasks encompass Frame Count, with answers ranging from 2 to 6; Meaning of Order, using ordinal numbers as responses; Frame Comparison and Adjust or Not, with responses of Yes, No, and Not sure; and Rearrangement, where the answer is a permutation of N numbers, with N representing the number of input frames. The Question-Answer database is generated through a process of template creation followed by iterative refinement using GPT-4. 
For Action and Effect tasks," + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.183, + 0.907, + 0.259 + ], + "angle": 0, + "content": "each original video is queried three times using different question formulations. For Speed tasks, one query is conducted for both the original and the speed-altered versions of the video. For Fragment-Level tasks, all five questions are posed for each unique frame count." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.267, + 0.679, + 0.283 + ], + "angle": 0, + "content": "B.2. Detailed Results" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.29, + 0.613, + 0.304 + ], + "angle": 0, + "content": "- Scene Level" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.308, + 0.908, + 0.656 + ], + "angle": 0, + "content": "Table 3 illustrates the types of action effects and examples in the Effect tasks. For the affected objects, common physical attributes and quantities of objects are considered; notably, the positional relationship, spatial distance, and similarity between two objects are examined. Regarding action attributes, the intensity and completeness of the action are evaluated. Special actions include slight movement, multiple-object movements where several affected objects undergo motion, and compound movements involving two or more atomic actions linked in time. Additionally, camera movements and the inclination of the surface on which objects move are assessed. Table 4 presents the results categorized under the Effect classification. Overall, models performed well in Physical Attributes and Action Intensity, likely due to the ability to infer such information by comparing images before and after the action occurs. However, models exhibited subpar performance in Action Completion and Camera Motion. 
The former suggests a lack of understanding regarding the distinction between completed and incomplete actions in terms of their effects, while the latter is attributable to the inherent variability and complexity of camera movements. For other tasks, the majority of models exhibited moderate performance." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.664, + 0.642, + 0.679 + ], + "angle": 0, + "content": "- Fragment Level" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.682, + 0.907, + 0.895 + ], + "angle": 0, + "content": "Table 5 presents the results for all tasks in the fragment level under varying input frame counts. From the results, we can observe that except for Video-CCAM, the models' ability to count frames significantly declines as the frame count increases. Regarding the understanding of order concepts, most models show a clear upward trend, except for ShareGPT4Video. Models generally perform well on the frame comparison task, likely due to extensive training with image-text pairs. Since the input consistently involves two frames, the results show no significant variation, as expected. For Rearrangement, all results hover around random values, suggesting that while models recognize incorrect sequence orders, they cannot correct them, indicating a failure to grasp the dynamic processes of videos truly." + } + ], + [ + { + "type": "image", + "bbox": [ + 0.134, + 0.091, + 0.864, + 0.441 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.453, + 0.908, + 0.496 + ], + "angle": 0, + "content": "Figure 1. Four exemplary visualizations of the attention map on Qwen2-VL. For each example: top - Original frames; middle - Base (SFT); bottom - \\(\\mathrm{SF^2T}\\) applied. As highlighted by the red boxes, applying \\(\\mathrm{SF^2T}\\) enables the model to better focus on action execution areas and interacting objects, while also predicting the direction of motion." 
+ }, + { + "type": "table", + "bbox": [ + 0.093, + 0.506, + 0.918, + 0.858 + ], + "angle": 0, + "content": "
TasksQuestion
Scene LevelActionWhich activity can be seen in the video?
EffectAfter the action takes place, what changes occur to the object?
During the process of the action, what changes occur to the object?
After the action takes place, what changes occur in the field of vision?
SpeedWhat is the rate of movement in the video?
Fragment LevelFrame CountCould you please tell me how many frames I have inputted?
Meaning of OrderIn the sequence of frames provided, on which frame does the object first appear?
In the sequence of frames provided, on which frame does the object last appear?
In the sequence of frames provided, in which frames does the object exist?
Frame ComparisonAre the two frames I provided exactly the same?
Adjust or NotThese frames are all from the same video and capture the dynamic process of an action. The order of these frames may have been mixed up. Do we need to rearrange them to match the normal execution sequence of the action?
RearrangementThese frames are all from the same video and depict the dynamic process of an action. The order of these frames may have been mixed up. Based on the connections between the image frames, which of the following options represents the most appropriate sequence?
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.868, + 0.908, + 0.896 + ], + "angle": 0, + "content": "Table 2. Question templates authored by researchers undergo revision by GPT-4o, which rephrases them to maintain the original intent while introducing varied sentence structures and vocabulary." + } + ], + [ + { + "type": "table", + "bbox": [ + 0.101, + 0.088, + 0.902, + 0.887 + ], + "angle": 0, + "content": "
Effect TypeExamples
Object PropertiesPhysical PropertiesWhat modifications occur to the wafer stick as a result of the action? \nA. Not sure B. Nothing happened C. It broke D. It deformed
QuantityOnce the action occurs, what changes are made to the mugs? \nA. There are about 5 or 6 mugs here B. There are about 1 or 2 mugs here \nC. There are about 3 or 4 mugs here D. Not sure
Object RelationshipsPositionWhat adjustments take place in the egg following the action? \nA. An object appeared on top of it B. An object appeared in front of it \nC. An object appeared inside it D. An object appeared behind it
DistanceWhat changes happen to the chili and the cucumber after the action is performed? \nA. They grew more distant B. It's unclear \nC. They came nearer D. Their separation remained consistent
SimilarityWhat adjustments take place in the box following the action? \nA. One thing appeared above it \nB. Several things appeared above it, and they looked different from each other \nC. Not sure \nD. Several things appeared above it, and they looked similar to each other
Action PropertiesIntensityWhat alterations are observed in the paper cups after the action is taken? \nA. Not sure B. It collapsed C. It broke D. It remained standing
CompletionAfter the action is done, what modifications occur to the onion? \nA. It appears unchanged from how it was initially \nB. Something was visible at the back of it \nC. An item appeared on its surface \nD. Something was detected below it
Special ActionsSlight MovementWhat adjustments take place in the shower pouf during the action? \nA. I'm uncertain B. It dropped to the ground C. It was nearly at rest D. It ascended
Multiple-ObjectWhat happens to the two chargers while the action is executed? \nA. They crossed paths B. They impacted each other \nC. They proceeded in the same direction D. It's unclear
CompoundDuring the process of action, what modifications are observed in the plate? \nA. It fell after leaving the hand and did not come back \nB. It was continuously held without any separation \nC. It was detached from the hand but later reattached \nD. Unclear
OthersCamera movementWhat alterations are evident in the flower while the action is carried out? \nA. It appeared to move to the right in view B. It appeared to ascend in view \nC. It appeared to move to the left in view D. I can't determine
Surface InclinationAfter the action is taken, what changes are noticed in the cup? \nA. It was stationary on a tilted surface B. It was stationary on a horizontal surface \nC. Not sure D. It rolled down a sloped surface
" + }, + { + "type": "table_caption", + "bbox": [ + 0.411, + 0.898, + 0.587, + 0.911 + ], + "angle": 0, + "content": "Table 3. Types of Effect Task" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.108, + 0.113, + 0.892, + 0.421 + ], + "angle": 0, + "content": "
Effect Type (Random: 25.00)LLaVA-NeXT-VideoMiniCPM-V 2.6VideoLLaMA 2.1Qwen2-VLShareGPT4-VideoVideo-CCAMAvg.
Object PropertiesPhysical Properties44.2049.2852.1760.8747.5463.4852.92
Quantity33.3347.6256.1958.1041.9060.9549.68
Object RelationshipsPosition41.0351.2849.2354.3640.3150.3647.76
Distance39.5646.6740.8940.4440.4448.4442.74
Similarity42.8649.5247.6252.3838.1059.0548.25
Action PropertiesIntensity40.2750.6753.3361.3352.5362.1353.38
Completion39.3143.6838.8535.6348.0534.0239.92
Special ActionsSlight Movement47.9243.7541.6772.9235.4254.5849.38
Multiple-Object50.0060.6776.6766.6740.6758.6758.89
Compound48.1544.4451.1152.5935.5653.3347.53
OthersCamera Movement33.3322.2228.8926.6732.2228.8928.70
Surface Inclination28.5749.5258.5760.4841.4351.4348.33
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.432, + 0.907, + 0.476 + ], + "angle": 0, + "content": "Table 4. The results of the Effect task, dissected into more granular categories. Overall, Qwen2-VL achieved the best results, with Video-CCAM closely following. Notably, models exhibit suboptimal performance in distinguishing completed from incomplete actions, indicating a lack of ability to associate actions with the resulting state changes of objects." + }, + { + "type": "table", + "bbox": [ + 0.097, + 0.532, + 0.902, + 0.832 + ], + "angle": 0, + "content": "
Input(Random)LLaVA-NeXT-VideoMiniCPM-V 2.6VideoLLaMA 2.1Qwen2-VLShareGPT4VideoVideo-CCAM
3q125.0020.3393.8242.8697.2560.9914.18
q225.0019.2348.9035.7129.1276.1538.35
q333.3346.9680.6671.2771.8288.4166.34
q433.3369.2365.3881.5480.0075.5580.06
q525.0023.8523.0833.0827.6923.6823.36
4q125.0019.7790.6639.8996.6316.788.96
q225.0024.1660.6741.0133.1565.4243.65
q333.3358.7678.5376.8477.4087.2363.63
q433.3374.4279.8593.8095.3587.5094.46
q525.0019.3814.7324.8120.9323.1022.94
5q125.0017.9886.447.4596.050.0047.61
q225.0028.8159.8950.2837.8541.0055.24
q333.3355.6867.6180.1174.4389.6964.83
q433.3382.8184.3894.5396.8891.5596.49
q525.0018.7516.4122.6618.7523.2923.92
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.845, + 0.907, + 0.874 + ], + "angle": 0, + "content": "Table 5. The results of all tasks in Fragment-Level under varying input frame counts. Questions q1 through q5 correspond to Frame Count, Meaning of Order, Frame Comparison, Adjust or Not, and Rearrangement, respectively." + } + ] +] \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07745/dc771de3-3dba-4b91-9d66-c6d31ae45ee8_origin.pdf b/data/2025/2504_07xxx/2504.07745/dc771de3-3dba-4b91-9d66-c6d31ae45ee8_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..367b7fd048954d0175093c20cccd430a388678b8 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/dc771de3-3dba-4b91-9d66-c6d31ae45ee8_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0a8cccb46c982a9bbcc524bfc1a88b40fe43cb9db6649e98c95c51c05c267f87 +size 2885787 diff --git a/data/2025/2504_07xxx/2504.07745/full.md b/data/2025/2504_07xxx/2504.07745/full.md new file mode 100644 index 0000000000000000000000000000000000000000..c206c7f5eae6364717e89b3a0d13814a4d28e011 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/full.md @@ -0,0 +1,382 @@ +# $\mathbf{SF}^2 \mathbf{T}$ : Self-supervised Fragment Finetuning of Video-LLMs for Fine-Grained Understanding + +Yangliu Hu $^{1}$ , Zikai Song $^{1\dagger}$ , Na Feng $^{1}$ , Yawei Luo $^{2}$ , Junqing Yu $^{1}$ , Yi-Ping Phoebe Chen $^{3}$ , Wei Yang $^{1\dagger}$ $^{1}$ Huazhong University of Science and Technology $^{2}$ Zhejiang University $^{3}$ La Trobe University + +{huyangliu,skyesong,fengna,yjqing,weiyangcs}@hust.edu.cn + +yaweiluo@zju.edu.cn phoebe.chen@latrobe.edu.au + +# Abstract + +Video-based Large Language Models (Video-LLMs) have witnessed substantial advancements in recent years, propelled by the advancement in multi-modal LLMs. 
Although these models have demonstrated proficiency in providing the overall description of videos, they struggle with fine-grained understanding, particularly in aspects such as visual dynamics and video detail inquiries. To tackle these shortcomings, we find that fine-tuning Video-LLMs on self-supervised fragment tasks greatly improves their fine-grained video understanding abilities. Hence we propose two key contributions: (1) Self-Supervised Fragment Fine-Tuning $(SF^2 T)$ , a novel, effortless fine-tuning method that employs the rich inherent characteristics of videos for training, while unlocking more fine-grained understanding abilities of Video-LLMs. Moreover, it relieves researchers from labor-intensive annotations and smartly circumvents the limitations of natural language, which often fails to capture the complex spatiotemporal variations in videos; (2) A novel benchmark dataset, namely FineVidBench, for rigorously assessing Video-LLMs' performance at both the scene and fragment levels, offering a comprehensive evaluation of their capabilities. We assessed multiple models and validated the effectiveness of $SF^2 T$ on them. Experimental results reveal that our approach improves their ability to capture and interpret spatiotemporal details.

# 1. Introduction

Large Language Models (LLMs) have showcased significant emergent capabilities, such as in-context learning [19], instruction-following [23], and chain-of-thought reasoning [30], driven by expansive datasets and advanced model architectures. Extending these advancements, Video-LLMs, through mechanisms like pooling or query aggregation

![](images/5181414074d914e281c8b31ab29fd933ae170f5b1996bf40ca9a481d714dd227.jpg)
Figure 1. Performance w/ and w/o $\mathbf{SF}^2\mathbf{T}$ . 
We evaluated four advanced Video-LLMs w/ and w/o $\mathrm{SF}^2\mathrm{T}$ on our proposed FineVidBench with two baselines: (1) Base: performance without any fine-tuning (blue dashed), and (2) Base (SFT): performance with supervised fine-tuning (red dashed). After applying $\mathrm{SF}^2\mathrm{T}$ , all models showed significant improvements (solid blue and red), underscoring its broad effectiveness.

across numerous visual tokens, have broadened the scope of LLMs to encompass video information processing [11, 14, 35]. This evolution markedly advances their potential for in-depth real-world comprehension, opening applications in intelligent surveillance, virtual reality, and autonomous driving, further enriching the landscape of video analytics and interpretation.

Various Video-LLMs, exemplified by GPT4-V, VideoLLaMA 2 [4], MiniCPM-V [34], and Qwen2-VL [28], have been crafted by leading corporations and research institutions, demonstrating proficiency in capturing the overarching content of videos. When adapting to new videos and tasks, they predominantly rely on Supervised Fine-Tuning (SFT) [26] or Reinforcement Learning from Human Feedback (RLHF) [39], both of which are heavily contingent upon extensive manual annotation. This dependence poses several key problems: (1) it necessitates substantial human resources, particularly highly trained annotators; (2) the inherent complexity of video content and task demands frequently introduces inconsistencies and subjectivity, rendering the maintenance of high-quality annotations particularly arduous; and (3) subtle temporal variations across video frames are challenging to articulate with precision, often yielding generalized descriptions that constrain the Video-LLMs' potential. 
Consequently, existing Video-LLMs struggle with fine-grained video understanding tasks, particularly in aspects such as visual dynamics (e.g., motion patterns, object interactions) and video detail inquiries (e.g., positional changes, detail variations).

To address these challenges, we observe that fine-tuning Video-LLMs with self-supervised fragment tasks (by "fragment" we mean temporal, frame-level slices of the video) could improve the model's sensitivity to spatiotemporal scene-level details (related to video contents). Driven by this, we introduce Self-supervised Fragment Fine-Tuning $(\mathrm{SF}^2\mathrm{T})$ , an effortless fine-tuning strategy for Video-LLMs that helps improve fine-grained video understanding. $\mathrm{SF}^2\mathrm{T}$ consists of five fragment-level tasks—Counting, Consistency Verification, Localization, Disorder Detection and Rearrangement—that automatically generate labels from various spatiotemporal perspectives. This approach maximizes the use of frame-level information while minimizing reliance on complex human instructions and annotations.

Moreover, to evaluate the fine-grained visual dynamic perception of Video-LLMs and fully demonstrate the effectiveness of our $\mathrm{SF}^2\mathrm{T}$ , we present FineVidBench, a novel benchmark. FineVidBench comprises 910 videos and 22,718 question-answer pairs, with videos sourced from diverse public datasets, including Something-Something V2 (SSv2) [6], Moments in Time (MiT) [21], etc. The question-answer pairs are auto-generated in single-choice format, incorporating distractors to increase testing difficulty. We evaluated several notable Video-LLMs developed in recent years and find that they generally fail to understand the execution sequence of actions and struggle to grasp fine-grained spatiotemporal information. 
After fine-tuning with $\mathrm{SF}^2\mathrm{T}$ , however, the Video-LLMs better recognize spatiotemporal details, leading to a holistic and marked improvement in fine-grained understanding.

# 2. Related Work

Video-LLMs Finetuning Video-LLMs are primarily fine-tuned by adjusting the parameters of small, trainable adapters for task adaptation, without changing the entire model, saving resources and enhancing efficiency. The connective adapter (e.g., MLP/Linear Layer [15], Q-Former [10]) links the Video Embedder and LLM, aligning video embeddings with LLM input tokens, while insertive adapters (e.g., LoRA [8]) are directly integrated into the LLM to modify its behavior. Most Video-LLMs combine both types of adapters and typically use multi-stage finetuning [4, 11, 13, 24, 35]. First, the model learns to establish relationships between images, videos, and text using large-scale multimodal datasets [1, 2, 29, 31]. In the second stage, the model is fine-tuned with a curated instruction-following dataset [11, 17, 18]. In addition, there is full fine-tuning, which updates all LLM parameters with a lower learning rate [25, 33], and there are zero-shot models, which transform the video task into a text task, typically relying on a powerful LLM [32]. However, annotating video data remains a labor-intensive and time-consuming task, particularly for long videos or those involving complex actions.

Benchmarks on Video-LLMs Currently, many studies [3, 5, 38] focus on evaluating the temporal perception capabilities of Video-LLMs. MVBench [12] designs 20 tasks from temporal and spatial perspectives, and TempCompass [16] introduces 5 temporal aspects and 4 task formats. VN-Bench [36] decouples video content from the QA pairs by inserting irrelevant images or text "needles" into the original video. Moment-10M [22] has constructed a large-scale dataset on temporal localization tasks. 
However, as illustrated in Table 1, these studies often focus on gathering diverse videos or evaluating the models' performance with long videos, while somewhat neglecting the models' ability to perform fine-grained perception of temporal details. To address this gap, FineVidBench breaks videos into multiple sets of frames and generates annotations from diverse spatiotemporal perspectives, introducing novel evaluation methods for fine-grained understanding. + +
BenchmarksVideo num.QA num.Input ChangeTemporal DiversityFine-Grained EvaluationHierarchical Test
Video-MME9002700XXXX
TempCompass4107540XX
VN bench-1350XX
Moment-10M64.9k10.4MXXXX
AutoEval-Video327327XXXX
MV bench36414000XXX
MLVU13342593XXXX
FineVidBench91022,718
+ +Table 1. Comparison with related benchmarks. Our approach offers significant advantages in input formats, evaluation methods, granularity, and temporal diversity. + +# 3. FineVidBench Benchmark + +It is broadly recognized that Video-LLMs struggle with fine-grained video understanding tasks, yet no comprehensive benchmarks exist to thoroughly investigate this issue. + +To address this gap, we introduce FineVidBench, a multidimensional, fine-grained evaluation framework specifically designed to assess and improve the overall capabilities of Video-LLMs. + +# 3.1. Construction + +Data collection We selected videos from various public datasets, including SS-v2 [6], MiT [21], and Ego4D [7], with a particular emphasis on temporally-sensitive content, to focus the model on the entire video sequence rather than individual frames. + +Action categorization As shown in Figure 2, we compiled 52 actions, categorizing them into 3 types based on intraclass variance. The distribution varies significantly: "Distinctive Actions" $(39\%)$ are easily recognizable, encompassing a total of 36 actions. "Non-typical Actions" $(57\%)$ refer to flexible actions with no clear defining characteristics, spanning 14 types. The broad diversity and complexity in this category require more extensive video coverage to adequately capture the range of expressions and variations. "Slight Movements" $(4\%)$ represent subtle actions, such as "hold" and "show", which are difficult to detect with the naked eye and constitute a small proportion. + +Data augmentation The original videos were augmented using frame interpolation and skipping techniques for speed transformation, along with a motion-salient area sampling algorithm to capture dynamic motion. This process generated speed-varied versions and multiple sets of keyframes for each video. 
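The speed transformation described above can be approximated by skipping frames (fast playback) or duplicating them (slow motion). The sketch below is illustrative only: the actual pipeline uses frame interpolation rather than plain duplication, and the motion-salient sampling algorithm is omitted; the function name is ours.

```python
def change_speed(frames, factor):
    """Return a speed-altered copy of a frame sequence.

    factor > 1 simulates fast playback by skipping frames;
    factor < 1 simulates slow motion by duplicating frames.
    Illustrative only: the paper's pipeline uses frame
    interpolation instead of simple duplication.
    """
    if factor <= 0:
        raise ValueError("factor must be positive")
    out, pos = [], 0.0
    while int(pos) < len(frames):
        out.append(frames[int(pos)])
        pos += factor
    return out

# A 2x "fast" and a 0.5x "slow" variant of a short clip.
fast = change_speed(list(range(8)), 2.0)   # [0, 2, 4, 6]
slow = change_speed(list(range(4)), 0.5)   # [0, 0, 1, 1, 2, 2, 3, 3]
```

Applied to each source video, such a routine yields the speed-varied versions used for the Speed task without any manual labeling, since the speed label is known by construction.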
Statistics With our augmentation strategy, FineVidBench includes 910 videos, 1,820 speed-variant videos, and 2,670 sets of keyframes enriched with dynamic visual information. Building on this, we generated 22,718 QA pairs from the video content through a combination of automated processes and manual review. The quality assurance process involved rigorous cross-verification, where reviewers checked each QA pair for accuracy and contextual relevance, making corrections to ensure high quality.

![](images/93dc7847bb0b3fc5fb2f4ee5b4155ff76f57da3f8eb0b56e88bc6dc5ef2f1340.jpg)
Figure 2. We show the action semantics and their respective proportions in FineVidBench. Distinctive Action: easily recognizable actions. Non-typical Action: flexible actions with no clear characteristics, like "put" and "move." Slight Movement: subtle actions, such as "hold" and "show," difficult to detect with the naked eye.

# 3.2. Benchmarking Dimensions

As shown in Figure 3, FineVidBench encompasses both scene-level and fragment-level evaluations. The scene-level evaluation assesses both original and speed-adjusted videos across three dimensions: (1) Action, which evaluates the model's holistic understanding of video content. To increase difficulty, "Visual Synonyms" are added as distractors, requiring Video-LLMs to distinguish visually similar actions with subtle differences, a challenge common in real-world scenarios. (2) Effect, which focuses on the model's comprehension of the visual changes resulting from actions. This understanding is essential for revealing object properties and interpreting complex dynamic scenes, and could significantly enhance the reasoning capabilities of Video-LLMs and LLM-aided agents. (3) Speed, which tests the model's sensitivity to changes in video speed and its capability to maintain consistent understanding across varying speeds, with slow motion revealing hidden details and fast motion obscuring them. 
This capability is crucial for optimizing the model's performance across diverse scenarios.

For fragment-level evaluation, we designed a structured evaluation format for video dynamic keyframes, employing a step-by-step inquiry framework: (1) Frame Count: Models are queried on the number of frames in sequences using dynamically refined keyframes to assess counting accuracy. (2) Meaning of Order: Understanding of sequence order is tested by asking about the first or last frames in which the targets appear, or the frames in which they are present, e.g., "At which frame does the target object first appear?". (3) Frame Comparison: Two frames are randomly selected from the sequence for visual comparison, with differences varying in size but generally staying within human visual comfort limits. (4) Adjust-or-Not and Rearrangement: These two tasks involve a shuffled sequence of keyframes, and the model is asked to determine whether the order needs adjustment and, if so, how to correct it. They evaluate the model's ability to understand and restore the video's temporal sequence.

# 3.3. Benchmark Results

We evaluated six of the most advanced open-source models: LLaVA-NeXT-Video [9], MiniCPM-V 2.6 [34], VideoLLaMA 2.1 [4], Qwen2-VL [28], ShareGPT4Video [2] and

![](images/b3fdb1142169dfd3731bb1039d8390b91cbe26fcacef82e674e2d0655fa3f0b9.jpg)
※ Fragment-Level Tests ※

① How many frames?
A. 2 B. 3 C. 4 D. 5
② Which frames show the cup?
A. 3,4 B. 2,3,4 C. 2,3 D. 1,2,3
③ Are the two frames the same?
A. Yes, they are exactly the same
B. No, they are different
④ Should I adjust them?
A. Yes, they need adjustment
B. No, they are in the correct order
⑤ Which shows the correct order?
A. 1234 B. 2314 C. 3142 D. 4321

![](images/e3c884b85391f03768d80cd1d13ec65d55a292c4da3b34fb5cfd15b2051d709f.jpg)
Figure 3. FineVidBench evaluates videos augmented with speed variations and fragments. 
Scene-level tests include the following: Action: Tests recognition accuracy amidst distractors like "Visual Synonyms". Effect: Assesses the model's ability to identify pre- and post-action changes. Speed: Measures the model's sensitivity to changes in video speed. Fragment-level tests, employing a step-by-step inquiry framework, focus on challenges such as Frame Count, Meaning of Order, Frame Comparison, Adjust-or-Not and Rearrangement. + +![](images/ad9426eb0e0740c744fe63a8d4e4c7810ffbfbeb6acfc863b32655e01fed85c8.jpg) + +Video-CCAM [27], each employing different architectures and training strategies. Table 3 summarizes the results across the eight tasks. We discuss the results from scene-level and fragment-level. + +# - Scene-level Results and Analysis + +Action The scores for this task varied significantly, with models trained in relevant video data—such as Video-CCAM, Qwen2-VL, and VideoLLaMA 2.1—achieving notably higher performance. However, as shown on the left side of Table 2, interference from "Visual Synonyms" prevented these models from achieving their full potential, resulting in declines of varying degrees and indicating difficulties in distinguishing visually similar actions. + +Effect All models exhibited average performance on this task, indicating a superficial understanding of aspects such as object attributes, object relationships, and action properties. This task tests the model's ability to grasp how actions affect objects, focusing on causal relationships and temporal reasoning—particularly for actions like "push" and "pull", which share similar execution flows. The model must distinguish them based on dynamic effects, such as changes in direction and speed, but most models perform moderately in this regard. + +Speed The results show that all models are insensitive to speed variations, likely because they were not adequately exposed to speed changes during training. 
Figure 4 shows that models are more sensitive to slow motion than to fast playback, and struggle with identifying "normal speed" and "no speed", except for VideoLLaMA 2.1. This may be due to the loss of coherence in fast-moving video content, while slow-motion videos highlight more distinct details, aiding the model in making accurate judgments.

![](images/64fc26c33c1d9ab6da9e7af66481e5d25957d2ee4812fc419ceac98a3dd71b5c.jpg)
Figure 4. Accuracy across different video speeds. All models are more sensitive to slow-speed videos and struggle to understand "normal speed" and "no speed", except for VideoLLaMA 2.1.

Video-LLMsActionFrame Number
w/o VSw/ VSAvg.345
LLaVA-NeXT-Video37.3135.0419.3720.3319.7717.98
MiniCPM-V 2.643.3740.1590.3293.8290.6686.44
Video-LLaMA 2.163.2653.9830.1742.8639.897.45
Qwen2-VL68.1856.6296.6597.2596.6396.05
ShareGPT4Video46.9030.8426.3360.9916.780.00
Video-CCAM73.1060.2323.4514.188.9647.61
+ +Table 2. Left: Accuracy of the Action task with or without "Visual Synonyms". It is obvious that the "Visual Synonyms" have significantly impacted the model's judgment. Right: Accuracy of the counting task across different frame counts. Except for Video-CCAM, all other models exhibited a decline in performance as the number of frames increased. + +
Video-LLMsParams.Scene-LevelFragment-LevelS-Avg.FG-Avg.A-Avg.
ActionEffectSpeedFCntMoOFCmpAoNRearr
(Random)-25.0025.0025.0025.0025.0033.3333.3325.0025.0028.3327.08
LLaVA-NeXT-Video7B37.3142.6722.3519.3724.0253.7575.4520.6734.1138.6536.95
MiniCPM-V 2.68B43.3752.5619.1390.3256.4275.6676.4918.0938.3563.4054.01
Video-LLaMA 2.17B63.2650.9219.8930.1742.2776.0189.9226.8744.6953.0549.91
Qwen2-VL7B68.1857.1424.6296.6533.3374.5390.7022.4849.9863.5458.45
ShareGPT4Video8B46.9043.8831.7626.3361.0588.4484.8023.3640.8557.1150.82
Video-CCAM9B73.1055.9031.6523.4545.6664.9590.2722.7253.5548.4750.96
+ +Table 3. The overall performances of notable Video-LLMs on FineVidBench. FCnt: Frame Count. MoO: Meaning of Order. Fcmp: Frame Comparison. AoN: Adjust or Not. Rearr: Rearrangement. S-Avg.: the average performance of scene-level tasks; FG-Avg.: the average performance of fragment-level tasks. A-Avg.: the average performance of all tasks. + +# - Fragment-level Results and Analysis + +(1) Frame-count accuracy varied significantly across models, with the lower-performing models likely lacking targeted training. The trend shown in the right side of Table 2, where accuracy decreases as frame count increases, highlights the models' insufficient temporal reasoning on longer sequences. (2) ShareGPT4Video and MiniCPM-V 2.6 showed better comprehension in the Meaning-of-Order task, while other models lagged, suggesting a lack of explicit focus on "order". (3) Most models excelled in frame comparison due to image-text alignment training. ShareGPT4Video achieved the best performance, owing to its Differential Sliding-Window Captioning (DiffSW) strategy, which emphasizes capturing the changes between frames when generating video descriptions. This also improved its Meaning-of-Order performance. (4) In the sorting task, models generally succeeded in the "Adjust or Not" response but performed poorly in the more complex "Rearrangement" task, indicating they can detect, but not correct, sequence errors. + +# 4. Self-supervised Fragment Finetuning + +The above benchmark results show the existing Video-LLMs generally fail to tackle fine-grained video understanding tasks. Videos often contain subtle, complex changes that natural language alone fails to fully capture. The core component of Video-LLMs, LLMs, as generalized pattern recognizers, offers a promising solution. LLMs have the potential to detect and interpret intricate spatiotemporal dynamics that were previously difficult to represent. 
Given that these changes cannot be directly annotated, using self-supervised learning naturally becomes the solution, bypassing the bottleneck of manual annotation and significantly reducing labeling costs. Given these factors, we propose $\mathrm{SF^2T}$ to fine-tune Video-LLMs. We do not expect $\mathrm{SF^2T}$ to replace supervised fine-tuning; rather, it is an effortless complement to SFT. $\mathrm{SF^2T}$ and SFT primarily differ in data construction and the level of content focus, with each method aligned with distinct training objectives, as shown in Figure 5.

# 4.1. SFT Tasks

We first review the common SFT tasks to set a baseline for comparison with our $\mathrm{SF^2T}$.

General QA on Video Content This method focuses on understanding the main events and context of a video by directly asking questions about its content. While effective for grasping the video's key moments, it lacks finer spatiotemporal details and requires significant human effort to create standardized but constrained answers.

Frame Description Integration This method typically samples video frames evenly, generates detailed descriptions for each, and integrates them into a cohesive but lengthy summary. While it enhances the model's understanding of continuity and micro-dynamics, it often proves incapable of capturing complex or subtle details that are beyond natural language's scope. Moreover, although frame descriptions can be generated using powerful multi-modal LLMs like GPT-4o, significant human effort is still required to review the quality of the generated responses.

# 4.2. Fragment-level Tasks of $\mathbf{SF}^2\mathbf{T}$

SFT tasks require manual annotations, and even automated annotation is labor-intensive and error-prone. To address this, we introduce $\mathrm{SF}^2\mathrm{T}$, which generates accurate fragment-level labels automatically. 
$\mathrm{SF}^2\mathrm{T}$ comprises five tasks—Counting, Consistency Verification, Localization, Disorder Detection

![](images/19248d16a197d1b96fda68b99bbd8e7350e03c4bad689ecb47f3df3e3a40504b.jpg)

What is the main content of the video?

The video shows a person bowling, including their four-step approach, the smooth release of the ball down the lane, its path toward the pins, and...

![](images/fac7fe9dc31112f7b5a655a9f855c7785110c3e012266174e150ffa294dd3dfb.jpg)

What is the main content of the video?

The video shows a person bowling: (Frame 1) The scene shows a bowling alley... (Frame 2) The player swings the bowling ball... (Frame 4) The bowling ball approaches the pins... (Frame 6) The bowling ball strikes the pins... (Frame 8) All the pins are down.

![](images/e51df2bd69d0ad08c8ab3c601cc8ecf9fc84a3cdd0f5275bcb937207f223a3b2.jpg)

![](images/b06227fb2753f649b91d143258466e2abef9e9f856bc50b35884caefb010def2.jpg)
Scene-Level Tasks

![](images/467e71fa5605af2d18a37b665be66f536c52432df6cb57a8237fb077a9b6d1d8.jpg)

How many frames?

On which frames?

Same frames?

Adjust or not?

Rearrange it.

![](images/28b766dc0d29f6273e26a9aed75c3de5a772a124ae3c9ab068cf1b8c96d348cf.jpg)

![](images/b3a2c4ac17a94b2b36d9372be305b746a5effe392fb7d2a064b2cde37a70c8cf.jpg)
Figure 5. Comparison between $\mathrm{SF}^2\mathrm{T}$ and SFT. SFT depends on manual and model-driven design to generate QA pairs for scene-level video understanding; $\mathrm{SF}^2\mathrm{T}$, in contrast, automatically constructs training data based on pre-defined rules that cover various temporal and spatial aspects of the video. $\mathrm{SF}^2\mathrm{T}$ enables the model to focus on fine-grained content analysis, offering insights that supervised labels cannot achieve. 
![](images/5e842eed2a0e603ad9c20ae9db677b6cf056a6564fb67f982bd3e1a5900ebe1c.jpg)

![](images/0a58625ba85cf5cfb4fb31336be2120b7989cbc33f398f387001052658cf027f.jpg)

![](images/b27d6f514bd85f5074b842d69558d79d7428a8753afed274de875ac74caa6f02.jpg)

![](images/429857c0df66bdaab0c9ae373e86e7fa148de34892ef91f2fc5df09ad7c95d16.jpg)

2nd

No

Yes

3412

Fragment-Level Tasks

and Rearrangement. The last of these trains the model to rearrange a set of out-of-order frames into their original sequence, a robust indicator of a model's mastery over the visual dynamics of an action, requiring it to detect subtle frame changes and understand the overall coherence and temporal trends. Mastery of these tasks enables the model to recognize frames and their temporal relationships, enhancing its ability to predict and reconstruct action sequences and improving performance on more complex video tasks. Our method first extracts multiple sets of dynamic keyframes from each video. These fragments capture the key dynamic information from multiple temporal perspectives, offering a more efficient representation of redundant video data. It then applies pseudo-labeling, distinguishing it from traditional video-level labeling. By designing proxy tasks that leverage intrinsic information rather than predefined prior knowledge, it circumvents the annotation bottleneck, enabling a deeper temporal understanding and offering insights that traditional video-level labeling cannot achieve.

Counting We input N frames into the Video-LLM and ask it to count them. Although this task seems straightforward, it proves challenging for current Video-LLMs, particularly as the number of frames increases, revealing a decline in accuracy. The model's inability to perform basic quantitative tasks points to broader limitations in understanding the overall sequence integrity.
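Such counting probes can be generated mechanically, since the label is fixed by construction. Below is a minimal sketch assuming frames are already extracted; the helper name is hypothetical, while the question string and the 2-to-6 answer range follow the templates listed for FineVidBench in the supplementary material.

```python
import random

def make_counting_samples(frames, counts=(2, 3, 4, 5, 6), rng=None):
    """Generate counting QA pairs: feed the model n frames and ask how many
    it received; the answer n needs no annotation. Illustrative sketch only."""
    rng = rng or random.Random(0)
    samples = []
    for n in counts:
        picked = sorted(rng.sample(range(len(frames)), n))  # keep temporal order
        samples.append({
            "frames": [frames[i] for i in picked],
            "question": "Could you please tell me how many frames I have inputted?",
            "answer": str(n),
        })
    return samples

qa = make_counting_samples([f"frame_{i:02d}" for i in range(16)])
assert [s["answer"] for s in qa] == ["2", "3", "4", "5", "6"]
```

Sweeping n across the answer range makes the reported accuracy decline at larger frame counts directly measurable.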
Consistency Verification Video-LLMs are tasked with identifying two frames sampled from the same video, which may show subtle differences. This task sharpens the model's sensitivity to visual details by encouraging a thorough analysis and comparison of the images, countering its tendency to focus on primary subjects while neglecting the background and other subtle features.

Localization Video-LLMs must accurately locate a specified target (from video metadata) within a sequence of frames, identifying the frames in which it appears, disappears, or persists. This ability, natural for humans, poses a significant challenge for these models, as they often struggle to perceive sequential relationships between frames and face additional obstacles, such as occlusion, interference from similar objects, lighting variations, and memory limitations.

Disorder Detection and Rearrangement Video-LLMs must determine whether and how to adjust the order of a given frame sequence. When frames are randomized, the loss of spatiotemporal coherence and logical continuity makes it exceptionally challenging to reconstruct their original sequence, especially as interactions within frames become more complex [20]. This task is evaluated in two ways: the yes/no task tests the model's sensitivity to temporal consistency, while the sorting task, which leverages capabilities from the other four tasks, requires advanced reasoning and adjustments.

# 5. Experiments

In this section, we fine-tuned four of the most advanced open-source Video-LLMs using the $\mathrm{SF}^2\mathrm{T}$ method to evaluate its effectiveness, alongside ablation studies and interpretability analyses to explore the underlying mechanisms.
| Methods | LLaVA-NeXT-Video | | | MiniCPM-V 2.6 | | | VideoLLaMA 2.1 | | | Qwen2-VL | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | Action | Effect | Speed | Action | Effect | Speed | Action | Effect | Speed | Action | Effect | Speed |
| Base | 37.31 | 42.67 | 22.35 | 43.37 | 52.56 | 19.13 | 63.26 | 50.92 | 19.89 | 68.18 | 57.14 | 24.62 |
| Base+$\mathrm{SF^2T}$ | 48.67 | 43.77 | 24.83 | 65.91 | 60.62 | 28.60 | 67.42 | 57.33 | 31.63 | 73.86 | 63.37 | 31.92 |
| Base(SFT) | 62.69 | 44.63 | 22.35 | 77.65 | 75.09 | 70.83 | 77.65 | 65.94 | 29.73 | 78.60 | 66.30 | 30.87 |
| Base(SFT)+$\mathrm{SF^2T}$ | 63.07 | 45.24 | 32.01 | 81.63 | 76.92 | 86.74 | 79.73 | 68.68 | 31.82 | 81.25 | 73.26 | 32.38 |
+ +Table 4. Performance on FineVidBench. We tested on two baselines: (1) Base: Results without any fine-tuning. (2) Base(SFT): Results after fine-tuning in supervised way. After $\mathrm{SF}^2\mathrm{T}$ , all models improved in all three tasks, highlighting its broad effectiveness and the value of fragment-level tasks in enhancing scene-level comprehension. Notably, $\mathrm{SF}^2\mathrm{T}$ outperformed SFT in the Speed task (except MiniCPM-V 2.6), highlighting the key role of fine-grained temporal understanding in distinguishing video speeds. + +
| Methods | LLaVA-NeXT-Video | MiniCPM-V 2.6 | VideoLLaMA 2.1 | Qwen2-VL |
|---|---|---|---|---|
| MVBench | | | | |
| Base | 36.84 | 40.23 | 54.18 | 55.97 |
| Base+$\mathrm{SF^2T}$ | 42.92 | 56.02 | 57.97 | 63.76 |
| Video-MME (no subtitle) | | | | |
| Base | 29.76 | 43.17 | 49.02 | 43.77 |
| Base+$\mathrm{SF^2T}$ | 34.84 | 53.19 | 51.88 | 53.60 |
| MLVU | | | | |
| Base | 36.32 | 41.58 | 52.32 | 42.81 |
| Base+$\mathrm{SF^2T}$ | 41.91 | 55.32 | 56.11 | 54.67 |
+ +Table 5. Performance on public benchmarks. $\mathrm{SF}^2\mathrm{T}$ consistently enhances performance across all three benchmarks, reaffirming its effectiveness as a spatiotemporal enhancer. + +
| Methods | random | uniform | keyframe | motion-salient |
|---|---|---|---|---|
| $\mathrm{SF^2T}$ | 70.31 | 71.67 | 72.11 | 73.86 |
+ +Table 6. Impact of sampling. As shown, motion-salient area sampling outperforms others by better capturing motion fluidity and temporal details, while the other methods fail to fully utilize their potential, leading to suboptimal performance. + +
| Methods | long | short | random |
|---|---|---|---|
| $\mathrm{SF^2T}$ | 69.38 | 71.40 | 73.86 |
Table 7. Impact of temporal span. Both long- and short-range temporal modeling reduced $\mathrm{SF}^2\mathrm{T}$ 's performance, emphasizing the importance of multi-scale temporal modeling.

# 5.1. Implementation Details

To ensure fairness, experiments were conducted on LoRA-compatible models, including LLaVA-NeXT-Video[9], MiniCPM-V 2.6[34], VideoLLaMA 2.1[4] and Qwen2-VL[28], using their default or recommended settings, with all models trained for one epoch. All experiments were performed under identical hardware conditions, utilizing an NVIDIA A100 40GB GPU for computation. It should be emphasized that our goal is to validate the effectiveness of $\mathrm{SF}^2\mathrm{T}$ , not to optimize models for maximum performance.

We randomly sampled videos from SSv2 and MiT for training, ensuring no overlap with the FineVidBench dataset. MGSampler [37] was used to extract N sets of M-frame sequences from each video, capturing dynamic changes while preserving overall characteristics. M is chosen based on the video's characteristics to capture content flow, while N is determined by content complexity, with more complex content requiring a larger N to cover more temporal perspectives. In this study, we set $\mathrm{N} = 3$ and M between 3 and 5, though these values may vary for other datasets. We then generated QA pairs for each frame sequence based on the five tasks defined in $\mathrm{SF}^2\mathrm{T}$ for training. Evaluations were performed on FineVidBench's scene-level tasks, including Action, Effect, and Speed. To compare with traditional SFT, we also generated and manually reviewed QA pairs for these videos in a supervised setting.

# 5.2. Comparisons

Table 4 summarizes the results of the scene-level tasks. After $\mathrm{SF}^2\mathrm{T}$ training, all models showed significant improvement, emphasizing that fragment-level tasks can notably enhance scene-level comprehension.
Integrating $\mathrm{SF}^2\mathrm{T}$ with SFT also leads to performance gains, demonstrating that fragment-level training positively impacts SFT and enhances its effectiveness. Surprisingly, in the Speed task, many base models outperformed SFT after applying $\mathrm{SF}^2\mathrm{T}$ , highlighting the importance of fine-grained temporal understanding in distinguishing video speeds. This improvement likely stems from $\mathrm{SF}^2\mathrm{T}$ 's ability to enhance the model's sensitivity to temporal cues, such as the loss or enhancement of information during acceleration or deceleration, as well as content coherence—all crucial for speed judgment. As expected, $\mathrm{SF}^2\mathrm{T}$ currently lags behind SFT, since its training objective is not fully aligned with scene-level tasks. However, we do not expect $\mathrm{SF}^2\mathrm{T}$ to replace supervised fine-tuning; rather, our experiments suggest that it can serve as an effortless and effective complement to SFT.

In addition to FineVidBench, we evaluated $\mathrm{SF}^2\mathrm{T}$ on three public video understanding benchmarks (Table 5). The results demonstrate consistent improvements across various video tasks, validating $\mathrm{SF}^2\mathrm{T}$ as an effective spatiotemporal enhancer for a wide range of video understanding tasks. All models were tested with an 8-frame input.

# 5.3. Ablation and Interpretability Analyses

We evaluated the impact of frame sampling strategies on $\mathrm{SF}^2\mathrm{T}$ , as each method provides a unique "temporal information perspective" that influences video understanding performance. As shown in Table 6, we assessed four strategies on Qwen2-VL in the Action task: random, uniform interval, keyframe, and motion-salient area sampling [37]. Motion-salient area sampling performed best, likely due to its ability to capture continuous motion dynamics, thereby enhancing the model's understanding of action fluidity and temporal detail.
In comparison, the other methods had limitations: keyframe sampling misses intermediate action phases, fixed-interval sampling may overlook critical moments, and random sampling lacks temporal consistency. Notably, different datasets may favor different strategies. For example, some datasets may perform better with uniform interval sampling, or their motion features may align better with the model's specific capabilities. + +We examined the effects of long- and short-range temporal modeling on $\mathrm{SF}^2\mathrm{T}$ . In the Consistency Verification task, we constrained the random selection of frame pairs to adjacent frames for local continuity or non-adjacent frames to capture long-range dependencies. As shown in Table 7, both settings decreased $\mathrm{SF}^2\mathrm{T}$ 's performance on the Action task of Qwen2-VL, indicating that an overemphasis on either long- or short-range information leads to temporal imbalance and incomplete dynamics. This underscores the importance of combining both approaches to leverage their broader temporal span and frame variations for a more comprehensive feature representation. + +We analyzed the attention map of Qwen2-VL on the Action task, particularly in cases where the model's predictions were corrected after $\mathrm{SF}^2\mathrm{T}$ . As shown in Figure 6, we found that $\mathrm{SF}^2\mathrm{T}$ enhances the model's ability to capture fine-grained spatial changes and temporal dynamics. (1) Spatial Aspects. After $\mathrm{SF}^2\mathrm{T}$ , the model shows increased attention to action execution areas, particularly the hands and objects they interact with. It shows better sensitivity to small targets, likely due to the Consistency Verification + +![](images/3d46cc1d8054d539ff26f9bae25b46b21842a0dd60dcb241845cd705e00f23fd.jpg) +Figure 6. Two exemplary visualizations of the attention map on Qwen2-VL. For each example: top - Original frames; middle - Base (SFT); bottom - $\mathrm{SF^2T}$ applied. 
As shown by the red boxes, after applying $\mathrm{SF^2T}$ , the model better focuses on action execution areas and interacting objects. The $\mathrm{SF^2T}$ fine-tuned model has the ability to predict the direction of motion, as seen in the trajectories of the red bottle and Cheerios. + +task, which enhances spatial perception by refining sensitivity to subtle image differences. (2) Temporal Aspects. After $\mathrm{SF}^2\mathrm{T}$ , we observed that the model can predict object movement trajectories in certain actions, indicating an advanced level of temporal understanding. This ability likely stems from the sorting task, which strengthens the model's comprehension of action flows and movement patterns. + +# 6. Conclusion + +In this work, we propose $\mathrm{SF}^2\mathrm{T}$ to overcome the limitations of Video-LLMs in fine-grained video understanding. $\mathrm{SF}^2\mathrm{T}$ is an innovative fine-tuning method that eliminates the need for labor-intensive annotations and effectively bypasses the constraints of natural language descriptions. Additionally, we introduce FineVidBench, a benchmark for evaluating Video-LLMs at both scene and fragment levels. In the future, we plan to expand our dataset with larger videos and more tasks to increase its impact. + +# Acknowledgments + +This work is supported by the National Key Research and Development Program of China (No.2020YBF2901202), National Natural Science Foundation of China (NSFC No. 62272184 and No. 62402189), the China Postdoctoral Science Foundation under Grant Number GZC20230894, the China Postdoctoral Science Foundation (Certificate Number: 2024M751012), and the Postdoctor Project of Hubei Province under Grant Number 2024HBBHCXB014, and the "Pioneer" and "Leading Goose" R&D Program of Zhejiang (No. 2024C01161). The computation is completed in the HPC Platform of Huazhong University of Science and Technology. + +# References + +[1] FirstName Alpher. Frobnication. IEEE TPAMI, 12(1):234-778, 2002. 
2
[2] Lin Chen, Xilin Wei, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Bin Lin, Zhenyu Tang, et al. Sharegpt4video: Improving video understanding and generation with better captions. arXiv preprint arXiv:2406.04325, 2024. 2, 3
[3] Xiuyuan Chen, Yuan Lin, Yuchen Zhang, and Weiran Huang. Autoeval-video: An automatic benchmark for assessing large vision language models in open-ended video question answering. arXiv preprint arXiv:2311.14906, 2023. 2
[4] Zesen Cheng, Sicong Leng, Hang Zhang, Yifei Xin, Xin Li, Guanzheng Chen, Yongxin Zhu, Wenqi Zhang, Ziyang Luo, Deli Zhao, et al. Videollama 2: Advancing spatial-temporal modeling and audio understanding in video-llms. arXiv preprint arXiv:2406.07476, 2024. 1, 2, 3, 7
[5] Chaoyou Fu, Yuhan Dai, Yongdong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, et al. Video-mme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis. arXiv preprint arXiv:2405.21075, 2024. 2
[6] Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna Materzynska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fruend, Peter Yianilos, Moritz Mueller-Freitag, et al. The "something something" video database for learning and evaluating visual common sense. In Proceedings of the IEEE international conference on computer vision, pages 5842-5850, 2017. 2, 3
[7] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18995-19012, 2022. 3
[8] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
2
[9] Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, and Chunyuan Li. Llava-next-interleave: Tackling multi-image, video, and 3d in large multimodal models. arXiv preprint arXiv:2407.07895, 2024. 3, 7
[10] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pages 19730-19742. PMLR, 2023. 2
[11] KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and Yu Qiao. Videochat: Chat-centric video understanding. arXiv preprint arXiv:2305.06355, 2023. 1, 2
[12] Kunchang Li, Yali Wang, Yinan He, Yizhuo Li, Yi Wang, Yi Liu, Zun Wang, Jilan Xu, Guo Chen, Ping Luo, et al. Mvbench: A comprehensive multi-modal video understanding benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22195-22206, 2024. 2
[13] Yanwei Li, Chengyao Wang, and Jiaya Jia. Llama-vid: An image is worth 2 tokens in large language models. arXiv preprint arXiv:2311.17043, 2023. 2
[14] Yanwei Li, Chengyao Wang, and Jiaya Jia. Llama-vid: An image is worth 2 tokens in large language models. In European Conference on Computer Vision, pages 323–340. Springer, 2025. 1
[15] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36, 2024. 2
[16] Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, Lei Li, Sishuo Chen, Xu Sun, and Lu Hou. Tempcompass: Do video llms really understand videos? arXiv preprint arXiv:2403.00476, 2024. 2
[17] Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, and Zhaopeng Tu. Macaw-llm: Multi-modal language modeling with image, audio, video, and text integration. arXiv preprint arXiv:2306.09093, 2023. 2
[18] Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan.
Video-chatgpt: Towards detailed video understanding via large vision and language models. arXiv preprint arXiv:2306.05424, 2023. 2 +[19] Ben Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, A Neelakantan, P Shyam, G Sastry, A Askell, S Agarwal, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 1, 2020. 1 +[20] Ishan Misra, C Lawrence Zitnick, and Martial Hebert. Shuffle and learn: unsupervised learning using temporal order verification. In Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part I 14, pages 527-544. Springer, 2016. 6 +[21] Mathew Monfort, Alex Andonian, Bolei Zhou, Kandan Ramakrishnan, Sarah Adel Bargal, Tom Yan, Lisa Brown, Quanfu Fan, Dan Gutfreund, Carl Vondrick, et al. Moments in time dataset: one million videos for event understanding. IEEE transactions on pattern analysis and machine intelligence, 42(2):502-508, 2019. 2, 3 + +[22] Long Qian, Juncheng Li, Yu Wu, Yaobo Ye, Hao Fei, TatSeng Chua, Yueting Zhuang, and Siliang Tang. Momentor: Advancing video large language model with fine-grained temporal reasoning. arXiv preprint arXiv:2402.11435, 2024. 2 +[23] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140):1-67, 2020. 1 +[24] Shuhuai Ren, Linli Yao, Shicheng Li, Xu Sun, and Lu Hou. Timechat: A time-sensitive multimodal large language model for long video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14313-14323, 2024. 2 +[25] Fangxun Shu, Lei Zhang, Hao Jiang, and Cihang Xie. Audio-visual llm for video understanding. arXiv preprint arXiv:2312.06720, 2023. 2 +[26] Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. 
Videobert: A joint model for video and language representation learning. In Proceedings of the IEEE/CVF international conference on computer vision, pages 7464-7473, 2019. 1
[27] TencentQQ Multimedia Research Team. Video-ccam: Advancing video-language understanding with causal cross-attention masks. https://github.com/QQ-MM/Video-CCAM, 2024. 4
[28] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024. 1, 3, 7
[29] Yi Wang, Yinan He, Yizhuo Li, Kunchang Li, Jiashuo Yu, Xin Ma, Xinhao Li, Guo Chen, Xinyuan Chen, Yaohui Wang, et al. Internvid: A large-scale video-text dataset for multimodal understanding and generation. arXiv preprint arXiv:2307.06942, 2023. 2
[30] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837, 2022. 1
[31] Haiyang Xu, Qinghao Ye, Xuan Wu, Ming Yan, Yuan Miao, Jiabo Ye, Guohai Xu, Anwen Hu, Yaya Shi, Guangwei Xu, et al. Youku-mplug: A 10 million large-scale chinese video-language dataset for pre-training and benchmarks. arXiv preprint arXiv:2306.04362, 2023. 2
[32] Mingze Xu, Mingfei Gao, Zhe Gan, Hong-You Chen, Zhengfeng Lai, Haiming Gang, Kai Kang, and Afshin Dehghan. Slowfast-llava: A strong training-free baseline for video large language models. arXiv preprint arXiv:2407.15841, 2024. 2
[33] Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, and Cordelia Schmid. Vid2seq: Large-scale pretraining of a visual language model for dense video captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10714-10726, 2023.
2 + +[34] Yuan Yao, Tianyu Yu, Ao Zhang, Chongyi Wang, Junbo Cui, Hongji Zhu, Tianchi Cai, Haoyu Li, Weilin Zhao, Zhihui He, et al. Minicpm-v: A gpt-4v level mllm on your phone. arXiv preprint arXiv:2408.01800, 2024. 1, 3, 7 +[35] Hang Zhang, Xin Li, and Lidong Bing. Video-llama: An instruction-tuned audio-visual language model for video understanding. arXiv preprint arXiv:2306.02858, 2023. 1, 2 +[36] Zijia Zhao, Haoyu Lu, Yuqi Huo, Yifan Du, Tongtian Yue, Longteng Guo, Bingning Wang, Weipeng Chen, and Jing Liu. Needle in a video haystack: A scalable synthetic framework for benchmarking video mllms. arXiv preprint arXiv:2406.09367, 2024. 2 +[37] Yuan Zhi, Zhan Tong, Limin Wang, and Gangshan Wu. Mgsampler: An explainable sampling strategy for video action recognition. In Proceedings of the IEEE/CVF International conference on Computer Vision, pages 1513-1522, 2021. 7, 8 +[38] Junjie Zhou, Yan Shu, Bo Zhao, Boya Wu, Shitao Xiao, Xi Yang, Yongping Xiong, Bo Zhang, Tiejun Huang, and Zheng Liu. Mlvu: A comprehensive benchmark for multi-task long video understanding. arXiv preprint arXiv:2406.04264, 2024. 2 +[39] Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019. 2 + +# $\mathbf{SF^{2}T}$ : Self-supervised Fragment Finetuning of Video-LLMs for Fine-Grained Understanding + +Supplementary Material + +In this supplementary material, Section A presents $\mathrm{SF^2T}$ 's performance on video caption tasks and additional exemplary visualizations of the attention map, while Section B provides more details about FineVidBench. + +# A. More Results and Cases + +In addition to FineVidBench and public video understanding benchmarks, we also evaluated the video caption task (Table 1) using GPT-4o mini, assessing fluency, relevance, informativeness, and correctness, with a maximum score of 40. 
The results show that incorporating $\mathrm{SF^2T}$ improves performance, highlighting that fine-grained understanding also benefits video captioning. However, after fine-tuning, MiniCPM-V 2.6 produced shorter responses, leading to a decrease in its informativeness score. + +
| Methods | LLaVA-NeXT-Video | MiniCPM-V 2.6 | VideoLLaMA 2.1 | Qwen2-VL |
|---|---|---|---|---|
| Base | 33.20 | 32.61 | 22.53 | 29.76 |
| Base+$\mathrm{SF^2T}$ | 33.29 | 29.73 ↓ | 30.99 | 30.05 |
| Base(SFT) | 27.62 | 29.60 | 27.19 | 29.66 |
| Base(SFT)+$\mathrm{SF^2T}$ | 30.50 | 31.31 | 28.94 | 31.04 |
Table 1. Performance on video caption task. The results show that incorporating $\mathrm{SF^2T}$ yields higher scores (except MiniCPM-V 2.6), likely due to its enhanced temporal sensitivity and understanding.

As shown in Figure 1, we present more attention maps for Qwen2-VL on the Action task, focusing on cases where the model's predictions were corrected after applying $\mathrm{SF}^2\mathrm{T}$ .

# B. Details of FineVidBench

# B.1. Question-Answer Templates

Table 2 delineates the question templates for each task. For the answers, Scene-level tasks include the Action task, whose answers are composed of "visual synonyms" and other verbs; the Effect task, whose answers are scripted by researchers based on video content; and the Speed task, which offers fixed options: fast, slow, normal, and no speed. Fragment-level tasks encompass Frame Count, with answers ranging from 2 to 6; Meaning of Order, using ordinal numbers as responses; Frame Comparison and Adjust or Not, with responses of Yes, No, and Not sure; and Rearrangement, where the answer is a permutation of N numbers, with N representing the number of input frames. The Question-Answer database is generated through a process of template creation followed by iterative refinement using GPT-4. For Action and Effect tasks, each original video is queried three times using different question formulations. For Speed tasks, one query is conducted for both the original and the speed-altered versions of the video. For Fragment-Level tasks, all five questions are posed for each unique frame count.

# B.2. Detailed Results

# - Scene Level

Table 3 illustrates the types of action effects and examples in the Effect tasks. For the affected objects, common physical attributes and quantities of objects are considered; notably, the positional relationship, spatial distance, and similarity between two objects are examined. Regarding action attributes, the intensity and completeness of the action are evaluated.
Special actions include slight movement, multiple-object movements where several affected objects undergo motion, and compound movements involving two or more atomic actions linked in time. Additionally, camera movements and the inclination of the surface on which objects move are assessed. Table 4 presents the results categorized under the Effect classification. Overall, models performed well in Physical Attributes and Action Intensity, likely due to the ability to infer such information by comparing images before and after the action occurs. However, models exhibited subpar performance in Action Completion and Camera Motion. The former suggests a lack of understanding regarding the distinction between completed and incomplete actions in terms of their effects, while the latter is attributable to the inherent variability and complexity of camera movements. For other tasks, the majority of models exhibited moderate performance.

# - Fragment Level

Table 5 presents the results for all tasks in the fragment level under varying input frame counts. From the results, we can observe that, except for Video-CCAM, the models' ability to count frames significantly declines as the frame count increases. Regarding the understanding of order concepts, most models show a clear upward trend, except for ShareGPT4Video. Models generally perform well on the frame comparison task, likely due to extensive training with image-text pairs. Since the input consistently involves two frames, the results show no significant variation, as expected. For Rearrangement, all results hover around random values, suggesting that while models recognize incorrect sequence orders, they cannot correct them, indicating a failure to truly grasp the dynamic processes of videos.
For each example: top - Original frames; middle - Base (SFT); bottom - $\mathrm{SF^2T}$ applied. As highlighted by the red boxes, applying $\mathrm{SF^2T}$ enables the model to better focus on action execution areas and interacting objects, while also predicting the direction of motion. + +
| Level | Task | Question |
|---|---|---|
| Scene Level | Action | Which activity can be seen in the video? |
| | Effect | After the action takes place, what changes occur to the object? |
| | | During the process of the action, what changes occur to the object? |
| | | After the action takes place, what changes occur in the field of vision? |
| | Speed | What is the rate of movement in the video? |
| Fragment Level | Frame Count | Could you please tell me how many frames I have inputted? |
| | Meaning of Order | In the sequence of frames provided, on which frame does the object first appear? |
| | | In the sequence of frames provided, on which frame does the object last appear? |
| | | In the sequence of frames provided, in which frames does the object exist? |
| | Frame Comparison | Are the two frames I provided exactly the same? |
| | Adjust or Not | These frames are all from the same video and capture the dynamic process of an action. The order of these frames may have been mixed up. Do we need to rearrange them to match the normal execution sequence of the action? |
| | Rearrangement | These frames are all from the same video and depict the dynamic process of an action. The order of these frames may have been mixed up. Based on the connections between the image frames, which of the following options represents the most appropriate sequence? |
+ +Table 2. Question templates authored by researchers undergo revision by GPT-4o, which rephrases them to maintain the original intent while introducing varied sentence structures and vocabulary. + +
| Category | Effect Type | Examples |
|---|---|---|
| Object Properties | Physical Properties | What modifications occur to the wafer stick as a result of the action?<br>A. Not sure B. Nothing happened C. It broke D. It deformed |
| | Quantity | Once the action occurs, what changes are made to the mugs?<br>A. There are about 5 or 6 mugs here B. There are about 1 or 2 mugs here<br>C. There are about 3 or 4 mugs here D. Not sure |
| Object Relationships | Position | What adjustments take place in the egg following the action?<br>A. An object appeared on top of it B. An object appeared in front of it<br>C. An object appeared inside it D. An object appeared behind it |
| | Distance | What changes happen to the chili and the cucumber after the action is performed?<br>A. They grew more distant B. It's unclear<br>C. They came nearer D. Their separation remained consistent |
| | Similarity | What adjustments take place in the box following the action?<br>A. One thing appeared above it<br>B. Several things appeared above it, and they looked different from each other<br>C. Not sure<br>D. Several things appeared above it, and they looked similar to each other |
| Action Properties | Intensity | What alterations are observed in the paper cups after the action is taken?<br>A. Not sure B. It collapsed C. It broke D. It remained standing |
| | Completion | After the action is done, what modifications occur to the onion?<br>A. It appears unchanged from how it was initially<br>B. Something was visible at the back of it<br>C. An item appeared on its surface<br>D. Something was detected below it |
| Special Actions | Slight Movement | What adjustments take place in the shower pouf during the action?<br>A. I'm uncertain B. It dropped to the ground C. It was nearly at rest D. It ascended |
| | Multiple-Object | What happens to the two chargers while the action is executed?<br>A. They crossed paths B. They impacted each other<br>C. They proceeded in the same direction D. It's unclear |
| | Compound | During the process of action, what modifications are observed in the plate?<br>A. It fell after leaving the hand and did not come back<br>B. It was continuously held without any separation<br>C. It was detached from the hand but later reattached<br>D. Unclear |
| Others | Camera Movement | What alterations are evident in the flower while the action is carried out?<br>A. It appeared to move to the right in view B. It appeared to ascend in view<br>C. It appeared to move to the left in view D. I can't determine |
| | Surface Inclination | After the action is taken, what changes are noticed in the cup?<br>A. It was stationary on a tilted surface B. It was stationary on a horizontal surface<br>C. Not sure D. It rolled down a sloped surface |
+ +Table 3. Types of Effect Task + +
| Effect Type (Random: 25.00) | | LLaVA-NeXT-Video | MiniCPM-V 2.6 | VideoLLaMA 2.1 | Qwen2-VL | ShareGPT4Video | Video-CCAM | Avg. |
|---|---|---|---|---|---|---|---|---|
| Object Properties | Physical Properties | 44.20 | 49.28 | 52.17 | 60.87 | 47.54 | 63.48 | 52.92 |
| | Quantity | 33.33 | 47.62 | 56.19 | 58.10 | 41.90 | 60.95 | 49.68 |
| Object Relationships | Position | 41.03 | 51.28 | 49.23 | 54.36 | 40.31 | 50.36 | 47.76 |
| | Distance | 39.56 | 46.67 | 40.89 | 40.44 | 40.44 | 48.44 | 42.74 |
| | Similarity | 42.86 | 49.52 | 47.62 | 52.38 | 38.10 | 59.05 | 48.25 |
| Action Properties | Intensity | 40.27 | 50.67 | 53.33 | 61.33 | 52.53 | 62.13 | 53.38 |
| | Completion | 39.31 | 43.68 | 38.85 | 35.63 | 48.05 | 34.02 | 39.92 |
| Special Actions | Slight Movement | 47.92 | 43.75 | 41.67 | 72.92 | 35.42 | 54.58 | 49.38 |
| | Multiple-Object | 50.00 | 60.67 | 76.67 | 66.67 | 40.67 | 58.67 | 58.89 |
| | Compound | 48.15 | 44.44 | 51.11 | 52.59 | 35.56 | 53.33 | 47.53 |
| Others | Camera Movement | 33.33 | 22.22 | 28.89 | 26.67 | 32.22 | 28.89 | 28.70 |
| | Surface Inclination | 28.57 | 49.52 | 58.57 | 60.48 | 41.43 | 51.43 | 48.33 |
+ +Table 4. The results of the Effect task, dissected into more granular categories. Overall, Qwen2-VL achieved the best results, with Video-CCAM closely following. Notably, models exhibit suboptimal performance in distinguishing completed from incomplete actions, indicating a lack of ability to associate actions with the resulting state changes of objects. + +
| Input | Question | Random | LLaVA-NeXT-Video | MiniCPM-V 2.6 | VideoLLaMA 2.1 | Qwen2-VL | ShareGPT4Video | Video-CCAM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 3 | q1 | 25.00 | 20.33 | 93.82 | 42.86 | 97.25 | 60.99 | 14.18 |
| | q2 | 25.00 | 19.23 | 48.90 | 35.71 | 29.12 | 76.15 | 38.35 |
| | q3 | 33.33 | 46.96 | 80.66 | 71.27 | 71.82 | 88.41 | 66.34 |
| | q4 | 33.33 | 69.23 | 65.38 | 81.54 | 80.00 | 75.55 | 80.06 |
| | q5 | 25.00 | 23.85 | 23.08 | 33.08 | 27.69 | 23.68 | 23.36 |
| 4 | q1 | 25.00 | 19.77 | 90.66 | 39.89 | 96.63 | 16.78 | 8.96 |
| | q2 | 25.00 | 24.16 | 60.67 | 41.01 | 33.15 | 65.42 | 43.65 |
| | q3 | 33.33 | 58.76 | 78.53 | 76.84 | 77.40 | 87.23 | 63.63 |
| | q4 | 33.33 | 74.42 | 79.85 | 93.80 | 95.35 | 87.50 | 94.46 |
| | q5 | 25.00 | 19.38 | 14.73 | 24.81 | 20.93 | 23.10 | 22.94 |
| 5 | q1 | 25.00 | 17.98 | 86.44 | 7.45 | 96.05 | 0.00 | 47.61 |
| | q2 | 25.00 | 28.81 | 59.89 | 50.28 | 37.85 | 41.00 | 55.24 |
| | q3 | 33.33 | 55.68 | 67.61 | 80.11 | 74.43 | 89.69 | 64.83 |
| | q4 | 33.33 | 82.81 | 84.38 | 94.53 | 96.88 | 91.55 | 96.49 |
| | q5 | 25.00 | 18.75 | 16.41 | 22.66 | 18.75 | 23.29 | 23.92 |
+ +Table 5. The results of all tasks in Fragment-Level under varying input frame counts. Questions q1 through q5 correspond to Frame Count, Meaning of Order, Frame Comparison, Adjust or Not, and Rearrangement, respectively. \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07745/images/0319bb62a6c7c45a00954b86bf7d0f3bcf0e06eb20112b16245d801ed8821d52.jpg b/data/2025/2504_07xxx/2504.07745/images/0319bb62a6c7c45a00954b86bf7d0f3bcf0e06eb20112b16245d801ed8821d52.jpg new file mode 100644 index 0000000000000000000000000000000000000000..0640036ef6d75b1ba07c20e75b8943a94f92fbf2 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/0319bb62a6c7c45a00954b86bf7d0f3bcf0e06eb20112b16245d801ed8821d52.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0effb3ce2ab2d2732f4bb143a297a9f68a28d33e4ca438e76fdd6f9be423bca2 +size 344200 diff --git a/data/2025/2504_07xxx/2504.07745/images/0a58625ba85cf5cfb4fb31336be2120b7989cbc33f398f387001052658cf027f.jpg b/data/2025/2504_07xxx/2504.07745/images/0a58625ba85cf5cfb4fb31336be2120b7989cbc33f398f387001052658cf027f.jpg new file mode 100644 index 0000000000000000000000000000000000000000..8d6003784b605d0f5e5750fecdcde02e5518cdde --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/0a58625ba85cf5cfb4fb31336be2120b7989cbc33f398f387001052658cf027f.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6fcf450c0c0295d74313d7d043ddde2626b8a0ed3b5276a8e86a738af10fa724 +size 2740 diff --git a/data/2025/2504_07xxx/2504.07745/images/19248d16a197d1b96fda68b99bbd8e7350e03c4bad689ecb47f3df3e3a40504b.jpg b/data/2025/2504_07xxx/2504.07745/images/19248d16a197d1b96fda68b99bbd8e7350e03c4bad689ecb47f3df3e3a40504b.jpg new file mode 100644 index 0000000000000000000000000000000000000000..18b9f8dee32e6be4a7b3b7085f79212e40ca3463 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/19248d16a197d1b96fda68b99bbd8e7350e03c4bad689ecb47f3df3e3a40504b.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:18d32be7a43796d757647f7bc269c19bf4856d134436884b377b84e0e7e48b3a +size 13046 diff --git a/data/2025/2504_07xxx/2504.07745/images/1c017519b4dd297ab87f91be6c92044ca1ad34f27731bb71dfa53be4193d82a8.jpg b/data/2025/2504_07xxx/2504.07745/images/1c017519b4dd297ab87f91be6c92044ca1ad34f27731bb71dfa53be4193d82a8.jpg new file mode 100644 index 0000000000000000000000000000000000000000..06ef2c4d625ec471d3a628d4581a0ce232cdc968 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/1c017519b4dd297ab87f91be6c92044ca1ad34f27731bb71dfa53be4193d82a8.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9592aaff70a124658d0b32680fab3685ef67299c0022784ecd64b49e8671a510 +size 163689 diff --git a/data/2025/2504_07xxx/2504.07745/images/217b75d5da1feb710205a3ea17f34a12c93a21948b855474060681bc48f62589.jpg b/data/2025/2504_07xxx/2504.07745/images/217b75d5da1feb710205a3ea17f34a12c93a21948b855474060681bc48f62589.jpg new file mode 100644 index 0000000000000000000000000000000000000000..720fd9c9a76cf902810c62dc0da8bc0aabfd7cd6 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/217b75d5da1feb710205a3ea17f34a12c93a21948b855474060681bc48f62589.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:16323018b1ee91014e772f590fbed57142b84eb7fd6afe91f7009da63a8a5ddd +size 82028 diff --git a/data/2025/2504_07xxx/2504.07745/images/23cdbc4c335792960b7d2d8a1e4e2928f978a8c323016f3aa1f2b2984b02bfc5.jpg b/data/2025/2504_07xxx/2504.07745/images/23cdbc4c335792960b7d2d8a1e4e2928f978a8c323016f3aa1f2b2984b02bfc5.jpg new file mode 100644 index 0000000000000000000000000000000000000000..14b7d3e70aa9509ec691ff688f4c60723fcf9ac7 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/23cdbc4c335792960b7d2d8a1e4e2928f978a8c323016f3aa1f2b2984b02bfc5.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39f6a17f0bedfc8b3a9bbb59b3b7592f5056b207458024c58804b88181823ac0 +size 153319 diff --git 
a/data/2025/2504_07xxx/2504.07745/images/28b766dc0d29f6273e26a9aed75c3de5a772a124ae3c9ab068cf1b8c96d348cf.jpg b/data/2025/2504_07xxx/2504.07745/images/28b766dc0d29f6273e26a9aed75c3de5a772a124ae3c9ab068cf1b8c96d348cf.jpg new file mode 100644 index 0000000000000000000000000000000000000000..3126b66424fde80b61e029d1edcd18e4f025f28f --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/28b766dc0d29f6273e26a9aed75c3de5a772a124ae3c9ab068cf1b8c96d348cf.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3cc6cdc09507eade5853056c30a35520188b8a0744873e17759ff94cd5f0b6b5 +size 2170 diff --git a/data/2025/2504_07xxx/2504.07745/images/3d46cc1d8054d539ff26f9bae25b46b21842a0dd60dcb241845cd705e00f23fd.jpg b/data/2025/2504_07xxx/2504.07745/images/3d46cc1d8054d539ff26f9bae25b46b21842a0dd60dcb241845cd705e00f23fd.jpg new file mode 100644 index 0000000000000000000000000000000000000000..0245b9ade759d323f161b110a834d43fffec831f --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/3d46cc1d8054d539ff26f9bae25b46b21842a0dd60dcb241845cd705e00f23fd.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6a9ba5dfde94cf0ed855557951a920c620c58e00250e604ff05f713330d213e9 +size 80114 diff --git a/data/2025/2504_07xxx/2504.07745/images/429857c0df66bdaab0c9ae373e86e7fa148de34892ef91f2fc5df09ad7c95d16.jpg b/data/2025/2504_07xxx/2504.07745/images/429857c0df66bdaab0c9ae373e86e7fa148de34892ef91f2fc5df09ad7c95d16.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a0e567fbe2dd0b577047debc571090c56d21ea93 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/429857c0df66bdaab0c9ae373e86e7fa148de34892ef91f2fc5df09ad7c95d16.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a513bfbb35e4cefc534a97431b3c38ae13fb91a5836b7dd0b8cbf41897fb06bd +size 1230 diff --git a/data/2025/2504_07xxx/2504.07745/images/467e71fa5605af2d18a37b665be66f536c52432df6cb57a8237fb077a9b6d1d8.jpg 
b/data/2025/2504_07xxx/2504.07745/images/467e71fa5605af2d18a37b665be66f536c52432df6cb57a8237fb077a9b6d1d8.jpg new file mode 100644 index 0000000000000000000000000000000000000000..6048544acfeea5ab9db425bfda197c66354c39f7 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/467e71fa5605af2d18a37b665be66f536c52432df6cb57a8237fb077a9b6d1d8.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6f8efd9b67645f79be76ec3439dbc0136467a6a5d28ded2fc28703f6a85c64eb +size 1143 diff --git a/data/2025/2504_07xxx/2504.07745/images/5181414074d914e281c8b31ab29fd933ae170f5b1996bf40ca9a481d714dd227.jpg b/data/2025/2504_07xxx/2504.07745/images/5181414074d914e281c8b31ab29fd933ae170f5b1996bf40ca9a481d714dd227.jpg new file mode 100644 index 0000000000000000000000000000000000000000..470d7f22de09d0b8c6c259500824fe8fdc134cd1 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/5181414074d914e281c8b31ab29fd933ae170f5b1996bf40ca9a481d714dd227.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:74c49ffda2abac78496b34572b250c45c7b0fb6b1e3cdb1029941b4249e179f4 +size 38562 diff --git a/data/2025/2504_07xxx/2504.07745/images/5e842eed2a0e603ad9c20ae9db677b6cf056a6564fb67f982bd3e1a5900ebe1c.jpg b/data/2025/2504_07xxx/2504.07745/images/5e842eed2a0e603ad9c20ae9db677b6cf056a6564fb67f982bd3e1a5900ebe1c.jpg new file mode 100644 index 0000000000000000000000000000000000000000..009181d2022836ec9a973ac06041e1cb8bbbdb04 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/5e842eed2a0e603ad9c20ae9db677b6cf056a6564fb67f982bd3e1a5900ebe1c.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b1b9c6fac4af8f1ea5a434c3e400fe6c349e334368b760a888ec90c61ab6e2c +size 1419 diff --git a/data/2025/2504_07xxx/2504.07745/images/62e0cea972697992ac6e19803b67f81b78c8f447611fcd93268e33d68991c90f.jpg b/data/2025/2504_07xxx/2504.07745/images/62e0cea972697992ac6e19803b67f81b78c8f447611fcd93268e33d68991c90f.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..0901cdf530547ad065a8435ad0c018fb2ee9b50e --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/62e0cea972697992ac6e19803b67f81b78c8f447611fcd93268e33d68991c90f.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:17516a5411e74b23806c865b2bf1f47094a5054a0046db7219e14c269df7f64b +size 134059 diff --git a/data/2025/2504_07xxx/2504.07745/images/64fc26c33c1d9ab6da9e7af66481e5d25957d2ee4812fc419ceac98a3dd71b5c.jpg b/data/2025/2504_07xxx/2504.07745/images/64fc26c33c1d9ab6da9e7af66481e5d25957d2ee4812fc419ceac98a3dd71b5c.jpg new file mode 100644 index 0000000000000000000000000000000000000000..33e3fc881b5097af9ca3010602fd66132080d748 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/64fc26c33c1d9ab6da9e7af66481e5d25957d2ee4812fc419ceac98a3dd71b5c.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1bc7e0fb1cb236ea3a99f502b87497bc19cd8d959eb3a03dccb87a670a5cc479 +size 27610 diff --git a/data/2025/2504_07xxx/2504.07745/images/6aaa13d9fd9f9561511b88091ada04ad2bb2f209dcbe5e7e1c29745f1c9b4178.jpg b/data/2025/2504_07xxx/2504.07745/images/6aaa13d9fd9f9561511b88091ada04ad2bb2f209dcbe5e7e1c29745f1c9b4178.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f1df080ab995adf81cf2ee13f25a16484b220984 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/6aaa13d9fd9f9561511b88091ada04ad2bb2f209dcbe5e7e1c29745f1c9b4178.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d5bad2488feca326b944ef70ddec8c2b0a87b4e744fbd4c7727f9b7d790f491f +size 9032 diff --git a/data/2025/2504_07xxx/2504.07745/images/73ded6200c277082dc2f10323cd0e1a1f5fb0713d2236a09f24da8bb6447951b.jpg b/data/2025/2504_07xxx/2504.07745/images/73ded6200c277082dc2f10323cd0e1a1f5fb0713d2236a09f24da8bb6447951b.jpg new file mode 100644 index 0000000000000000000000000000000000000000..5f7e02adcfdd6d6716a712a2687c7e80bee2bae4 --- /dev/null +++ 
b/data/2025/2504_07xxx/2504.07745/images/73ded6200c277082dc2f10323cd0e1a1f5fb0713d2236a09f24da8bb6447951b.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9062a88f7d3c16a2ca0519420cca1d86a3a18b931d7fabab5bf05b1f52bae084 +size 116218 diff --git a/data/2025/2504_07xxx/2504.07745/images/887a2fb3f507996fa3a3946a0c447d7b2e1125f8a8a144abe6869e78185ea560.jpg b/data/2025/2504_07xxx/2504.07745/images/887a2fb3f507996fa3a3946a0c447d7b2e1125f8a8a144abe6869e78185ea560.jpg new file mode 100644 index 0000000000000000000000000000000000000000..19a93d04b3f40328fc66c51e451c792c57bf01d8 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/887a2fb3f507996fa3a3946a0c447d7b2e1125f8a8a144abe6869e78185ea560.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9bbb6c26ebc06c4d39625320fd55f3101a019f7873adf3c515a0cd69edd8834b +size 46309 diff --git a/data/2025/2504_07xxx/2504.07745/images/93dc7847bb0b3fc5fb2f4ee5b4155ff76f57da3f8eb0b56e88bc6dc5ef2f1340.jpg b/data/2025/2504_07xxx/2504.07745/images/93dc7847bb0b3fc5fb2f4ee5b4155ff76f57da3f8eb0b56e88bc6dc5ef2f1340.jpg new file mode 100644 index 0000000000000000000000000000000000000000..e831f8d2b443cb7552a1ebaf1c5c5ade420e4b5e --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/93dc7847bb0b3fc5fb2f4ee5b4155ff76f57da3f8eb0b56e88bc6dc5ef2f1340.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:979c2cd5c551a8b7ce4d37cb230b2f8a83fa544bd2b01041a47a3392abd0b8f0 +size 36291 diff --git a/data/2025/2504_07xxx/2504.07745/images/ad9426eb0e0740c744fe63a8d4e4c7810ffbfbeb6acfc863b32655e01fed85c8.jpg b/data/2025/2504_07xxx/2504.07745/images/ad9426eb0e0740c744fe63a8d4e4c7810ffbfbeb6acfc863b32655e01fed85c8.jpg new file mode 100644 index 0000000000000000000000000000000000000000..c0297c9eac10176b6b8f77988e8f96efbe674110 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/ad9426eb0e0740c744fe63a8d4e4c7810ffbfbeb6acfc863b32655e01fed85c8.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:e857191bfa72c9f9319b30cae5bdff7e22ff70bd976d795b456d9300a23db08e +size 11821 diff --git a/data/2025/2504_07xxx/2504.07745/images/b06227fb2753f649b91d143258466e2abef9e9f856bc50b35884caefb010def2.jpg b/data/2025/2504_07xxx/2504.07745/images/b06227fb2753f649b91d143258466e2abef9e9f856bc50b35884caefb010def2.jpg new file mode 100644 index 0000000000000000000000000000000000000000..63f46a9bfca17e5c243f5f581c47152e8f91f388 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/b06227fb2753f649b91d143258466e2abef9e9f856bc50b35884caefb010def2.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6bce98556b049990d2d1d66f56dd009cd8b131bac695b4c373d820ecd440d0f7 +size 11726 diff --git a/data/2025/2504_07xxx/2504.07745/images/b27d6f514bd85f5074b842d69558d79d7428a8753afed274de875ac74caa6f02.jpg b/data/2025/2504_07xxx/2504.07745/images/b27d6f514bd85f5074b842d69558d79d7428a8753afed274de875ac74caa6f02.jpg new file mode 100644 index 0000000000000000000000000000000000000000..388b36301001b24110ae6bc50f6b4a0a54bf2a27 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/b27d6f514bd85f5074b842d69558d79d7428a8753afed274de875ac74caa6f02.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9c17020bb411846963611a4553c833539dd7cd2a93f4cf68d16fe683af9e0316 +size 2064 diff --git a/data/2025/2504_07xxx/2504.07745/images/b3a2c4ac17a94b2b36d9372be305b746a5effe392fb7d2a064b2cde37a70c8cf.jpg b/data/2025/2504_07xxx/2504.07745/images/b3a2c4ac17a94b2b36d9372be305b746a5effe392fb7d2a064b2cde37a70c8cf.jpg new file mode 100644 index 0000000000000000000000000000000000000000..8a93bfd7379c9439508f2d1034dd9ead18ffac97 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/b3a2c4ac17a94b2b36d9372be305b746a5effe392fb7d2a064b2cde37a70c8cf.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b92471c8973aca0ebe8397898b643fd362838421ea77aa0110ce069807444a24 +size 1485 diff --git 
a/data/2025/2504_07xxx/2504.07745/images/b3fdb1142169dfd3731bb1039d8390b91cbe26fcacef82e674e2d0655fa3f0b9.jpg b/data/2025/2504_07xxx/2504.07745/images/b3fdb1142169dfd3731bb1039d8390b91cbe26fcacef82e674e2d0655fa3f0b9.jpg new file mode 100644 index 0000000000000000000000000000000000000000..1e178bd2325d3e2432c7ad39822309b50646bf75 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/b3fdb1142169dfd3731bb1039d8390b91cbe26fcacef82e674e2d0655fa3f0b9.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:79c7e020a9f8d6c2b83bc2f3064e416f3fe4ab9bce446cefe2241268d7b63e82 +size 43843 diff --git a/data/2025/2504_07xxx/2504.07745/images/cd470606075ce8039139134a6a30f3dfda262ecce420c30962c766eb0017936c.jpg b/data/2025/2504_07xxx/2504.07745/images/cd470606075ce8039139134a6a30f3dfda262ecce420c30962c766eb0017936c.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a3e4c423db91f9d4d372b5f69066d311c8ee399b --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/cd470606075ce8039139134a6a30f3dfda262ecce420c30962c766eb0017936c.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:05c8b01fc7439bc93122e88615c32e0c6baecc17dd3c56f73a8d8336eb20bf8f +size 191240 diff --git a/data/2025/2504_07xxx/2504.07745/images/d1cdef868477757ac87ddd2dcf9068ab8d5ac5713f613471b1f47720544113eb.jpg b/data/2025/2504_07xxx/2504.07745/images/d1cdef868477757ac87ddd2dcf9068ab8d5ac5713f613471b1f47720544113eb.jpg new file mode 100644 index 0000000000000000000000000000000000000000..87a311a3ed5f4504ca4d18a44dbf9c1abbcc146d --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/d1cdef868477757ac87ddd2dcf9068ab8d5ac5713f613471b1f47720544113eb.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:25c3693e04efea3361cf52f967f9ec3be5faba416c3d1154ee5b239286347703 +size 49924 diff --git a/data/2025/2504_07xxx/2504.07745/images/d8920c8e1b5e723a0872e8b610991792488d583beccb64d01b2c1a9bfb280fac.jpg 
b/data/2025/2504_07xxx/2504.07745/images/d8920c8e1b5e723a0872e8b610991792488d583beccb64d01b2c1a9bfb280fac.jpg new file mode 100644 index 0000000000000000000000000000000000000000..97a126524e6122274328a2aca5ba55aec69acf9c --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/d8920c8e1b5e723a0872e8b610991792488d583beccb64d01b2c1a9bfb280fac.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:75e1589516e437b1d480f1c72ac7dcb6257b3883c200450783450aeee0d81d10 +size 12924 diff --git a/data/2025/2504_07xxx/2504.07745/images/e3c884b85391f03768d80cd1d13ec65d55a292c4da3b34fb5cfd15b2051d709f.jpg b/data/2025/2504_07xxx/2504.07745/images/e3c884b85391f03768d80cd1d13ec65d55a292c4da3b34fb5cfd15b2051d709f.jpg new file mode 100644 index 0000000000000000000000000000000000000000..c1715a3857a67c26f2a96c84b65545b786313253 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/e3c884b85391f03768d80cd1d13ec65d55a292c4da3b34fb5cfd15b2051d709f.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e4ded7cea62fc91f8eb32762cdf2506644fd19d99142e71a092e6b3ae3389dd +size 11601 diff --git a/data/2025/2504_07xxx/2504.07745/images/e51df2bd69d0ad08c8ab3c601cc8ecf9fc84a3cdd0f5275bcb937207f223a3b2.jpg b/data/2025/2504_07xxx/2504.07745/images/e51df2bd69d0ad08c8ab3c601cc8ecf9fc84a3cdd0f5275bcb937207f223a3b2.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b44fee0f16039fc399f32f681db602968a9a90b9 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/e51df2bd69d0ad08c8ab3c601cc8ecf9fc84a3cdd0f5275bcb937207f223a3b2.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f4f5a147c9dfa9829ad2a0660211cb8610fa094de748e24936b6e7d60908bf47 +size 1024 diff --git a/data/2025/2504_07xxx/2504.07745/images/e649ab7b72444c37363694726d639ac3bbdb25a6eedefd741ef6f75f8da50a71.jpg b/data/2025/2504_07xxx/2504.07745/images/e649ab7b72444c37363694726d639ac3bbdb25a6eedefd741ef6f75f8da50a71.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..96cb9b8f4d6e47a259cde98f1783e33ac663e652 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/e649ab7b72444c37363694726d639ac3bbdb25a6eedefd741ef6f75f8da50a71.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fdb3c52bdf548bb41e5d7a6eecca40efb5898171f43aa180c851c5f7d0bd2f01 +size 30146 diff --git a/data/2025/2504_07xxx/2504.07745/images/fa2958404e4aaaeb3d53d7c99de2d0fe6a0724dd0390a75cfc19c30ba10f8531.jpg b/data/2025/2504_07xxx/2504.07745/images/fa2958404e4aaaeb3d53d7c99de2d0fe6a0724dd0390a75cfc19c30ba10f8531.jpg new file mode 100644 index 0000000000000000000000000000000000000000..681dee59ed1d2c8ab2cd3d601a7024342b23a430 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/fa2958404e4aaaeb3d53d7c99de2d0fe6a0724dd0390a75cfc19c30ba10f8531.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0bd5f3cbff3958e2e6d08a353bf828d11564f9de05be5ba7bf57e511eb11d935 +size 33795 diff --git a/data/2025/2504_07xxx/2504.07745/images/fac7fe9dc31112f7b5a655a9f855c7785110c3e012266174e150ffa294dd3dfb.jpg b/data/2025/2504_07xxx/2504.07745/images/fac7fe9dc31112f7b5a655a9f855c7785110c3e012266174e150ffa294dd3dfb.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f1b7b448e2de5503e047c8239cae7916ee497b7d --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/images/fac7fe9dc31112f7b5a655a9f855c7785110c3e012266174e150ffa294dd3dfb.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:11c827488743adc25bd8a97febdb5c74059a7f8808784d5374392cbe62daa6ac +size 13618 diff --git a/data/2025/2504_07xxx/2504.07745/layout.json b/data/2025/2504_07xxx/2504.07745/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ce020970f99b4a594b1b3136d7ed663e19aff23c --- /dev/null +++ b/data/2025/2504_07xxx/2504.07745/layout.json @@ -0,0 +1,8775 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 67, + 102, + 542, + 139 + ], + "type": "title", 
+ "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 102, + 542, + 139 + ], + "spans": [ + { + "bbox": [ + 67, + 102, + 542, + 139 + ], + "type": "inline_equation", + "content": "\\mathbf{SF}^2 \\mathbf{T}" + }, + { + "bbox": [ + 67, + 102, + 542, + 139 + ], + "type": "text", + "content": ": Self-supervised Fragment Finetuning of Video-LLMs for Fine-Grained Understanding" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "spans": [ + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "text", + "content": "Yangliu Hu" + }, + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "text", + "content": ", Zikai Song" + }, + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "inline_equation", + "content": "^{1\\dagger}" + }, + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "text", + "content": ", Na Feng" + }, + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "text", + "content": ", Yawei Luo" + }, + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "text", + "content": ", Junqing Yu" + }, + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "text", + "content": ", Yi-Ping Phoebe Chen" + }, + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "text", + "content": ", Wei Yang" + }, + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "inline_equation", + "content": "^{1\\dagger}" + }, + { + "bbox": [ + 62, + 160, 
+ 545, + 193 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "text", + "content": "Huazhong University of Science and Technology " + }, + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "text", + "content": "Zhejiang University " + }, + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 62, + 160, + 545, + 193 + ], + "type": "text", + "content": "La Trobe University" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 154, + 194, + 457, + 206 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 154, + 194, + 457, + 206 + ], + "spans": [ + { + "bbox": [ + 154, + 194, + 457, + 206 + ], + "type": "text", + "content": "{huyangliu,skyesong,fengna,yjqing,weiyangcs}@hust.edu.cn" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 175, + 209, + 432, + 220 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 175, + 209, + 432, + 220 + ], + "spans": [ + { + "bbox": [ + 175, + 209, + 432, + 220 + ], + "type": "text", + "content": "yaweiluo@zju.edu.cn phoebe.chen@latrobe.edu.au" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 151, + 247, + 200, + 260 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 247, + 200, + 260 + ], + "spans": [ + { + "bbox": [ + 151, + 247, + 200, + 260 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 54, + 285, + 297, + 583 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 285, + 297, + 583 + ], + "spans": [ + { + "bbox": [ + 54, + 285, + 297, + 583 + ], + "type": "text", + "content": "Video-based Large Language Models (Video-LLMs) have witnessed substantial advancements in recent years, propelled by the advancement in multi-modal LLMs. 
Although these models have demonstrated proficiency in providing the overall description of videos, they struggle with fine-grained understanding, particularly in aspects such as visual dynamics and video details inquiries. To tackle these shortcomings, we find that fine-tuning Video-LLMs on self-supervised fragment tasks, greatly improve their fine-grained video understanding abilities. Hence we propose two key contributions: (1) Self-Supervised Fragment Fine-Tuning " + }, + { + "bbox": [ + 54, + 285, + 297, + 583 + ], + "type": "inline_equation", + "content": "(SF^2 T)" + }, + { + "bbox": [ + 54, + 285, + 297, + 583 + ], + "type": "text", + "content": ", a novel effortless fine-tuning method, employs the rich inherent characteristics of videos for training, while unlocking more fine-grained understanding ability of Video-LLMs. Moreover, it relieves researchers from labor-intensive annotations and smartly circumvents the limitations of natural language, which often fails to capture the complex spatiotemporal variations in videos; (2) A novel benchmark dataset, namely FineVidBench, for rigorously assessing Video-LLMs' performance at both the scene and fragment levels, offering a comprehensive evaluation of their capabilities. We assessed multiple models and validated the effectiveness of " + }, + { + "bbox": [ + 54, + 285, + 297, + 583 + ], + "type": "inline_equation", + "content": "SF^2 T" + }, + { + "bbox": [ + 54, + 285, + 297, + 583 + ], + "type": "text", + "content": " on them. Experimental results reveal that our approach improves their ability to capture and interpret spatiotemporal details." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 605, + 135, + 617 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 605, + 135, + 617 + ], + "spans": [ + { + "bbox": [ + 55, + 605, + 135, + 617 + ], + "type": "text", + "content": "1. 
Introduction" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 624, + 296, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 624, + 296, + 696 + ], + "spans": [ + { + "bbox": [ + 55, + 624, + 296, + 696 + ], + "type": "text", + "content": "Large Language Models (LLMs) have showcased significant emergent capabilities, such as in-context learning [19], instruction-following [23], and chain-of-thought reasoning [30], driven by expansive datasets and advanced model architectures. Extending these advancements, Video-LLMs through mechanisms like pooling or query aggregation" + } + ] + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 328, + 247, + 541, + 430 + ], + "blocks": [ + { + "bbox": [ + 328, + 247, + 541, + 430 + ], + "lines": [ + { + "bbox": [ + 328, + 247, + 541, + 430 + ], + "spans": [ + { + "bbox": [ + 328, + 247, + 541, + 430 + ], + "type": "image", + "image_path": "5181414074d914e281c8b31ab29fd933ae170f5b1996bf40ca9a481d714dd227.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 437, + 555, + 515 + ], + "lines": [ + { + "bbox": [ + 313, + 437, + 555, + 515 + ], + "spans": [ + { + "bbox": [ + 313, + 437, + 555, + 515 + ], + "type": "text", + "content": "Figure 1. Performance w/ and w/o " + }, + { + "bbox": [ + 313, + 437, + 555, + 515 + ], + "type": "inline_equation", + "content": "\\mathbf{SF}^2\\mathbf{T}" + }, + { + "bbox": [ + 313, + 437, + 555, + 515 + ], + "type": "text", + "content": ". We evaluated four advanced Video-LLMs w/ and w/o " + }, + { + "bbox": [ + 313, + 437, + 555, + 515 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 313, + 437, + 555, + 515 + ], + "type": "text", + "content": " on our proposed FineVidBench with two baselines: (1) Base: performance without any fine-tuning (blue dashed), and (2) Base (SFT): performance with supervised fine-tuning (red dashed). 
After applying " + }, + { + "bbox": [ + 313, + 437, + 555, + 515 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 313, + 437, + 555, + 515 + ], + "type": "text", + "content": ", all models showed significant improvements (solid blue and red), underscoring its broad effectiveness." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 542, + 555, + 626 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 542, + 555, + 626 + ], + "spans": [ + { + "bbox": [ + 313, + 542, + 555, + 626 + ], + "type": "text", + "content": "across numerous visual tokens, have broadened the scope of LLMs to encompass video information processing [11, 14, 35]. This evolution markedly advances their potential for in-depth real-world comprehension, opening applications in intelligent surveillance, virtual reality, and autonomous driving, further enriching the landscape of video analytics and interpretation." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 629, + 556, + 715 + ], + "type": "text", + "content": "Various Video-LLMs, exemplified by GPT4-V, VideoLaMA 2 [4], MiniCPM-V [34], and Qwen2-VL [28], have been crafted by leading corporations and research institutions, demonstrating proficiency in capturing the overarching content of videos. 
When adapting to new videos and tasks, they predominantly rely on Supervised FineTuning (SFT) [26] or Reinforcement Learning from Hu" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 14, + 208, + 37, + 559 + ], + "type": "aside_text", + "angle": 270, + "lines": [ + { + "bbox": [ + 14, + 208, + 37, + 559 + ], + "spans": [ + { + "bbox": [ + 14, + 208, + 37, + 559 + ], + "type": "text", + "content": "arXiv:2504.07745v1 [cs.CV] 10 Apr 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 70, + 703, + 151, + 713 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 70, + 703, + 151, + 713 + ], + "spans": [ + { + "bbox": [ + 70, + 703, + 151, + 713 + ], + "type": "text", + "content": "† Corresponding authors" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 294, + 251 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 294, + 251 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 294, + 251 + ], + "type": "text", + "content": "man Feedback (RLHF) [39], both of which are heavily contingent upon extensive manual annotation. This dependence poses several key problems: (1) it necessitates substantial human resources, particularly highly trained annotators; (2) the inherent complexity of video content and task demands frequently introduces inconsistencies and subjectivity, rendering the maintenance of high-quality annotations particularly arduous; and (3) subtle temporal variations across video frames are challenging to articulate with precision, often yielding generalized descriptions that constrain the Video-LLMs' potential. Consequently, existing Video-LLMs struggle with fine-grained video understanding tasks, particularly in aspects such as visual dynamics (e.g., motion patterns, object interactions) and video details inquiries (e.g., positional changes, detail variations)." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 251, + 295, + 430 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 251, + 295, + 430 + ], + "spans": [ + { + "bbox": [ + 55, + 251, + 295, + 430 + ], + "type": "text", + "content": "To address these challenges, we observe that fine-tuning Video-LLMs with self-supervised fragment tasks (by \"fragment\" we mean temporal, frame-level specifications of the video) could improve the model's sensitivity to spatiotemporal scene-level details (related to video contents). Driven by this, we introduce the Self-supervised Fragment Fine-Tuning " + }, + { + "bbox": [ + 55, + 251, + 295, + 430 + ], + "type": "inline_equation", + "content": "(\\mathrm{SF}^2\\mathrm{T})" + }, + { + "bbox": [ + 55, + 251, + 295, + 430 + ], + "type": "text", + "content": ", an effortless fine-tuning strategy for Video-LLMs that helps improve fine-grained video understanding. " + }, + { + "bbox": [ + 55, + 251, + 295, + 430 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 251, + 295, + 430 + ], + "type": "text", + "content": " consists of five fragment-level tasks—Counting, Consistency Verification, Localization, Disorder Detection, and Rearrangement—that automatically generate labels from various spatiotemporal perspectives. This approach maximizes the use of frame-level information while minimizing reliance on complex human instructions and annotations." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 431, + 295, + 622 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 431, + 295, + 622 + ], + "spans": [ + { + "bbox": [ + 55, + 431, + 295, + 622 + ], + "type": "text", + "content": "Moreover, to evaluate the fine-grained visual dynamic perception of Video-LLMs and fully demonstrate the effectiveness of our " + }, + { + "bbox": [ + 55, + 431, + 295, + 622 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 431, + 295, + 622 + ], + "type": "text", + "content": ", we present FineVidBench, a novel benchmark. FineVidBench comprises 910 videos and 22,718 question-answer pairs, with videos sourced from diverse public datasets, including Something-Something V2 (SSv2) [6], Moments in Time (MiT) [21], etc. The question-answer pairs are auto-generated in single-choice format, incorporating distractors to increase testing difficulty. We evaluated several notable Video-LLMs developed in recent years and found that they generally fail to understand the execution sequence of actions and struggle to grasp fine-grained spatiotemporal information. After fine-tuning with " + }, + { + "bbox": [ + 55, + 431, + 295, + 622 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 431, + 295, + 622 + ], + "type": "text", + "content": ", however, the Video-LLMs better recognize spatiotemporal details, leading to a holistic and marked improvement in fine-grained understanding." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 633, + 142, + 645 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 633, + 142, + 645 + ], + "spans": [ + { + "bbox": [ + 55, + 633, + 142, + 645 + ], + "type": "text", + "content": "2. 
Related Work" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 653, + 294, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 653, + 294, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 653, + 294, + 714 + ], + "type": "text", + "content": "Video-LLMs Finetuning Video-LLMs are primarily finetuned by adjusting the parameters of small, trainable adapters for task adaptation, without changing the entire model, saving resources and enhancing efficiency. The connective adapter (e.g., MLP/Linear Layer [15], Q" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 313, + 72, + 553, + 262 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 262 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 262 + ], + "type": "text", + "content": "former [10]) links the Video Embedder and LLM, aligning video embeddings with LLM input tokens, while insertive adapters (e.g., LoRA [8]) are directly integrated into the LLM to modify its behavior. Most Video-LLMs combine both types of adapters and typically use multi-stage finetuning [4, 11, 13, 24, 35]. First, the model learns to establish relationships between images, videos, and text using large-scale multimodal datasets [1, 2, 29, 31]. In the second stage, the model is fine-tuned with a curated instruction-following dataset [11, 17, 18]. Besides these, there is full fine-tuning, which updates all LLM parameters with a lower learning rate [25, 33], and there are zero-shot models, which transform the video task into a text task, typically relying on a powerful LLM [32]. However, annotating video data remains a labor-intensive and time-consuming task, particularly for long videos or those involving complex actions." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 264, + 553, + 467 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 264, + 553, + 467 + ], + "spans": [ + { + "bbox": [ + 313, + 264, + 553, + 467 + ], + "type": "text", + "content": "Benchmarks on Video-LLMs Currently, many studies [3, 5, 38] focus on evaluating the temporal perception capabilities of Video-LLMs. MVBench [12] designs 20 tasks from temporal and spatial perspectives, and TempCompass [16] introduces 5 temporal aspects and 4 task formats. VN-Bench [36] decouples video content from the QA pairs by inserting irrelevant images or text \"needles\" into the original video. Moment-10M [22] constructs a large-scale dataset for temporal localization tasks. However, as illustrated in Table 1, these studies often focus on gathering diverse videos or evaluating the models' performance with long videos, while somewhat neglecting the models' ability to perform fine-grained perception of temporal details. To address this gap, FineVidBench breaks videos into multiple sets of frames and generates annotations from diverse spatiotemporal perspectives, introducing novel evaluation methods for fine-grained understanding." + } + ] + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 316, + 475, + 553, + 596 + ], + "blocks": [ + { + "bbox": [ + 316, + 475, + 553, + 596 + ], + "lines": [ + { + "bbox": [ + 316, + 475, + 553, + 596 + ], + "spans": [ + { + "bbox": [ + 316, + 475, + 553, + 596 + ], + "type": "table", + "html": "
BenchmarksVideo num.QA num.Input ChangeTemporal DiversityFine-Grained EvaluationHierarchical Test
Video-MME9002700XXXX
TempCompass4107540XX
VN-Bench-1350XX
Moment-10M64.9k10.4MXXXX
AutoEval-Video327327XXXX
MVBench36414000XXX
MLVU13342593XXXX
FineVidBench91022,718
", + "image_path": "fa2958404e4aaaeb3d53d7c99de2d0fe6a0724dd0390a75cfc19c30ba10f8531.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_body" + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 604, + 553, + 638 + ], + "lines": [ + { + "bbox": [ + 313, + 604, + 553, + 638 + ], + "spans": [ + { + "bbox": [ + 313, + 604, + 553, + 638 + ], + "type": "text", + "content": "Table 1. Comparison with related benchmarks. Our approach offers significant advantages in input formats, evaluation methods, granularity, and temporal diversity." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 313, + 657, + 465, + 670 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 657, + 465, + 670 + ], + "spans": [ + { + "bbox": [ + 313, + 657, + 465, + 670 + ], + "type": "text", + "content": "3. FineVidBench Benchmark" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 677, + 553, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 677, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 677, + 553, + 713 + ], + "type": "text", + "content": "It is broadly recognized that Video-LLMs struggle with fine-grained video understanding tasks, yet no comprehensive benchmarks exist to thoroughly investigate this issue." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 120 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 120 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 120 + ], + "type": "text", + "content": "To address this gap, we introduce FineVidBench, a multidimensional, fine-grained evaluation framework specifically designed to assess and improve the overall capabilities of Video-LLMs." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 129, + 141, + 141 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 129, + 141, + 141 + ], + "spans": [ + { + "bbox": [ + 55, + 129, + 141, + 141 + ], + "type": "text", + "content": "3.1. Construction" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 147, + 295, + 206 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 147, + 295, + 206 + ], + "spans": [ + { + "bbox": [ + 55, + 147, + 295, + 206 + ], + "type": "text", + "content": "Data collection We selected videos from various public datasets, including SSv2 [6], MiT [21], and Ego4D [7], with a particular emphasis on temporally sensitive content, to focus the model on the entire video sequence rather than individual frames." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 207, + 295, + 351 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 207, + 295, + 351 + ], + "spans": [ + { + "bbox": [ + 55, + 207, + 295, + 351 + ], + "type": "text", + "content": "Action categorization As shown in Figure 2, we compiled 52 actions, categorizing them into 3 types based on intraclass variance. The distribution varies significantly: \"Distinctive Actions\" " + }, + { + "bbox": [ + 55, + 207, + 295, + 351 + ], + "type": "inline_equation", + "content": "(39\\%)" + }, + { + "bbox": [ + 55, + 207, + 295, + 351 + ], + "type": "text", + "content": " are easily recognizable, encompassing a total of 36 actions. \"Non-typical Actions\" " + }, + { + "bbox": [ + 55, + 207, + 295, + 351 + ], + "type": "inline_equation", + "content": "(57\\%)" + }, + { + "bbox": [ + 55, + 207, + 295, + 351 + ], + "type": "text", + "content": " refer to flexible actions with no clear defining characteristics, spanning 14 types. The broad diversity and complexity in this category require more extensive video coverage to adequately capture the range of expressions and variations. 
\"Slight Movements\" " + }, + { + "bbox": [ + 55, + 207, + 295, + 351 + ], + "type": "inline_equation", + "content": "(4\\%)" + }, + { + "bbox": [ + 55, + 207, + 295, + 351 + ], + "type": "text", + "content": " represent subtle actions, such as \"hold\" and \"show\", which are difficult to detect with the naked eye and constitute a small proportion." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 351, + 295, + 422 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 351, + 295, + 422 + ], + "spans": [ + { + "bbox": [ + 55, + 351, + 295, + 422 + ], + "type": "text", + "content": "Data augmentation The original videos were augmented using frame interpolation and skipping techniques for speed transformation, along with a motion-salient area sampling algorithm to capture dynamic motion. This process generated speed-varied versions and multiple sets of keyframes for each video." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 423, + 295, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 423, + 295, + 533 + ], + "spans": [ + { + "bbox": [ + 55, + 423, + 295, + 533 + ], + "type": "text", + "content": "Statistics With our augmentation strategy, FineVidBench includes 910 videos, 1,820 speed-variant videos, and 2,670 sets of keyframes enriched with dynamic visual information. Building on this, we generated 22,718 QA pairs from the video content through a combination of automated processes and manual review. The quality assurance process involved rigorous cross-verification, where reviewers checked each QA pair for accuracy and contextual relevance, making corrections to ensure high quality." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 540, + 205, + 553 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 540, + 205, + 553 + ], + "spans": [ + { + "bbox": [ + 55, + 540, + 205, + 553 + ], + "type": "text", + "content": "3.2. 
Benchmarking Dimensions" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 558, + 295, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 558, + 295, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 558, + 295, + 715 + ], + "type": "text", + "content": "As shown in Figure 3, FineVidBench encompasses both scene-level and fragment-level evaluations. The scene-level evaluation assesses both original and speed-adjusted videos across three dimensions: (1) Action, which evaluates the model's holistic understanding of video content. To increase difficulty, \"Visual Synonyms\" are added as distractors, requiring Video-LLMs to distinguish visually similar actions with subtle differences, a challenge common in real-world scenarios. (2) Effect, which focuses on the model's comprehension of the visual changes resulting from actions. This understanding is essential for revealing object properties and interpreting complex dynamic scenes, and could significantly enhance the reasoning capabilities of Video-" + } + ] + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 335, + 76, + 536, + 281 + ], + "blocks": [ + { + "bbox": [ + 335, + 76, + 536, + 281 + ], + "lines": [ + { + "bbox": [ + 335, + 76, + 536, + 281 + ], + "spans": [ + { + "bbox": [ + 335, + 76, + 536, + 281 + ], + "type": "image", + "image_path": "93dc7847bb0b3fc5fb2f4ee5b4155ff76f57da3f8eb0b56e88bc6dc5ef2f1340.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 291, + 555, + 348 + ], + "lines": [ + { + "bbox": [ + 313, + 291, + 555, + 348 + ], + "spans": [ + { + "bbox": [ + 313, + 291, + 555, + 348 + ], + "type": "text", + "content": "Figure 2. We show the action semantics and their respective proportions in FineVidBench. Distinctive Action: easily recognizable actions. 
Non-typical Action: flexible actions with no clear characteristics, like \"put\" and \"move.\" Slight Movement: subtle actions, such as \"hold\" and \"show,\" difficult to detect with the naked eye." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 366, + 555, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 366, + 555, + 437 + ], + "spans": [ + { + "bbox": [ + 313, + 366, + 555, + 437 + ], + "type": "text", + "content": "LLMs and LLM-aided agents. (3) Speed, which tests the model's sensitivity to changes in video speed and its capability to maintain consistent understanding across varying speeds, with slow motion revealing hidden details and fast motion obscuring them. This capability is crucial for optimizing the model's performance across diverse scenarios." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 438, + 556, + 654 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 438, + 556, + 654 + ], + "spans": [ + { + "bbox": [ + 313, + 438, + 556, + 654 + ], + "type": "text", + "content": "For fragment-level evaluation, we designed a structured evaluation format for video dynamic keyframes, employing a step-by-step inquiry framework: (1) Frame Count: Models are queried on the number of frames in sequences using dynamically refined keyframes to assess counting accuracy. (2) Meaning of Order: Understanding of sequence order is tested by asking about the first or last frames in which the targets appear, or the frames in which they are present, e.g., \"At which frame does the target object first appear?\". (3) Frame Comparison: Two frames are randomly selected from the sequence for visual comparison, with differences varying in size but generally staying within human visual comfort limits. 
(4) Adjust-or-Not and Rearrangement: These two tasks involve a shuffled sequence of keyframes, and the model is asked to determine whether the order needs adjustment and, if so, how to correct it. They evaluate the model's ability to understand and restore the video's temporal sequence." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 660, + 430, + 672 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 660, + 430, + 672 + ], + "spans": [ + { + "bbox": [ + 313, + 660, + 430, + 672 + ], + "type": "text", + "content": "3.3. Benchmark Results" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 677, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 677, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 677, + 555, + 715 + ], + "type": "text", + "content": "We evaluated six of the most advanced open-source models: LLaVA-NeXT-Video [9], MiniCPM-V 2.6 [34], VideoLLaMA 2.1 [4], Qwen2-VL [28], ShareGPT4Video [2], and" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 70, + 70, + 261, + 240 + ], + "blocks": [ + { + "bbox": [ + 70, + 70, + 261, + 240 + ], + "lines": [ + { + "bbox": [ + 70, + 70, + 261, + 240 + ], + "spans": [ + { + "bbox": [ + 70, + 70, + 261, + 240 + ], + "type": "image", + "image_path": "b3fdb1142169dfd3731bb1039d8390b91cbe26fcacef82e674e2d0655fa3f0b9.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 267, + 70, + 398, + 83 + ], + "lines": [ + { + "bbox": [ + 267, + 70, + 398, + 83 + ], + "spans": [ + { + "bbox": [ + 267, + 70, + 398, + 83 + ], + "type": "text", + "content": "※ Fragment-Level Tests ※" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 267, + 89, + 402, + 233 + ], + "type": "list", + "angle": 0, + "index": 14, + 
"blocks": [ + { + "bbox": [ + 267, + 89, + 356, + 99 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 267, + 89, + 356, + 99 + ], + "spans": [ + { + "bbox": [ + 267, + 89, + 356, + 99 + ], + "type": "text", + "content": "① How many frames?" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 267, + 102, + 345, + 110 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 267, + 102, + 345, + 110 + ], + "spans": [ + { + "bbox": [ + 267, + 102, + 345, + 110 + ], + "type": "text", + "content": "A. 2 B. 3 C. 4 D. 5" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 267, + 115, + 391, + 125 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 267, + 115, + 391, + 125 + ], + "spans": [ + { + "bbox": [ + 267, + 115, + 391, + 125 + ], + "type": "text", + "content": "(2) Which frames show the cup?" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 267, + 129, + 380, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 267, + 129, + 380, + 138 + ], + "spans": [ + { + "bbox": [ + 267, + 129, + 380, + 138 + ], + "type": "text", + "content": "A. 3,4 B. 2,3,4 C. 2,3 D. 1,2,3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 267, + 142, + 394, + 152 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 267, + 142, + 394, + 152 + ], + "spans": [ + { + "bbox": [ + 267, + 142, + 394, + 152 + ], + "type": "text", + "content": "(3) Are the two frames the same?" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 267, + 154, + 375, + 163 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 267, + 154, + 375, + 163 + ], + "spans": [ + { + "bbox": [ + 267, + 154, + 375, + 163 + ], + "type": "text", + "content": "A. 
Yes, they are exactly the same" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 267, + 165, + 350, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 267, + 165, + 350, + 175 + ], + "spans": [ + { + "bbox": [ + 267, + 165, + 350, + 175 + ], + "type": "text", + "content": "B. No, they are different" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 267, + 178, + 364, + 189 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 267, + 178, + 364, + 189 + ], + "spans": [ + { + "bbox": [ + 267, + 178, + 364, + 189 + ], + "type": "text", + "content": "④ Should I adjust them?" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 267, + 190, + 368, + 199 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 267, + 190, + 368, + 199 + ], + "spans": [ + { + "bbox": [ + 267, + 190, + 368, + 199 + ], + "type": "text", + "content": "A. Yes, they need adjustment" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 267, + 201, + 378, + 209 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 267, + 201, + 378, + 209 + ], + "spans": [ + { + "bbox": [ + 267, + 201, + 378, + 209 + ], + "type": "text", + "content": "B. No, they are in the correct order" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 267, + 213, + 402, + 223 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 267, + 213, + 402, + 223 + ], + "spans": [ + { + "bbox": [ + 267, + 213, + 402, + 223 + ], + "type": "text", + "content": "⑤ Which shows the correct order?" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 267, + 225, + 383, + 233 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 267, + 225, + 383, + 233 + ], + "spans": [ + { + "bbox": [ + 267, + 225, + 383, + 233 + ], + "type": "text", + "content": "A. 1234 B. 2314 C. 3142 D. 
4321" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "text" + }, + { + "type": "image", + "bbox": [ + 408, + 91, + 471, + 234 + ], + "blocks": [ + { + "bbox": [ + 408, + 91, + 471, + 234 + ], + "lines": [ + { + "bbox": [ + 408, + 91, + 471, + 234 + ], + "spans": [ + { + "bbox": [ + 408, + 91, + 471, + 234 + ], + "type": "image", + "image_path": "e3c884b85391f03768d80cd1d13ec65d55a292c4da3b34fb5cfd15b2051d709f.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 248, + 555, + 293 + ], + "lines": [ + { + "bbox": [ + 55, + 248, + 555, + 293 + ], + "spans": [ + { + "bbox": [ + 55, + 248, + 555, + 293 + ], + "type": "text", + "content": "Figure 3. FineVidBench evaluates videos augmented with speed variations and fragments. Scene-level tests include the following: Action: Tests recognition accuracy amidst distractors like \"Visual Synonyms\". Effect: Assesses the model's ability to identify pre- and post-action changes. Speed: Measures the model's sensitivity to changes in video speed. Fragment-level tests, employing a step-by-step inquiry framework, focus on challenges such as Frame Count, Meaning of Order, Frame Comparison, Adjust-or-Not and Rearrangement." 
+ } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 480, + 93, + 541, + 233 + ], + "blocks": [ + { + "bbox": [ + 480, + 93, + 541, + 233 + ], + "lines": [ + { + "bbox": [ + 480, + 93, + 541, + 233 + ], + "spans": [ + { + "bbox": [ + 480, + 93, + 541, + 233 + ], + "type": "image", + "image_path": "ad9426eb0e0740c744fe63a8d4e4c7810ffbfbeb6acfc863b32655e01fed85c8.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + } + ], + "index": 16 + }, + { + "bbox": [ + 55, + 314, + 296, + 362 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 314, + 296, + 362 + ], + "spans": [ + { + "bbox": [ + 55, + 314, + 296, + 362 + ], + "type": "text", + "content": "Video-CCAM [27], each employing different architectures and training strategies. Table 3 summarizes the results across the eight tasks. We discuss the results from the scene-level and fragment-level perspectives." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 55, + 372, + 204, + 384 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 372, + 204, + 384 + ], + "spans": [ + { + "bbox": [ + 55, + 372, + 204, + 384 + ], + "type": "text", + "content": "- Scene-level Results and Analysis" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 55, + 388, + 296, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 388, + 296, + 483 + ], + "spans": [ + { + "bbox": [ + 55, + 388, + 296, + 483 + ], + "type": "text", + "content": "Action The scores for this task varied significantly, with models trained on relevant video data—such as Video-CCAM, Qwen2-VL, and VideoLLaMA 2.1—achieving notably higher performance. However, as shown on the left side of Table 2, interference from \"Visual Synonyms\" prevented these models from achieving their full potential, resulting in declines of varying degrees and indicating difficulties in distinguishing visually similar actions." 
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 55, + 485, + 296, + 605 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 485, + 296, + 605 + ], + "spans": [ + { + "bbox": [ + 55, + 485, + 296, + 605 + ], + "type": "text", + "content": "Effect All models exhibited average performance on this task, indicating a superficial understanding of aspects such as object attributes, object relationships, and action properties. This task tests the model's ability to grasp how actions affect objects, focusing on causal relationships and temporal reasoning—particularly for actions like \"push\" and \"pull\", which share similar execution flows. The model must distinguish them based on dynamic effects, such as changes in direction and speed, but most models perform moderately in this regard." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "type": "text", + "content": "Speed The results show that all models are insensitive to speed variations, likely because they were not adequately exposed to speed changes during training. Figure 4 shows that models are more sensitive to slow motion than fast playback, and struggle with identifying \"normal speed\" and \"no speed\", except for VideoLLaMA 2.1. This may be due to the loss of coherence in fast-moving video content, while slow-motion videos highlight more distinct details, aiding the model in making accurate judgments." 
+ } + ] + } + ], + "index": 22 + }, + { + "type": "image", + "bbox": [ + 334, + 314, + 536, + 462 + ], + "blocks": [ + { + "bbox": [ + 334, + 314, + 536, + 462 + ], + "lines": [ + { + "bbox": [ + 334, + 314, + 536, + 462 + ], + "spans": [ + { + "bbox": [ + 334, + 314, + 536, + 462 + ], + "type": "image", + "image_path": "64fc26c33c1d9ab6da9e7af66481e5d25957d2ee4812fc419ceac98a3dd71b5c.jpg" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 470, + 555, + 504 + ], + "lines": [ + { + "bbox": [ + 313, + 470, + 555, + 504 + ], + "spans": [ + { + "bbox": [ + 313, + 470, + 555, + 504 + ], + "type": "text", + "content": "Figure 4. Accuracy across different video speeds. All models are more sensitive to slow-speed videos and struggle to understand \"normal speed\" and \"no speed\", except for VideoLLaMA 2.1." + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_caption" + } + ], + "index": 23 + }, + { + "type": "table", + "bbox": [ + 315, + 512, + 561, + 635 + ], + "blocks": [ + { + "bbox": [ + 315, + 512, + 561, + 635 + ], + "lines": [ + { + "bbox": [ + 315, + 512, + 561, + 635 + ], + "spans": [ + { + "bbox": [ + 315, + 512, + 561, + 635 + ], + "type": "table", + "html": "
Video-LLMsActionFrame Number
w/o VSw/ VSAvg.345
LLaVA-NeXT-Video37.3135.0419.3720.3319.7717.98
MiniCPM-V 2.643.3740.1590.3293.8290.6686.44
VideoLLaMA 2.163.2653.9830.1742.8639.897.45
Qwen2-VL68.1856.6296.6597.2596.6396.05
ShareGPT4Video46.9030.8426.3360.9916.780.00
Video-CCAM73.1060.2323.4514.188.9647.61
", + "image_path": "d1cdef868477757ac87ddd2dcf9068ab8d5ac5713f613471b1f47720544113eb.jpg" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "table_body" + } + ], + "index": 25 + }, + { + "bbox": [ + 313, + 643, + 555, + 708 + ], + "lines": [ + { + "bbox": [ + 313, + 643, + 555, + 708 + ], + "spans": [ + { + "bbox": [ + 313, + 643, + 555, + 708 + ], + "type": "text", + "content": "Table 2. Left: Accuracy of the Action task with or without \"Visual Synonyms\". It is evident that the \"Visual Synonyms\" significantly impact the models' judgment. Right: Accuracy of the counting task across different frame counts. Except for Video-CCAM, all other models exhibited a decline in performance as the number of frames increased." + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "text" + } + ], + "discarded_blocks": [], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 65, + 70, + 545, + 240 + ], + "blocks": [ + { + "bbox": [ + 65, + 70, + 545, + 240 + ], + "lines": [ + { + "bbox": [ + 65, + 70, + 545, + 240 + ], + "spans": [ + { + "bbox": [ + 65, + 70, + 545, + 240 + ], + "type": "table", + "html": "
Video-LLMsParams.Scene-LevelFragment-LevelS-Avg.FG-Avg.A-Avg.
ActionEffectSpeedFCntMoOFCmpAoNRearr
(Random)-25.0025.0025.0025.0025.0033.3333.3325.0025.0028.3327.08
LLaVA-NeXT-Video7B37.3142.6722.3519.3724.0253.7575.4520.6734.1138.6536.95
MiniCPM-V 2.68B43.3752.5619.1390.3256.4275.6676.4918.0938.3563.4054.01
VideoLLaMA 2.17B63.2650.9219.8930.1742.2776.0189.9226.8744.6953.0549.91
Qwen2-VL7B68.1857.1424.6296.6533.3374.5390.7022.4849.9863.5458.45
ShareGPT4Video8B46.9043.8831.7626.3361.0588.4484.8023.3640.8557.1150.82
Video-CCAM9B73.1055.9031.6523.4545.6664.9590.2722.7253.5548.4750.96
", + "image_path": "73ded6200c277082dc2f10323cd0e1a1f5fb0713d2236a09f24da8bb6447951b.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 248, + 555, + 283 + ], + "lines": [ + { + "bbox": [ + 55, + 248, + 555, + 283 + ], + "spans": [ + { + "bbox": [ + 55, + 248, + 555, + 283 + ], + "type": "text", + "content": "Table 3. The overall performance of notable Video-LLMs on FineVidBench. FCnt: Frame Count. MoO: Meaning of Order. FCmp: Frame Comparison. AoN: Adjust or Not. Rearr: Rearrangement. S-Avg.: the average performance of scene-level tasks; FG-Avg.: the average performance of fragment-level tasks; A-Avg.: the average performance of all tasks." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 55, + 303, + 223, + 316 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 303, + 223, + 316 + ], + "spans": [ + { + "bbox": [ + 55, + 303, + 223, + 316 + ], + "type": "text", + "content": "- Fragment-level Results and Analysis" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 54, + 319, + 297, + 548 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 319, + 297, + 548 + ], + "spans": [ + { + "bbox": [ + 54, + 319, + 297, + 548 + ], + "type": "text", + "content": "(1) Frame-count accuracy varied significantly across models, with the lower-performing models likely lacking targeted training. The trend shown on the right side of Table 2, where accuracy decreases as frame count increases, highlights the models' insufficient temporal reasoning on longer sequences. (2) ShareGPT4Video and MiniCPM-V 2.6 showed better comprehension in the Meaning-of-Order task, while other models lagged, suggesting a lack of explicit focus on \"order\". (3) Most models excelled in frame comparison due to image-text alignment training. 
ShareGPT4Video achieved the best performance, owing to its Differential Sliding-Window Captioning (DiffSW) strategy, which emphasizes capturing the changes between frames when generating video descriptions. This also improved its Meaning-of-Order performance. (4) In the sorting task, models generally succeeded in the \"Adjust or Not\" response but performed poorly in the more complex \"Rearrangement\" task, indicating they can detect, but not correct, sequence errors." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 560, + 262, + 575 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 560, + 262, + 575 + ], + "spans": [ + { + "bbox": [ + 55, + 560, + 262, + 575 + ], + "type": "text", + "content": "4. Self-supervised Fragment Fine-Tuning" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 582, + 297, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 582, + 297, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 582, + 297, + 715 + ], + "type": "text", + "content": "The above benchmark results show that existing Video-LLMs generally fail to tackle fine-grained video understanding tasks. Videos often contain subtle, complex changes that natural language alone fails to fully capture. The LLM at the core of Video-LLMs, acting as a generalized pattern recognizer, offers a promising solution. LLMs have the potential to detect and interpret intricate spatiotemporal dynamics that were previously difficult to represent. Given that these changes cannot be directly annotated, self-supervised learning naturally becomes the solution, bypassing the bottleneck of manual annotation and significantly re" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 303, + 556, + 389 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 303, + 556, + 389 + ], + "spans": [ + { + "bbox": [ + 313, + 303, + 556, + 389 + ], + "type": "text", + "content": "ducing labeling costs. 
Given these factors, we propose the " + }, + { + "bbox": [ + 313, + 303, + 556, + 389 + ], + "type": "inline_equation", + "content": "\\mathrm{SF^2T}" + }, + { + "bbox": [ + 313, + 303, + 556, + 389 + ], + "type": "text", + "content": " to fine-tune Video-LLMs. While we do not expect " + }, + { + "bbox": [ + 313, + 303, + 556, + 389 + ], + "type": "inline_equation", + "content": "\\mathrm{SF^2T}" + }, + { + "bbox": [ + 313, + 303, + 556, + 389 + ], + "type": "text", + "content": " to replace supervised fine-tuning; rather, it is an effortless complement to SFT. Comparing " + }, + { + "bbox": [ + 313, + 303, + 556, + 389 + ], + "type": "inline_equation", + "content": "\\mathrm{SF^2T}" + }, + { + "bbox": [ + 313, + 303, + 556, + 389 + ], + "type": "text", + "content": " with SFT, the two primarily differ in data construction and content focus level, with each method aligned with distinct training objectives, as shown in Figure 5." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 395, + 387, + 407 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 395, + 387, + 407 + ], + "spans": [ + { + "bbox": [ + 313, + 395, + 387, + 407 + ], + "type": "text", + "content": "4.1. SFT Tasks" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 412, + 555, + 436 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 412, + 555, + 436 + ], + "spans": [ + { + "bbox": [ + 313, + 412, + 555, + 436 + ], + "type": "text", + "content": "We first review the common SFT tasks to set a baseline for comparing our " + }, + { + "bbox": [ + 313, + 412, + 555, + 436 + ], + "type": "inline_equation", + "content": "\\mathrm{SF^2T}" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 437, + 555, + 508 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 437, + 555, + 508 + ], + "spans": [ + { + "bbox": [ + 313, + 437, + 555, + 508 + ], + "type": "text", + "content": "General QA on Video Content This method focuses on understanding the main events and context of a video by directly asking questions about its content. While effective for grasping the video's key moments, it lacks finer spatiotemporal details and requires significant human effort to create standardized but constrained answers." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 509, + 556, + 629 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 509, + 556, + 629 + ], + "spans": [ + { + "bbox": [ + 313, + 509, + 556, + 629 + ], + "type": "text", + "content": "Frame Description Integration This method typically samples video frames evenly, generates detailed descriptions for each, and integrates them into a cohesive but lengthy summary. While it enhances the model's understanding of continuity and micro-dynamics, it often proves incapable of capturing complex or subtle details that are beyond natural language's scope. Moreover, although frame descriptions can be generated using powerful multi-modal LLMs like GPT-4o, significant human effort is still required to review the quality of the generated responses."
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 635, + 477, + 649 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 635, + 477, + 649 + ], + "spans": [ + { + "bbox": [ + 313, + 635, + 477, + 649 + ], + "type": "text", + "content": "4.2. Fragment-level Tasks of " + }, + { + "bbox": [ + 313, + 635, + 477, + 649 + ], + "type": "inline_equation", + "content": "\\mathbf{SF}^2\\mathbf{T}" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 654, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 654, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 654, + 556, + 715 + ], + "type": "text", + "content": "SFT tasks require manual annotations, and even automated annotation is labor-intensive and error-prone. To address this, we introduce " + }, + { + "bbox": [ + 313, + 654, + 556, + 715 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 313, + 654, + 556, + 715 + ], + "type": "text", + "content": ", which generates accurate fragment-level labels.
" + }, + { + "bbox": [ + 313, + 654, + 556, + 715 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 313, + 654, + 556, + 715 + ], + "type": "text", + "content": " comprises five tasks—Counting, Consistency Verification, Localization, Disorder Detection" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 61, + 75, + 292, + 118 + ], + "blocks": [ + { + "bbox": [ + 61, + 75, + 292, + 118 + ], + "lines": [ + { + "bbox": [ + 61, + 75, + 292, + 118 + ], + "spans": [ + { + "bbox": [ + 61, + 75, + 292, + 118 + ], + "type": "image", + "image_path": "19248d16a197d1b96fda68b99bbd8e7350e03c4bad689ecb47f3df3e3a40504b.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 75, + 128, + 235, + 139 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 75, + 128, + 235, + 139 + ], + "spans": [ + { + "bbox": [ + 75, + 128, + 235, + 139 + ], + "type": "text", + "content": "What is the main content of the video?" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 90, + 143, + 259, + 168 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 90, + 143, + 259, + 168 + ], + "spans": [ + { + "bbox": [ + 90, + 143, + 259, + 168 + ], + "type": "text", + "content": "The video shows a person bowling, including their four-step approach, the smooth release of the ball down the lane, its path toward the pins, and..." 
+ } + ] + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 61, + 183, + 291, + 234 + ], + "blocks": [ + { + "bbox": [ + 61, + 183, + 291, + 234 + ], + "lines": [ + { + "bbox": [ + 61, + 183, + 291, + 234 + ], + "spans": [ + { + "bbox": [ + 61, + 183, + 291, + 234 + ], + "type": "image", + "image_path": "fac7fe9dc31112f7b5a655a9f855c7785110c3e012266174e150ffa294dd3dfb.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 75, + 243, + 236, + 253 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 75, + 243, + 236, + 253 + ], + "spans": [ + { + "bbox": [ + 75, + 243, + 236, + 253 + ], + "type": "text", + "content": "What is the main content of the video?" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 92, + 257, + 260, + 299 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 92, + 257, + 260, + 299 + ], + "spans": [ + { + "bbox": [ + 92, + 257, + 260, + 299 + ], + "type": "text", + "content": "The video shows a person bowling: (Frame 1) The scene shows a bowling alley... (Frame 2) The player swings the bowling ball... (Frame 4) The bowling ball approaches the pins... (Frame 6) The bowling ball strikes the pins... (Frame 8) All the pins are down."
+ } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 263, + 253, + 274, + 266 + ], + "blocks": [ + { + "bbox": [ + 263, + 253, + 274, + 266 + ], + "lines": [ + { + "bbox": [ + 263, + 253, + 274, + 266 + ], + "spans": [ + { + "bbox": [ + 263, + 253, + 274, + 266 + ], + "type": "image", + "image_path": "e51df2bd69d0ad08c8ab3c601cc8ecf9fc84a3cdd0f5275bcb937207f223a3b2.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 78, + 332, + 272, + 390 + ], + "blocks": [ + { + "bbox": [ + 131, + 312, + 220, + 323 + ], + "lines": [ + { + "bbox": [ + 131, + 312, + 220, + 323 + ], + "spans": [ + { + "bbox": [ + 131, + 312, + 220, + 323 + ], + "type": "text", + "content": "Scene-Level Tasks" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 78, + 332, + 272, + 390 + ], + "lines": [ + { + "bbox": [ + 78, + 332, + 272, + 390 + ], + "spans": [ + { + "bbox": [ + 78, + 332, + 272, + 390 + ], + "type": "image", + "image_path": "b06227fb2753f649b91d143258466e2abef9e9f856bc50b35884caefb010def2.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 75, + 401, + 88, + 414 + ], + "blocks": [ + { + "bbox": [ + 75, + 401, + 88, + 414 + ], + "lines": [ + { + "bbox": [ + 75, + 401, + 88, + 414 + ], + "spans": [ + { + "bbox": [ + 75, + 401, + 88, + 414 + ], + "type": "image", + "image_path": "467e71fa5605af2d18a37b665be66f536c52432df6cb57a8237fb077a9b6d1d8.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 93, + 404, + 162, + 415 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 404, + 162, + 415 + ], + "spans": [ + { + "bbox": [ + 93, + 404, + 162, + 415 + ], + "type": "text", + "content": "How many frames?" 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 93, + 420, + 161, + 429 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 420, + 161, + 429 + ], + "spans": [ + { + "bbox": [ + 93, + 420, + 161, + 429 + ], + "type": "text", + "content": "On which frames?" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 93, + 435, + 148, + 444 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 435, + 148, + 444 + ], + "spans": [ + { + "bbox": [ + 93, + 435, + 148, + 444 + ], + "type": "text", + "content": "Same frames?" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 93, + 460, + 145, + 469 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 460, + 145, + 469 + ], + "spans": [ + { + "bbox": [ + 93, + 460, + 145, + 469 + ], + "type": "text", + "content": "Adjust or not?" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 93, + 475, + 141, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 475, + 141, + 483 + ], + "spans": [ + { + "bbox": [ + 93, + 475, + 141, + 483 + ], + "type": "text", + "content": "Rearrange it." 
+ } + ] + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 175, + 402, + 227, + 415 + ], + "blocks": [ + { + "bbox": [ + 175, + 402, + 227, + 415 + ], + "lines": [ + { + "bbox": [ + 175, + 402, + 227, + 415 + ], + "spans": [ + { + "bbox": [ + 175, + 402, + 227, + 415 + ], + "type": "image", + "image_path": "28b766dc0d29f6273e26a9aed75c3de5a772a124ae3c9ab068cf1b8c96d348cf.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 176, + 419, + 224, + 429 + ], + "blocks": [ + { + "bbox": [ + 176, + 419, + 224, + 429 + ], + "lines": [ + { + "bbox": [ + 176, + 419, + 224, + 429 + ], + "spans": [ + { + "bbox": [ + 176, + 419, + 224, + 429 + ], + "type": "image", + "image_path": "b3a2c4ac17a94b2b36d9372be305b746a5effe392fb7d2a064b2cde37a70c8cf.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 525, + 295, + 602 + ], + "lines": [ + { + "bbox": [ + 55, + 525, + 295, + 602 + ], + "spans": [ + { + "bbox": [ + 55, + 525, + 295, + 602 + ], + "type": "text", + "content": "Figure 5. Comparison between " + }, + { + "bbox": [ + 55, + 525, + 295, + 602 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 525, + 295, + 602 + ], + "type": "text", + "content": " and SFT. SFT depends on manual and model-driven design to generate QA pairs for scene-level video understanding, " + }, + { + "bbox": [ + 55, + 525, + 295, + 602 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 525, + 295, + 602 + ], + "type": "text", + "content": ", in contrast, automatically constructs training data based on pre-defined rules that cover various temporal and spatial aspects of the video. 
" + }, + { + "bbox": [ + 55, + 525, + 295, + 602 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 525, + 295, + 602 + ], + "type": "text", + "content": " enables the model to focus on fine-grained content analysis, offering insights that supervised labels cannot achieve." + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_caption" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 177, + 434, + 224, + 444 + ], + "blocks": [ + { + "bbox": [ + 177, + 434, + 224, + 444 + ], + "lines": [ + { + "bbox": [ + 177, + 434, + 224, + 444 + ], + "spans": [ + { + "bbox": [ + 177, + 434, + 224, + 444 + ], + "type": "image", + "image_path": "5e842eed2a0e603ad9c20ae9db677b6cf056a6564fb67f982bd3e1a5900ebe1c.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 176, + 450, + 224, + 472 + ], + "blocks": [ + { + "bbox": [ + 176, + 450, + 224, + 472 + ], + "lines": [ + { + "bbox": [ + 176, + 450, + 224, + 472 + ], + "spans": [ + { + "bbox": [ + 176, + 450, + 224, + 472 + ], + "type": "image", + "image_path": "0a58625ba85cf5cfb4fb31336be2120b7989cbc33f398f387001052658cf027f.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 177, + 476, + 224, + 490 + ], + "blocks": [ + { + "bbox": [ + 177, + 476, + 224, + 490 + ], + "lines": [ + { + "bbox": [ + 177, + 476, + 224, + 490 + ], + "spans": [ + { + "bbox": [ + 177, + 476, + 224, + 490 + ], + "type": "image", + "image_path": "b27d6f514bd85f5074b842d69558d79d7428a8753afed274de875ac74caa6f02.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 251, + 403, + 274, + 415 + ], + "blocks": [ + { + "bbox": [ + 251, + 403, + 274, + 415 + ], + "lines": [ + { + "bbox": [ + 251, + 403, + 274, + 415 + ], + "spans": [
+ { + "bbox": [ + 251, + 403, + 274, + 415 + ], + "type": "image", + "image_path": "429857c0df66bdaab0c9ae373e86e7fa148de34892ef91f2fc5df09ad7c95d16.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + } + ], + "index": 20 + }, + { + "bbox": [ + 242, + 420, + 257, + 429 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 242, + 420, + 257, + 429 + ], + "spans": [ + { + "bbox": [ + 242, + 420, + 257, + 429 + ], + "type": "text", + "content": "2nd" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 245, + 436, + 257, + 444 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 245, + 436, + 257, + 444 + ], + "spans": [ + { + "bbox": [ + 245, + 436, + 257, + 444 + ], + "type": "text", + "content": "No" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 238, + 460, + 257, + 468 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 460, + 257, + 468 + ], + "spans": [ + { + "bbox": [ + 238, + 460, + 257, + 468 + ], + "type": "text", + "content": "Yes" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 237, + 475, + 257, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 475, + 257, + 483 + ], + "spans": [ + { + "bbox": [ + 237, + 475, + 257, + 483 + ], + "type": "text", + "content": "3412" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 124, + 506, + 228, + 517 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 124, + 506, + 228, + 517 + ], + "spans": [ + { + "bbox": [ + 124, + 506, + 228, + 517 + ], + "type": "text", + "content": "Fragment-Level Tasks" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 55, + 629, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 629, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 629, + 295, + 713 + ], + "type": "text", + "content": "and Rearrangement—designed to train the model to rearrange a set of out-of-order frames into their original sequence. 
This is a robust indicator of a model's mastery over the visual dynamics of an action, requiring the model to detect subtle frame changes and understand the overall coherence and temporal trends. Mastery of these tasks enables the model to recognize frames and their temporal" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 313, + 72, + 553, + 227 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 227 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 227 + ], + "type": "text", + "content": "relationships, enhancing its ability to predict and reconstruct action sequences and improving performance on more complex video tasks. Our method first extracts multiple sets of dynamic keyframes from each video. These fragments capture the key dynamic information from multiple temporal perspectives, offering a more efficient representation of redundant video data. It then applies pseudo-labeling, distinguishing it from traditional video-level labeling. By designing proxy tasks that leverage intrinsic information rather than predefined prior knowledge, it smartly circumvents the annotation bottleneck, enabling a deeper temporal understanding and offering insights that traditional video-level labeling cannot achieve." + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 313, + 229, + 553, + 312 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 229, + 553, + 312 + ], + "spans": [ + { + "bbox": [ + 313, + 229, + 553, + 312 + ], + "type": "text", + "content": "Counting We input N frames into the Video-LLM and ask it to count them. Although this task seems straightforward, it proves challenging for current Video-LLMs, particularly as the number of frames increases, revealing a decline in accuracy. The model's inability to perform basic quantitative tasks points to a broader limitation in understanding the overall sequence integrity."
+ } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 313, + 314, + 553, + 397 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 314, + 553, + 397 + ], + "spans": [ + { + "bbox": [ + 313, + 314, + 553, + 397 + ], + "type": "text", + "content": "Consistency Verification Video-LLMs are tasked with identifying two frames sampled from the same video, which may show subtle differences. This task sharpens the model's sensitivity to visual details by encouraging a thorough analysis and comparison of the images, countering its tendency to focus on primary subjects while neglecting the background and other subtle features." + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 313, + 399, + 554, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 399, + 554, + 495 + ], + "spans": [ + { + "bbox": [ + 313, + 399, + 554, + 495 + ], + "type": "text", + "content": "Localization Video-LLMs must accurately locate a specified target (from video metadata) within a sequence of frames, identifying the frames in which it appears, disappears, or persists. This naturally human ability is a significant challenge for these models, as they often struggle to perceive sequential relationships between frames and face additional obstacles, such as occlusion, interference from similar objects, lighting variations, and memory limitations." + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 313, + 497, + 554, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 497, + 554, + 628 + ], + "spans": [ + { + "bbox": [ + 313, + 497, + 554, + 628 + ], + "type": "text", + "content": "Disorder Detection and Rearrangement Video-LLMs must determine whether and how to adjust the order of a given frame sequence. 
When frames are randomized, the loss of spatiotemporal coherence and logical continuity makes it exceptionally challenging to reconstruct their original sequence, especially as interactions within frames become more complex [20]. This task is evaluated in two ways: the yes/no task tests the model's sensitivity to temporal consistency, while the sorting task, which leverages capabilities from the other four tasks, requires advanced reasoning and adjustments." + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 313, + 644, + 394, + 657 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 644, + 394, + 657 + ], + "spans": [ + { + "bbox": [ + 313, + 644, + 394, + 657 + ], + "type": "text", + "content": "5. Experiments" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 313, + 665, + 553, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 665, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 665, + 553, + 713 + ], + "type": "text", + "content": "In this section, we fine-tuned four of the most advanced open-source Video-LLMs using the " + }, + { + "bbox": [ + 313, + 665, + 553, + 713 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 313, + 665, + 553, + 713 + ], + "type": "text", + "content": " method to evaluate its effectiveness, alongside ablation studies and interpretability analyses to explore the underlying mechanisms." + } + ] + } + ], + "index": 34 + } + ], + "discarded_blocks": [], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 82, + 70, + 531, + 190 + ], + "blocks": [ + { + "bbox": [ + 82, + 70, + 531, + 190 + ], + "lines": [ + { + "bbox": [ + 82, + 70, + 531, + 190 + ], + "spans": [ + { + "bbox": [ + 82, + 70, + 531, + 190 + ], + "type": "table", + "html": "
<table><tr><td>Methods</td><td colspan="3">LLaVA-NeXT-Video</td><td colspan="3">MiniCPM-V 2.6</td><td colspan="3">VideoLLaMA 2.1</td><td colspan="3">Qwen2-VL</td></tr>
<tr><td></td><td>Action</td><td>Effect</td><td>Speed</td><td>Action</td><td>Effect</td><td>Speed</td><td>Action</td><td>Effect</td><td>Speed</td><td>Action</td><td>Effect</td><td>Speed</td></tr>
<tr><td>Base</td><td>37.31</td><td>42.67</td><td>22.35</td><td>43.37</td><td>52.56</td><td>19.13</td><td>63.26</td><td>50.92</td><td>19.89</td><td>68.18</td><td>57.14</td><td>24.62</td></tr>
<tr><td>Base+SF2T</td><td>48.67</td><td>43.77</td><td>24.83</td><td>65.91</td><td>60.62</td><td>28.60</td><td>67.42</td><td>57.33</td><td>31.63</td><td>73.86</td><td>63.37</td><td>31.92</td></tr>
<tr><td>Base(SFT)</td><td>62.69</td><td>44.63</td><td>22.35</td><td>77.65</td><td>75.09</td><td>70.83</td><td>77.65</td><td>65.94</td><td>29.73</td><td>78.60</td><td>66.30</td><td>30.87</td></tr>
<tr><td>Base(SFT)+SF2T</td><td>63.07</td><td>45.24</td><td>32.01</td><td>81.63</td><td>76.92</td><td>86.74</td><td>79.73</td><td>68.68</td><td>31.82</td><td>81.25</td><td>73.26</td><td>32.38</td></tr></table>
", + "image_path": "217b75d5da1feb710205a3ea17f34a12c93a21948b855474060681bc48f62589.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 63, + 252, + 290, + 401 + ], + "blocks": [ + { + "bbox": [ + 55, + 198, + 555, + 243 + ], + "lines": [ + { + "bbox": [ + 55, + 198, + 555, + 243 + ], + "spans": [ + { + "bbox": [ + 55, + 198, + 555, + 243 + ], + "type": "text", + "content": "Table 4. Performance on FineVidBench. We tested on two baselines: (1) Base: Results without any fine-tuning. (2) Base(SFT): Results after fine-tuning in supervised way. After " + }, + { + "bbox": [ + 55, + 198, + 555, + 243 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 198, + 555, + 243 + ], + "type": "text", + "content": ", all models improved in all three tasks, highlighting its broad effectiveness and the value of fragment-level tasks in enhancing scene-level comprehension. Notably, " + }, + { + "bbox": [ + 55, + 198, + 555, + 243 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 198, + 555, + 243 + ], + "type": "text", + "content": " outperformed SFT in the Speed task (except MiniCPM-V 2.6), highlighting the key role of fine-grained temporal understanding in distinguishing video speeds." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 63, + 252, + 290, + 401 + ], + "lines": [ + { + "bbox": [ + 63, + 252, + 290, + 401 + ], + "spans": [ + { + "bbox": [ + 63, + 252, + 290, + 401 + ], + "type": "table", + "html": "
<table><tr><td>Methods</td><td>LLaVA-NeXT-Video</td><td>MiniCPM-V 2.6</td><td>VideoLLaMA 2.1</td><td>Qwen2-VL</td></tr>
<tr><td colspan="5">MVBench</td></tr>
<tr><td>Base</td><td>36.84</td><td>40.23</td><td>54.18</td><td>55.97</td></tr>
<tr><td>Base+SF2T</td><td>42.92</td><td>56.02</td><td>57.97</td><td>63.76</td></tr>
<tr><td colspan="5">Video-MME (no subtitle)</td></tr>
<tr><td>Base</td><td>29.76</td><td>43.17</td><td>49.02</td><td>43.77</td></tr>
<tr><td>Base+SF2T</td><td>34.84</td><td>53.19</td><td>51.88</td><td>53.60</td></tr>
<tr><td colspan="5">MLVU</td></tr>
<tr><td>Base</td><td>36.32</td><td>41.58</td><td>52.32</td><td>42.81</td></tr>
<tr><td>Base+SF2T</td><td>41.91</td><td>55.32</td><td>56.11</td><td>54.67</td></tr></table>
", + "image_path": "887a2fb3f507996fa3a3946a0c447d7b2e1125f8a8a144abe6869e78185ea560.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 331, + 256, + 537, + 297 + ], + "blocks": [ + { + "bbox": [ + 55, + 408, + 297, + 443 + ], + "lines": [ + { + "bbox": [ + 55, + 408, + 297, + 443 + ], + "spans": [ + { + "bbox": [ + 55, + 408, + 297, + 443 + ], + "type": "text", + "content": "Table 5. Performance on public benchmarks. " + }, + { + "bbox": [ + 55, + 408, + 297, + 443 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 408, + 297, + 443 + ], + "type": "text", + "content": " consistently enhances performance across all three benchmarks, reaffirming its effectiveness as a spatiotemporal enhancer." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 331, + 256, + 537, + 297 + ], + "lines": [ + { + "bbox": [ + 331, + 256, + 537, + 297 + ], + "spans": [ + { + "bbox": [ + 331, + 256, + 537, + 297 + ], + "type": "table", + "html": "
<table><tr><td>Methods</td><td>random</td><td>uniform</td><td>keyframe</td><td>motion-salient</td></tr>
<tr><td>SF2T</td><td>70.31</td><td>71.67</td><td>72.11</td><td>73.86</td></tr></table>
", + "image_path": "d8920c8e1b5e723a0872e8b610991792488d583beccb64d01b2c1a9bfb280fac.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 370, + 356, + 499, + 397 + ], + "blocks": [ + { + "bbox": [ + 312, + 306, + 555, + 350 + ], + "lines": [ + { + "bbox": [ + 312, + 306, + 555, + 350 + ], + "spans": [ + { + "bbox": [ + 312, + 306, + 555, + 350 + ], + "type": "text", + "content": "Table 6. Impact of sampling. As shown, motion-salient area sampling outperforms others by better capturing motion fluidity and temporal details, while the other methods fail to fully utilize their potential, leading to suboptimal performance." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 370, + 356, + 499, + 397 + ], + "lines": [ + { + "bbox": [ + 370, + 356, + 499, + 397 + ], + "spans": [ + { + "bbox": [ + 370, + 356, + 499, + 397 + ], + "type": "table", + "html": "
<table><tr><td>Methods</td><td>long</td><td>short</td><td>random</td></tr>
<tr><td>SF2T</td><td>69.38</td><td>71.40</td><td>73.86</td></tr></table>
", + "image_path": "6aaa13d9fd9f9561511b88091ada04ad2bb2f209dcbe5e7e1c29745f1c9b4178.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + }, + { + "bbox": [ + 312, + 406, + 555, + 440 + ], + "lines": [ + { + "bbox": [ + 312, + 406, + 555, + 440 + ], + "spans": [ + { + "bbox": [ + 312, + 406, + 555, + 440 + ], + "type": "text", + "content": "Table 7. Impact of temporal span. Both long- and short-range temporal modeling reduced " + }, + { + "bbox": [ + 312, + 406, + 555, + 440 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 312, + 406, + 555, + 440 + ], + "type": "text", + "content": " 's performance, emphasizing the importance of multi-scale temporal modeling." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 55, + 463, + 188, + 475 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 463, + 188, + 475 + ], + "spans": [ + { + "bbox": [ + 55, + 463, + 188, + 475 + ], + "type": "text", + "content": "5.1. Implementation Details" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 483, + 296, + 590 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 483, + 296, + 590 + ], + "spans": [ + { + "bbox": [ + 55, + 483, + 296, + 590 + ], + "type": "text", + "content": "To ensure fairness, experiments were conducted on LoRA-compatible models, including LLaVA-NeXT-Video[9], MiniCPM-V 2.6[34], VideoLLaMA 2.1[4] and Qwen2-VL[28], using their default or recommended settings, with all models trained for one epoch. All experiments were performed under identical hardware conditions, utilizing NVIDIA A100 40GB GPU for computation. 
It should be emphasized that our goal is to validate the effectiveness of " + }, + { + "bbox": [ + 55, + 483, + 296, + 590 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 483, + 296, + 590 + ], + "type": "text", + "content": ", not to optimize models for maximum performance." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "text", + "content": "We randomly sampled videos from SSv2 and MiT for training, ensuring no overlap with the FineVidBench dataset. MGSampler [37] was used to extract N sets of M-frame sequences from each video, capturing dynamic changes while preserving overall characteristics. M is chosen based on the video's characteristics to capture content flow, while N is determined by content complexity, with more complex content requiring a larger N to cover more temporal perspectives. In this study, we set " + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "inline_equation", + "content": "\\mathrm{N} = 3" + }, + { + "bbox": [ + 55, + 594, + 296, + 714 + ], + "type": "text", + "content": " and M between 3 and 5, though these values may vary for other" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 463, + 555, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 463, + 555, + 536 + ], + "spans": [ + { + "bbox": [ + 313, + 463, + 555, + 536 + ], + "type": "text", + "content": "datasets. We then generated QA pairs for each frame sequence based on the five tasks defined in " + }, + { + "bbox": [ + 313, + 463, + 555, + 536 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 313, + 463, + 555, + 536 + ], + "type": "text", + "content": " for training. 
Evaluations were performed on FineVidBench's scene-level tasks, including Action, Effect, and Speed. To compare with traditional SFT, we also generated and manually reviewed QA pairs for these videos in a supervised setting." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 550, + 400, + 563 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 550, + 400, + 563 + ], + "spans": [ + { + "bbox": [ + 313, + 550, + 400, + 563 + ], + "type": "text", + "content": "5.2. Comparisons" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 570, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 570, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 570, + 556, + 715 + ], + "type": "text", + "content": "Table 4 summarizes the results of the scene-level tasks. After " + }, + { + "bbox": [ + 313, + 570, + 556, + 715 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 313, + 570, + 556, + 715 + ], + "type": "text", + "content": " training, all models showed significant improvement, emphasizing that fragment-level tasks can notably enhance scene-level comprehension. Integrating " + }, + { + "bbox": [ + 313, + 570, + 556, + 715 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 313, + 570, + 556, + 715 + ], + "type": "text", + "content": " with SFT also leads to performance gains, demonstrating that fragment-level training positively impacts SFT and enhances its effectiveness. Surprisingly, in the Speed task, many base models outperformed SFT after applying " + }, + { + "bbox": [ + 313, + 570, + 556, + 715 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 313, + 570, + 556, + 715 + ], + "type": "text", + "content": ", highlighting the importance of fine-grained temporal understanding in distinguishing video speeds.
This improvement likely stems from " + }, + { + "bbox": [ + 313, + 570, + 556, + 715 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 313, + 570, + 556, + 715 + ], + "type": "text", + "content": "'s ability to enhance the model's sensitivity to temporal cues, such as the loss or enhancement of" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 294, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 294, + 156 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 294, + 156 + ], + "type": "text", + "content": "information during acceleration or deceleration, as well as content coherence—all crucial for speed judgment. As expected, " + }, + { + "bbox": [ + 55, + 72, + 294, + 156 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 72, + 294, + 156 + ], + "type": "text", + "content": " currently lags behind SFT, since its training objective is not fully aligned with scene-level tasks. However, we do not expect " + }, + { + "bbox": [ + 55, + 72, + 294, + 156 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 72, + 294, + 156 + ], + "type": "text", + "content": " to replace supervised finetuning; rather, our experiments suggest that it can serve as an effortless and effective complement to SFT." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 157, + 294, + 229 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 157, + 294, + 229 + ], + "spans": [ + { + "bbox": [ + 55, + 157, + 294, + 229 + ], + "type": "text", + "content": "In addition to FineVidBench, we evaluated " + }, + { + "bbox": [ + 55, + 157, + 294, + 229 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 157, + 294, + 229 + ], + "type": "text", + "content": " on three public video understanding benchmarks (Table 5). The results demonstrate consistent improvements across various video tasks, validating " + }, + { + "bbox": [ + 55, + 157, + 294, + 229 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 157, + 294, + 229 + ], + "type": "text", + "content": " as an effective spatiotemporal enhancer for a wide range of video understanding tasks. All models were tested with an 8-frame input." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 239, + 258, + 251 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 239, + 258, + 251 + ], + "spans": [ + { + "bbox": [ + 55, + 239, + 258, + 251 + ], + "type": "text", + "content": "5.3. Ablation and Interpretability Analyses" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 257, + 295, + 460 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 257, + 295, + 460 + ], + "spans": [ + { + "bbox": [ + 55, + 257, + 295, + 460 + ], + "type": "text", + "content": "We evaluated the impact of frame sampling strategies on " + }, + { + "bbox": [ + 55, + 257, + 295, + 460 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 257, + 295, + 460 + ], + "type": "text", + "content": ", as each method provides a unique \"temporal information perspective\" that influences video understanding performance. 
As shown in Table 6, we assessed four strategies on Qwen2-VL in the Action task: random, uniform interval, keyframe, and motion-salient area sampling [37]. Motion-salient area sampling performed best, likely due to its ability to capture continuous motion dynamics, thereby enhancing the model's understanding of action fluidity and temporal detail. In comparison, the other methods had limitations: keyframe sampling misses intermediate action phases, fixed-interval sampling may overlook critical moments, and random sampling lacks temporal consistency. Notably, different datasets may favor different strategies. For example, some datasets may perform better with uniform interval sampling, or their motion features may align better with the model's specific capabilities." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 461, + 295, + 605 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 461, + 295, + 605 + ], + "spans": [ + { + "bbox": [ + 55, + 461, + 295, + 605 + ], + "type": "text", + "content": "We examined the effects of long- and short-range temporal modeling on " + }, + { + "bbox": [ + 55, + 461, + 295, + 605 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 461, + 295, + 605 + ], + "type": "text", + "content": ". In the Consistency Verification task, we constrained the random selection of frame pairs to adjacent frames for local continuity or non-adjacent frames to capture long-range dependencies. As shown in Table 7, both settings decreased " + }, + { + "bbox": [ + 55, + 461, + 295, + 605 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 461, + 295, + 605 + ], + "type": "text", + "content": "'s performance on the Action task of Qwen2-VL, indicating that an overemphasis on either long- or short-range information leads to temporal imbalance and incomplete dynamics. 
This underscores the importance of combining both approaches to leverage their broader temporal span and frame variations for a more comprehensive feature representation." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 605, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 605, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 605, + 295, + 713 + ], + "type": "text", + "content": "We analyzed the attention map of Qwen2-VL on the Action task, particularly in cases where the model's predictions were corrected after " + }, + { + "bbox": [ + 55, + 605, + 295, + 713 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 605, + 295, + 713 + ], + "type": "text", + "content": ". As shown in Figure 6, we found that " + }, + { + "bbox": [ + 55, + 605, + 295, + 713 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 605, + 295, + 713 + ], + "type": "text", + "content": " enhances the model's ability to capture fine-grained spatial changes and temporal dynamics. (1) Spatial Aspects. After " + }, + { + "bbox": [ + 55, + 605, + 295, + 713 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 605, + 295, + 713 + ], + "type": "text", + "content": ", the model shows increased attention to action execution areas, particularly the hands and objects they interact with. 
It shows better sensitivity to small targets, likely due to the Consistency Verification" + } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 328, + 70, + 541, + 357 + ], + "blocks": [ + { + "bbox": [ + 328, + 70, + 541, + 357 + ], + "lines": [ + { + "bbox": [ + 328, + 70, + 541, + 357 + ], + "spans": [ + { + "bbox": [ + 328, + 70, + 541, + 357 + ], + "type": "image", + "image_path": "3d46cc1d8054d539ff26f9bae25b46b21842a0dd60dcb241845cd705e00f23fd.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 365, + 555, + 441 + ], + "lines": [ + { + "bbox": [ + 313, + 365, + 555, + 441 + ], + "spans": [ + { + "bbox": [ + 313, + 365, + 555, + 441 + ], + "type": "text", + "content": "Figure 6. Two exemplary visualizations of the attention map on Qwen2-VL. For each example: top - Original frames; middle - Base (SFT); bottom - " + }, + { + "bbox": [ + 313, + 365, + 555, + 441 + ], + "type": "inline_equation", + "content": "\\mathrm{SF^2T}" + }, + { + "bbox": [ + 313, + 365, + 555, + 441 + ], + "type": "text", + "content": " applied. As shown by the red boxes, after applying " + }, + { + "bbox": [ + 313, + 365, + 555, + 441 + ], + "type": "inline_equation", + "content": "\\mathrm{SF^2T}" + }, + { + "bbox": [ + 313, + 365, + 555, + 441 + ], + "type": "text", + "content": ", the model better focuses on action execution areas and interacting objects. The " + }, + { + "bbox": [ + 313, + 365, + 555, + 441 + ], + "type": "inline_equation", + "content": "\\mathrm{SF^2T}" + }, + { + "bbox": [ + 313, + 365, + 555, + 441 + ], + "type": "text", + "content": " fine-tuned model has the ability to predict the direction of motion, as seen in the trajectories of the red bottle and Cheerios." 
+ } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 472, + 555, + 557 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 472, + 555, + 557 + ], + "spans": [ + { + "bbox": [ + 313, + 472, + 555, + 557 + ], + "type": "text", + "content": "task, which enhances spatial perception by refining sensitivity to subtle image differences. (2) Temporal Aspects. After " + }, + { + "bbox": [ + 313, + 472, + 555, + 557 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 313, + 472, + 555, + 557 + ], + "type": "text", + "content": ", we observed that the model can predict object movement trajectories in certain actions, indicating an advanced level of temporal understanding. This ability likely stems from the sorting task, which strengthens the model's comprehension of action flows and movement patterns." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 314, + 581, + 388, + 594 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 581, + 388, + 594 + ], + "spans": [ + { + "bbox": [ + 314, + 581, + 388, + 594 + ], + "type": "text", + "content": "6. Conclusion" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 605, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 605, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 605, + 555, + 713 + ], + "type": "text", + "content": "In this work, we propose " + }, + { + "bbox": [ + 313, + 605, + 555, + 713 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 313, + 605, + 555, + 713 + ], + "type": "text", + "content": " to overcome the limitations of Video-LLMs in fine-grained video understanding. 
" + }, + { + "bbox": [ + 313, + 605, + 555, + 713 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 313, + 605, + 555, + 713 + ], + "type": "text", + "content": " is an innovative fine-tuning method that eliminates the need for labor-intensive annotations and effectively bypasses the constraints of natural language descriptions. Additionally, we introduce FineVidBench, a benchmark for evaluating Video-LLMs at both scene and fragment levels. In the future, we plan to expand our dataset with larger videos and more tasks to increase its impact." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 153, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 153, + 85 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 153, + 85 + ], + "type": "text", + "content": "Acknowledgments" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 91, + 297, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 91, + 297, + 236 + ], + "spans": [ + { + "bbox": [ + 55, + 91, + 297, + 236 + ], + "type": "text", + "content": "This work is supported by the National Key Research and Development Program of China (No.2020YBF2901202), National Natural Science Foundation of China (NSFC No. 62272184 and No. 62402189), the China Postdoctoral Science Foundation under Grant Number GZC20230894, the China Postdoctoral Science Foundation (Certificate Number: 2024M751012), and the Postdoctor Project of Hubei Province under Grant Number 2024HBBHCXB014, and the \"Pioneer\" and \"Leading Goose\" R&D Program of Zhejiang (No. 2024C01161). The computation is completed in the HPC Platform of Huazhong University of Science and Technology." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 246, + 115, + 258 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 246, + 115, + 258 + ], + "spans": [ + { + "bbox": [ + 56, + 246, + 115, + 258 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 61, + 266, + 296, + 713 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 61, + 266, + 296, + 287 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 266, + 296, + 287 + ], + "spans": [ + { + "bbox": [ + 61, + 266, + 296, + 287 + ], + "type": "text", + "content": "[1] FirstName Alpher. Frobnication. IEEE TPAMI, 12(1):234-778, 2002. 2" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 289, + 296, + 343 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 289, + 296, + 343 + ], + "spans": [ + { + "bbox": [ + 61, + 289, + 296, + 343 + ], + "type": "text", + "content": "[2] Lin Chen, Xilin Wei, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Bin Lin, Zhenyu Tang, et al. Sharegpt4video: Improving video understanding and generation with better captions. arXiv preprint arXiv:2406.04325, 2024. 2, 3" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 345, + 296, + 389 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 345, + 296, + 389 + ], + "spans": [ + { + "bbox": [ + 62, + 345, + 296, + 389 + ], + "type": "text", + "content": "[3] Xiuyuan Chen, Yuan Lin, Yuchen Zhang, and Weiran Huang. Autoeval-video: An automatic benchmark for assessing large vision language models in open-ended video question answering. arXiv preprint arXiv:2311.14906, 2023. 
2" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 390, + 296, + 444 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 390, + 296, + 444 + ], + "spans": [ + { + "bbox": [ + 62, + 390, + 296, + 444 + ], + "type": "text", + "content": "[4] Zesen Cheng, Sicong Leng, Hang Zhang, Yifei Xin, Xin Li, Guanzheng Chen, Yongxin Zhu, Wenqi Zhang, Ziyang Luo, Deli Zhao, et al. Videollama 2: Advancing spatial-temporal modeling and audio understanding in video-llms. arXiv preprint arXiv:2406.07476, 2024. 1, 2, 3, 7" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 446, + 296, + 500 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 446, + 296, + 500 + ], + "spans": [ + { + "bbox": [ + 62, + 446, + 296, + 500 + ], + "type": "text", + "content": "[5] Chaoyou Fu, Yuhan Dai, Yongdong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, et al. Video-mme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis. arXiv preprint arXiv:2405.21075, 2024. 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 502, + 296, + 578 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 502, + 296, + 578 + ], + "spans": [ + { + "bbox": [ + 62, + 502, + 296, + 578 + ], + "type": "text", + "content": "[6] Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna Materzynska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fruend, Peter Yianilos, Moritz Mueller-Freitag, et al. The \"something something\" video database for learning and evaluating visual common sense. In Proceedings of the IEEE international conference on computer vision, pages 5842-5850, 2017. 
2, 3" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 580, + 296, + 645 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 580, + 296, + 645 + ], + "spans": [ + { + "bbox": [ + 62, + 580, + 296, + 645 + ], + "type": "text", + "content": "[7] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18995-19012, 2022. 3" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 62, + 647, + 296, + 690 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 647, + 296, + 690 + ], + "spans": [ + { + "bbox": [ + 62, + 647, + 296, + 690 + ], + "type": "text", + "content": "[8] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. 2" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 62, + 692, + 296, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 692, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 62, + 692, + 296, + 713 + ], + "type": "text", + "content": "[9] Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, and Chunyuan Li. Llava-next-interleave:" + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 554, + 713 + ], + "type": "list", + "angle": 0, + "index": 26, + "blocks": [ + { + "bbox": [ + 333, + 73, + 554, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 333, + 73, + 554, + 95 + ], + "spans": [ + { + "bbox": [ + 333, + 73, + 554, + 95 + ], + "type": "text", + "content": "Tackling multi-image, video, and 3d in large multimodal models. arXiv preprint arXiv:2407.07895, 2024. 
3, 7" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 316, + 98, + 554, + 151 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 98, + 554, + 151 + ], + "spans": [ + { + "bbox": [ + 316, + 98, + 554, + 151 + ], + "type": "text", + "content": "[10] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pages 19730-19742. PMLR, 2023. 2" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 154, + 554, + 198 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 154, + 554, + 198 + ], + "spans": [ + { + "bbox": [ + 316, + 154, + 554, + 198 + ], + "type": "text", + "content": "[11] KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and Yu Qiao. Videochat: Chat-centric video understanding. arXiv preprint arXiv:2305.06355, 2023. 1, 2" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 317, + 201, + 554, + 265 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 201, + 554, + 265 + ], + "spans": [ + { + "bbox": [ + 317, + 201, + 554, + 265 + ], + "type": "text", + "content": "[12] Kunchang Li, Yali Wang, Yinan He, Yizhuo Li, Yi Wang, Yi Liu, Zun Wang, Jilan Xu, Guo Chen, Ping Luo, et al. Mvbench: A comprehensive multi-modal video understanding benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22195-22206, 2024. 2" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 268, + 554, + 300 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 268, + 554, + 300 + ], + "spans": [ + { + "bbox": [ + 316, + 268, + 554, + 300 + ], + "type": "text", + "content": "[13] Yanwei Li, Chengyao Wang, and Jiaya Jia. Llama-vid: An image is worth 2 tokens in large language models. arXiv preprint arXiv:2311.17043, 2023. 
2" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 304, + 554, + 346 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 304, + 554, + 346 + ], + "spans": [ + { + "bbox": [ + 316, + 304, + 554, + 346 + ], + "type": "text", + "content": "[14] Yanwei Li, Chengyao Wang, and Jiaya Jia. Llama-vid: An image is worth 2 tokens in large language models. In European Conference on Computer Vision, pages 323–340. Springer, 2025. 1" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 350, + 554, + 382 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 350, + 554, + 382 + ], + "spans": [ + { + "bbox": [ + 316, + 350, + 554, + 382 + ], + "type": "text", + "content": "[15] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36, 2024. 2" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 317, + 384, + 554, + 428 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 384, + 554, + 428 + ], + "spans": [ + { + "bbox": [ + 317, + 384, + 554, + 428 + ], + "type": "text", + "content": "[16] Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, Lei Li, Sishuo Chen, Xu Sun, and Lu Hou. Tempcompass: Do video llms really understand videos? arXiv preprint arXiv:2403.00476, 2024. 2" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 317, + 430, + 554, + 484 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 430, + 554, + 484 + ], + "spans": [ + { + "bbox": [ + 317, + 430, + 554, + 484 + ], + "type": "text", + "content": "[17] Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, and Zhaopeng Tu. Macaw-llm: Multi-modal language modeling with image, audio, video, and text integration. arXiv preprint arXiv:2306.09093, 2023. 
2" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 317, + 487, + 554, + 531 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 487, + 554, + 531 + ], + "spans": [ + { + "bbox": [ + 317, + 487, + 554, + 531 + ], + "type": "text", + "content": "[18] Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. Video-chatgpt: Towards detailed video understanding via large vision and language models. arXiv preprint arXiv:2306.05424, 2023. 2" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 317, + 533, + 554, + 578 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 533, + 554, + 578 + ], + "spans": [ + { + "bbox": [ + 317, + 533, + 554, + 578 + ], + "type": "text", + "content": "[19] Ben Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, A Neelakantan, P Shyam, G Sastry, A Askell, S Agarwal, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 1, 2020. 1" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 317, + 580, + 554, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 580, + 554, + 643 + ], + "spans": [ + { + "bbox": [ + 317, + 580, + 554, + 643 + ], + "type": "text", + "content": "[20] Ishan Misra, C Lawrence Zitnick, and Martial Hebert. Shuffle and learn: unsupervised learning using temporal order verification. In Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part I 14, pages 527-544. Springer, 2016. 6" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 647, + 554, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 647, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 647, + 554, + 713 + ], + "type": "text", + "content": "[21] Mathew Monfort, Alex Andonian, Bolei Zhou, Kandan Ramakrishnan, Sarah Adel Bargal, Tom Yan, Lisa Brown, Quanfu Fan, Dan Gutfreund, Carl Vondrick, et al. 
Moments in time dataset: one million videos for event understanding. IEEE transactions on pattern analysis and machine intelligence, 42(2):502-508, 2019. 2, 3" + } + ] + } + ], + "index": 25 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 73, + 295, + 713 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 126 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 73, + 294, + 126 + ], + "spans": [ + { + "bbox": [ + 56, + 73, + 294, + 126 + ], + "type": "text", + "content": "[22] Long Qian, Juncheng Li, Yu Wu, Yaobo Ye, Hao Fei, TatSeng Chua, Yueting Zhuang, and Siliang Tang. Momentor: Advancing video large language model with fine-grained temporal reasoning. arXiv preprint arXiv:2402.11435, 2024. 2" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 129, + 294, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 129, + 294, + 183 + ], + "spans": [ + { + "bbox": [ + 56, + 129, + 294, + 183 + ], + "type": "text", + "content": "[23] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140):1-67, 2020. 1" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 184, + 294, + 239 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 184, + 294, + 239 + ], + "spans": [ + { + "bbox": [ + 56, + 184, + 294, + 239 + ], + "type": "text", + "content": "[24] Shuhuai Ren, Linli Yao, Shicheng Li, Xu Sun, and Lu Hou. Timechat: A time-sensitive multimodal large language model for long video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14313-14323, 2024. 
2" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 239, + 294, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 239, + 294, + 270 + ], + "spans": [ + { + "bbox": [ + 56, + 239, + 294, + 270 + ], + "type": "text", + "content": "[25] Fangxun Shu, Lei Zhang, Hao Jiang, and Cihang Xie. Audio-visual llm for video understanding. arXiv preprint arXiv:2312.06720, 2023. 2" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 272, + 295, + 327 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 272, + 295, + 327 + ], + "spans": [ + { + "bbox": [ + 56, + 272, + 295, + 327 + ], + "type": "text", + "content": "[26] Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. Videobert: A joint model for video and language representation learning. In Proceedings of the IEEE/CVF international conference on computer vision, pages 7464-7473, 2019. 1" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 327, + 294, + 370 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 327, + 294, + 370 + ], + "spans": [ + { + "bbox": [ + 56, + 327, + 294, + 370 + ], + "type": "text", + "content": "[27] TencentQQ Multimedia Research Team. Video-ccam: Advancing video-language understanding with causal cross-attention masks. https://github.com/QQ-MM/Video-CCAM, 2024. 4" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 372, + 294, + 426 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 372, + 294, + 426 + ], + "spans": [ + { + "bbox": [ + 56, + 372, + 294, + 426 + ], + "type": "text", + "content": "[28] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024. 
1, 3, 7" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 427, + 294, + 480 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 427, + 294, + 480 + ], + "spans": [ + { + "bbox": [ + 56, + 427, + 294, + 480 + ], + "type": "text", + "content": "[29] Yi Wang, Yinan He, Yizhuo Li, Kunchang Li, Jiashuo Yu, Xin Ma, Xinhao Li, Guo Chen, Xinyuan Chen, Yaohui Wang, et al. Internvid: A large-scale video-text dataset for multimodal understanding and generation. arXiv preprint arXiv:2307.06942, 2023. 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 482, + 294, + 536 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 482, + 294, + 536 + ], + "spans": [ + { + "bbox": [ + 56, + 482, + 294, + 536 + ], + "type": "text", + "content": "[30] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837, 2022. 1" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 537, + 294, + 592 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 537, + 294, + 592 + ], + "spans": [ + { + "bbox": [ + 56, + 537, + 294, + 592 + ], + "type": "text", + "content": "[31] Haiyang Xu, Qinghao Ye, Xuan Wu, Ming Yan, Yuan Miao, Jiabo Ye, Guohai Xu, Anwen Hu, Yaya Shi, Guangwei Xu, et al. Youku-mplug: A 10 million large-scale chinese video-language dataset for pre-training and benchmarks. arXiv preprint arXiv:2306.04362, 2023. 2" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 593, + 294, + 646 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 593, + 294, + 646 + ], + "spans": [ + { + "bbox": [ + 56, + 593, + 294, + 646 + ], + "type": "text", + "content": "[32] Mingze Xu, Mingfei Gao, Zhe Gan, Hong-You Chen, Zhengfeng Lai, Haiming Gang, Kai Kang, and Afshin Dehghan. 
Slowfast-llava: A strong training-free baseline for video large language models. arXiv preprint arXiv:2407.15841, 2024. 2" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 647, + 294, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 647, + 294, + 713 + ], + "spans": [ + { + "bbox": [ + 56, + 647, + 294, + 713 + ], + "type": "text", + "content": "[33] Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, and Cordelia Schmid. Vid2seq: Large-scale pretraining of a visual language model for dense video captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10714-10726, 2023. 2" + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 363 + ], + "type": "list", + "angle": 0, + "index": 19, + "blocks": [ + { + "bbox": [ + 316, + 73, + 553, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 73, + 553, + 117 + ], + "spans": [ + { + "bbox": [ + 316, + 73, + 553, + 117 + ], + "type": "text", + "content": "[34] Yuan Yao, Tianyu Yu, Ao Zhang, Chongyi Wang, Junbo Cui, Hongji Zhu, Tianchi Cai, Haoyu Li, Weilin Zhao, Zhihui He, et al. Minicpm-v: A gpt-4v level mllm on your phone. arXiv preprint arXiv:2408.01800, 2024. 1, 3, 7" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 316, + 118, + 553, + 150 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 118, + 553, + 150 + ], + "spans": [ + { + "bbox": [ + 316, + 118, + 553, + 150 + ], + "type": "text", + "content": "[35] Hang Zhang, Xin Li, and Lidong Bing. Video-llama: An instruction-tuned audio-visual language model for video understanding. arXiv preprint arXiv:2306.02858, 2023. 
1, 2" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 152, + 553, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 152, + 553, + 205 + ], + "spans": [ + { + "bbox": [ + 316, + 152, + 553, + 205 + ], + "type": "text", + "content": "[36] Zijia Zhao, Haoyu Lu, Yuqi Huo, Yifan Du, Tongtian Yue, Longteng Guo, Bingning Wang, Weipeng Chen, and Jing Liu. Needle in a video haystack: A scalable synthetic framework for benchmarking video mllms. arXiv preprint arXiv:2406.09367, 2024. 2" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 208, + 553, + 261 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 208, + 553, + 261 + ], + "spans": [ + { + "bbox": [ + 316, + 208, + 553, + 261 + ], + "type": "text", + "content": "[37] Yuan Zhi, Zhan Tong, Limin Wang, and Gangshan Wu. Mgsampler: An explainable sampling strategy for video action recognition. In Proceedings of the IEEE/CVF International conference on Computer Vision, pages 1513-1522, 2021. 7, 8" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 263, + 553, + 317 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 263, + 553, + 317 + ], + "spans": [ + { + "bbox": [ + 316, + 263, + 553, + 317 + ], + "type": "text", + "content": "[38] Junjie Zhou, Yan Shu, Bo Zhao, Boya Wu, Shitao Xiao, Xi Yang, Yongping Xiong, Bo Zhang, Tiejun Huang, and Zheng Liu. Mlvu: A comprehensive benchmark for multi-task long video understanding. arXiv preprint arXiv:2406.04264, 2024. 2" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 319, + 553, + 363 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 319, + 553, + 363 + ], + "spans": [ + { + "bbox": [ + 316, + 319, + 553, + 363 + ], + "type": "text", + "content": "[39] Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 
Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019. 2" + } + ] + } + ], + "index": 18 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 68, + 542, + 105 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 68, + 542, + 105 + ], + "spans": [ + { + "bbox": [ + 67, + 68, + 542, + 105 + ], + "type": "inline_equation", + "content": "\\mathbf{SF^{2}T}" + }, + { + "bbox": [ + 67, + 68, + 542, + 105 + ], + "type": "text", + "content": ": Self-supervised Fragment Finetuning of Video-LLMs for Fine-Grained Understanding" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 234, + 114, + 375, + 131 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 234, + 114, + 375, + 131 + ], + "spans": [ + { + "bbox": [ + 234, + 114, + 375, + 131 + ], + "type": "text", + "content": "Supplementary Material" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 144, + 295, + 194 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 144, + 295, + 194 + ], + "spans": [ + { + "bbox": [ + 55, + 144, + 295, + 194 + ], + "type": "text", + "content": "In this supplementary material, Section A presents " + }, + { + "bbox": [ + 55, + 144, + 295, + 194 + ], + "type": "inline_equation", + "content": "\\mathrm{SF^2T}" + }, + { + "bbox": [ + 55, + 144, + 295, + 194 + ], + "type": "text", + "content": "'s performance on video caption tasks and additional exemplary visualizations of the attention map, while Section B provides more details about FineVidBench." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 202, + 196, + 216 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 202, + 196, + 216 + ], + "spans": [ + { + "bbox": [ + 55, + 202, + 196, + 216 + ], + "type": "text", + "content": "A. 
More Results and Cases" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 223, + 295, + 331 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 223, + 295, + 331 + ], + "spans": [ + { + "bbox": [ + 55, + 223, + 295, + 331 + ], + "type": "text", + "content": "In addition to FineVidBench and public video understanding benchmarks, we also evaluated the video caption task (Table 1) using GPT-4o mini, assessing fluency, relevance, informativeness, and correctness, with a maximum score of 40. The results show that incorporating " + }, + { + "bbox": [ + 55, + 223, + 295, + 331 + ], + "type": "inline_equation", + "content": "\\mathrm{SF^2T}" + }, + { + "bbox": [ + 55, + 223, + 295, + 331 + ], + "type": "text", + "content": " improves performance, highlighting that fine-grained understanding also benefits video captioning. However, after fine-tuning, MiniCPM-V 2.6 produced shorter responses, leading to a decrease in its informativeness score." + } + ] + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 56, + 337, + 298, + 406 + ], + "blocks": [ + { + "bbox": [ + 56, + 337, + 298, + 406 + ], + "lines": [ + { + "bbox": [ + 56, + 337, + 298, + 406 + ], + "spans": [ + { + "bbox": [ + 56, + 337, + 298, + 406 + ], + "type": "table", + "html": "
MethodsLLaVA-NeXT -VideoMiniCPM-V 2.6VideoLLaMA 2.1Qwen2 -VL
Base33.2032.6122.5329.76
Base+SF2T33.2929.73 ↓30.9930.05
Base(SFT)27.6229.6027.1929.66
Base(SFT)+SF2T30.5031.3128.9431.04
", + "image_path": "e649ab7b72444c37363694726d639ac3bbdb25a6eedefd741ef6f75f8da50a71.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 415, + 295, + 449 + ], + "lines": [ + { + "bbox": [ + 55, + 415, + 295, + 449 + ], + "spans": [ + { + "bbox": [ + 55, + 415, + 295, + 449 + ], + "type": "text", + "content": "Table 1. Performance on video caption task. The results show that incorporating " + }, + { + "bbox": [ + 55, + 415, + 295, + 449 + ], + "type": "inline_equation", + "content": "\\mathrm{SF^2T}" + }, + { + "bbox": [ + 55, + 415, + 295, + 449 + ], + "type": "text", + "content": " yields higher scores (except MiniCPM-V 2.6), likely due to its enhanced temporal sensitivity and understanding." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 55, + 463, + 295, + 499 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 463, + 295, + 499 + ], + "spans": [ + { + "bbox": [ + 55, + 463, + 295, + 499 + ], + "type": "text", + "content": "As shown in Figure 1, we present more attention maps for Qwen2-VL on the Action task, focusing on cases where the model's predictions were corrected after applying " + }, + { + "bbox": [ + 55, + 463, + 295, + 499 + ], + "type": "inline_equation", + "content": "\\mathrm{SF}^2\\mathrm{T}" + }, + { + "bbox": [ + 55, + 463, + 295, + 499 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 509, + 194, + 521 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 509, + 194, + 521 + ], + "spans": [ + { + "bbox": [ + 55, + 509, + 194, + 521 + ], + "type": "text", + "content": "B. 
Details of FinevidBench" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 529, + 212, + 541 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 529, + 212, + 541 + ], + "spans": [ + { + "bbox": [ + 55, + 529, + 212, + 541 + ], + "type": "text", + "content": "B.1. Question-Answer Templates" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 545, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 545, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 545, + 295, + 713 + ], + "type": "text", + "content": "Table 2 delineates the question templates for each task. For the answers, Scene-level tasks include Action task, which are composed of the \"visual synonyms\" and other verbs; Effect task, which are scripted by researchers based on video content; and Speed task, which offer fixed options: fast, slow, normal, and no speed. Fragment-level tasks encompass Frame Count, with answers ranging from 2 to 6; Meaning of Order, using ordinal numbers as responses; Frame Comparison and Adjust or Not, with responses of Yes, No, and Not sure; and Rearrangement, where the answer is a permutation of N numbers, with N representing the number of input frames. The Question-Answer database is generated through a process of template creation followed by iterative refinement using GPT-4. For Action and Effect tasks," + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 144, + 555, + 205 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 144, + 555, + 205 + ], + "spans": [ + { + "bbox": [ + 313, + 144, + 555, + 205 + ], + "type": "text", + "content": "each original video is queried three times using different question formulations. For Speed tasks, one query is conducted for both the original and the speed-altered versions of the video. For Fragment-Level tasks, all five questions are posed for each unique frame count." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 211, + 415, + 224 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 211, + 415, + 224 + ], + "spans": [ + { + "bbox": [ + 313, + 211, + 415, + 224 + ], + "type": "text", + "content": "B.2. Detailed Results" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 229, + 375, + 240 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 229, + 375, + 240 + ], + "spans": [ + { + "bbox": [ + 313, + 229, + 375, + 240 + ], + "type": "text", + "content": "- Scene Level" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 243, + 555, + 519 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 243, + 555, + 519 + ], + "spans": [ + { + "bbox": [ + 313, + 243, + 555, + 519 + ], + "type": "text", + "content": "Table 3 illustrates the types of action effects and examples in the Effect tasks. For the affected objects, common physical attributes and quantities of objects are considered; notably, the positional relationship, spatial distance, and similarity between two objects are examined. Regarding action attributes, the intensity and completeness of the action are evaluated. Special actions include slight movement, multiple-object movements where several affected objects undergo motion, and compound movements involving two or more atomic actions linked in time. Additionally, camera movements and the inclination of the surface on which objects move are assessed. Table 4 presents the results categorized under the Effect classification. Overall, models performed well in Physical Attributes and Action Intensity, likely due to the ability to infer such information by comparing images before and after the action occurs. However, models exhibited subpar performance in Action Completion and Camera Motion. 
The former suggests a lack of understanding regarding the distinction between completed and incomplete actions in terms of their effects, while the latter is attributable to the inherent variability and complexity of camera movements. For other tasks, the majority of models exhibited moderate performance." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 525, + 392, + 537 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 525, + 392, + 537 + ], + "spans": [ + { + "bbox": [ + 313, + 525, + 392, + 537 + ], + "type": "text", + "content": "- Fragment Level" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 540, + 555, + 708 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 540, + 555, + 708 + ], + "spans": [ + { + "bbox": [ + 313, + 540, + 555, + 708 + ], + "type": "text", + "content": "Table 5 presents the results for all tasks in the fragment level under varying input frame counts. From the results, we can observe that except for Video-CCAM, the models' ability to count frames significantly declines as the frame count increases. Regarding the understanding of order concepts, most models show a clear upward trend, except for ShareGPT4Video. Models generally perform well on the frame comparison task, likely due to extensive training with image-text pairs. Since the input consistently involves two frames, the results show no significant variation, as expected. For Rearrangement, all results hover around random values, suggesting that while models recognize incorrect sequence orders, they cannot correct them, indicating a failure to grasp the dynamic processes of videos truly." 
+ } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 82, + 72, + 528, + 349 + ], + "blocks": [ + { + "bbox": [ + 82, + 72, + 528, + 349 + ], + "lines": [ + { + "bbox": [ + 82, + 72, + 528, + 349 + ], + "spans": [ + { + "bbox": [ + 82, + 72, + 528, + 349 + ], + "type": "image", + "image_path": "23cdbc4c335792960b7d2d8a1e4e2928f978a8c323016f3aa1f2b2984b02bfc5.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 358, + 555, + 392 + ], + "lines": [ + { + "bbox": [ + 55, + 358, + 555, + 392 + ], + "spans": [ + { + "bbox": [ + 55, + 358, + 555, + 392 + ], + "type": "text", + "content": "Figure 1. Four exemplary visualizations of the attention map on Qwen2-VL. For each example: top - Original frames; middle - Base (SFT); bottom - " + }, + { + "bbox": [ + 55, + 358, + 555, + 392 + ], + "type": "inline_equation", + "content": "\\mathrm{SF^2T}" + }, + { + "bbox": [ + 55, + 358, + 555, + 392 + ], + "type": "text", + "content": " applied. As highlighted by the red boxes, applying " + }, + { + "bbox": [ + 55, + 358, + 555, + 392 + ], + "type": "inline_equation", + "content": "\\mathrm{SF^2T}" + }, + { + "bbox": [ + 55, + 358, + 555, + 392 + ], + "type": "text", + "content": " enables the model to better focus on action execution areas and interacting objects, while also predicting the direction of motion." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 56, + 400, + 561, + 679 + ], + "blocks": [ + { + "bbox": [ + 56, + 400, + 561, + 679 + ], + "lines": [ + { + "bbox": [ + 56, + 400, + 561, + 679 + ], + "spans": [ + { + "bbox": [ + 56, + 400, + 561, + 679 + ], + "type": "table", + "html": "
TasksQuestion
Scene LevelActionWhich activity can be seen in the video?
EffectAfter the action takes place, what changes occur to the object?
During the process of the action, what changes occur to the object?
After the action takes place, what changes occur in the field of vision?
SpeedWhat is the rate of movement in the video?
Fragment LevelFrame CountCould you please tell me how many frames I have inputted?
Meaning of OrderIn the sequence of frames provided, on which frame does the object first appear?
In the sequence of frames provided, on which frame does the object last appear?
In the sequence of frames provided, in which frames does the object exist?
Frame ComparisonAre the two frames I provided exactly the same?
Adjust or NotThese frames are all from the same video and capture the dynamic process of an action. The order of these frames may have been mixed up. Do we need to rearrange them to match the normal execution sequence of the action?
RearrangementThese frames are all from the same video and depict the dynamic process of an action. The order of these frames may have been mixed up. Based on the connections between the image frames, which of the following options represents the most appropriate sequence?
", + "image_path": "cd470606075ce8039139134a6a30f3dfda262ecce420c30962c766eb0017936c.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 687, + 555, + 709 + ], + "lines": [ + { + "bbox": [ + 55, + 687, + 555, + 709 + ], + "spans": [ + { + "bbox": [ + 55, + 687, + 555, + 709 + ], + "type": "text", + "content": "Table 2. Question templates authored by researchers undergo revision by GPT-4o, which rephrases them to maintain the original intent while introducing varied sentence structures and vocabulary." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + } + ], + "discarded_blocks": [], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 61, + 69, + 552, + 702 + ], + "blocks": [ + { + "bbox": [ + 61, + 69, + 552, + 702 + ], + "lines": [ + { + "bbox": [ + 61, + 69, + 552, + 702 + ], + "spans": [ + { + "bbox": [ + 61, + 69, + 552, + 702 + ], + "type": "table", + "html": "
Effect TypeExamples
Object PropertiesPhysical PropertiesWhat modifications occur to the wafer stick as a result of the action? \nA. Not sure B. Nothing happened C. It broke D. It deformed
QuantityOnce the action occurs, what changes are made to the mugs? \nA. There are about 5 or 6 mugs here B. There are about 1 or 2 mugs here \nC. There are about 3 or 4 mugs here D. Not sure
Object RelationshipsPositionWhat adjustments take place in the egg following the action? \nA. An object appeared on top of it B. An object appeared in front of it \nC. An object appeared inside it D. An object appeared behind it
DistanceWhat changes happen to the chili and the cucumber after the action is performed? \nA. They grew more distant B. It's unclear \nC. They came nearer D. Their separation remained consistent
SimilarityWhat adjustments take place in the box following the action? \nA. One thing appeared above it \nB. Several things appeared above it, and they looked different from each other \nC. Not sure \nD. Several things appeared above it, and they looked similar to each other
Action PropertiesIntensityWhat alterations are observed in the paper cups after the action is taken? \nA. Not sure B. It collapsed C. It broke D. It remained standing
CompletionAfter the action is done, what modifications occur to the onion? \nA. It appears unchanged from how it was initially \nB. Something was visible at the back of it \nC. An item appeared on its surface \nD. Something was detected below it
Special ActionsSlight MovementWhat adjustments take place in the shower pouf during the action? \nA. I'm uncertain B. It dropped to the ground C. It was nearly at rest D. It ascended
Mutiple-ObjectWhat happens to the two chargers while the action is executed? \nA. They crossed paths B. They impacted each other \nC. They proceeded in the same direction D. It's unclear
CompoundDuring the process of action, what modifications are observed in the plate? \nA. It fell after leaving the hand and did not come back \nB. It was continuously held without any separation \nC. It was detached from the hand but later reattached \nD. Unclear
OthersCamera movementWhat alterations are evident in the flower while the action is carried out? \nA. It appeared to move to the right in view B. It appeared to ascend in view \nC. It appeared to move to the left in view D. I can't determine
Surface InclinationAfter the action is taken, what changes are noticed in the cup? \nA. It was stationary on a tilted surface B. It was stationary on a horizontal surface \nC. Not sure D. It rolled down a sloped surface
", + "image_path": "0319bb62a6c7c45a00954b86bf7d0f3bcf0e06eb20112b16245d801ed8821d52.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 251, + 711, + 359, + 721 + ], + "lines": [ + { + "bbox": [ + 251, + 711, + 359, + 721 + ], + "spans": [ + { + "bbox": [ + 251, + 711, + 359, + 721 + ], + "type": "text", + "content": "Table 3. Types of Effect Task" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + } + ], + "discarded_blocks": [], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 66, + 89, + 545, + 333 + ], + "blocks": [ + { + "bbox": [ + 66, + 89, + 545, + 333 + ], + "lines": [ + { + "bbox": [ + 66, + 89, + 545, + 333 + ], + "spans": [ + { + "bbox": [ + 66, + 89, + 545, + 333 + ], + "type": "table", + "html": "
Effect Type (Random: 25.00)LLaVA-NeXT-VideoMiniCPM-V 2.6VideoLLaMA 2.1Qwen2-VLShareGPT4-VideoVideo-CCAMAvg.
Object PropertiesPhysical Properties44.2049.2852.1760.8747.5463.4852.92
Quantity33.3347.6256.1958.1041.9060.9549.68
Object RelationshipsPosition41.0351.2849.2354.3640.3150.3647.76
Distance39.5646.6740.8940.4440.4448.4442.74
Similarity42.8649.5247.6252.3838.1059.0548.25
Action PropertiesIntensity40.2750.6753.3361.3352.5362.1353.38
Completion39.3143.6838.8535.6348.0534.0239.92
Special ActionsSlight Movement47.9243.7541.6772.9235.4254.5849.38
Multiple-Object50.0060.6776.6766.6740.6758.6758.89
Compound48.1544.4451.1152.5935.5653.3347.53
OthersCamera Movement33.3322.2228.8926.6732.2228.8928.70
Surface Inclination28.5749.5258.5760.4841.4351.4348.33
", + "image_path": "1c017519b4dd297ab87f91be6c92044ca1ad34f27731bb71dfa53be4193d82a8.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 59, + 421, + 552, + 658 + ], + "blocks": [ + { + "bbox": [ + 55, + 342, + 555, + 376 + ], + "lines": [ + { + "bbox": [ + 55, + 342, + 555, + 376 + ], + "spans": [ + { + "bbox": [ + 55, + 342, + 555, + 376 + ], + "type": "text", + "content": "Table 4. The results of the Effect task, dissected into more granular categories. Overall, Qwen2-VL achieved the best results, with Video-CCAM closely following. Notably, models exhibit suboptimal performance in distinguishing completed from incomplete actions, indicating a lack of ability to associate actions with the resulting state changes of objects." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 59, + 421, + 552, + 658 + ], + "lines": [ + { + "bbox": [ + 59, + 421, + 552, + 658 + ], + "spans": [ + { + "bbox": [ + 59, + 421, + 552, + 658 + ], + "type": "table", + "html": "
Input(Random)LLaVA-NeXT-VideoMiniCPM-V 2.6VideoLLaMA 2.1Qwen2-VLShareGPT4VideoVideo-CCAM
3q125.0020.3393.8242.8697.2560.9914.18
q225.0019.2348.9035.7129.1276.1538.35
q333.3346.9680.6671.2771.8288.4166.34
q433.3369.2365.3881.5480.0075.5580.06
q525.0023.8523.0833.0827.6923.6823.36
4q125.0019.7790.6639.8996.6316.788.96
q225.0024.1660.6741.0133.1565.4243.65
q333.3358.7678.5376.8477.4087.2363.63
q433.3374.4279.8593.8095.3587.5094.46
q525.0019.3814.7324.8120.9323.1022.94
5q125.0017.9886.447.4596.050.0047.61
q225.0028.8159.8950.2837.8541.0055.24
q333.3355.6867.6180.1174.4389.6964.83
q433.3382.8184.3894.5396.8891.5596.49
q525.0018.7516.4122.6618.7523.2923.92
", + "image_path": "62e0cea972697992ac6e19803b67f81b78c8f447611fcd93268e33d68991c90f.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 669, + 555, + 692 + ], + "lines": [ + { + "bbox": [ + 55, + 669, + 555, + 692 + ], + "spans": [ + { + "bbox": [ + 55, + 669, + 555, + 692 + ], + "type": "text", + "content": "Table 5. The results of all tasks in Fragment-Level under varying input frame counts. Questions q1 through q5 correspond to Frame Count, Meaning of Order, Frame Comparison, Adjust or Not, and Rearrangement, respectively." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + } + ], + "discarded_blocks": [], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07839/e60cb9ee-e216-46b4-a879-cab7695d37bd_content_list.json b/data/2025/2504_07xxx/2504.07839/e60cb9ee-e216-46b4-a879-cab7695d37bd_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b989efbde8cd7ebc39262960aaeca47f2e7d31d9 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07839/e60cb9ee-e216-46b4-a879-cab7695d37bd_content_list.json @@ -0,0 +1,4005 @@ +[ + { + "type": "text", + "text": "Deep Learning-based Intrusion Detection Systems: A Survey", + "text_level": 1, + "bbox": [ + 88, + 113, + 903, + 136 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "ZHIWEI XU, YUJUAN WU, SHIHENG WANG, JIABAO GAO, TIAN QIU, ZIQI WANG, HAI WAN, and XIBIN ZHAO*, KLISS, BNRist, School of Software, Tsinghua University, China", + "bbox": [ + 86, + 150, + 907, + 184 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Intrusion Detection Systems (IDS) have long been a hot topic in the cybersecurity community. In recent years, with the introduction of deep learning (DL) techniques, IDS have made great progress due to their increasing generalizability. 
The rationale behind this is that by learning the underlying patterns of known system behaviors, IDS detection can be generalized to intrusions that exploit zero-day vulnerabilities. In this survey, we refer to this type of IDS as DL-based IDS (DL-IDS). From the perspective of DL, this survey systematically reviews all the stages of DL-IDS, including data collection, log storage, log parsing, graph summarization, attack detection, and attack investigation. To accommodate current researchers, a section describing the publicly available benchmark datasets is included. This survey further discusses current challenges and potential future research directions, aiming to help researchers understand the basic ideas and visions of DL-IDS research, as well as to motivate their research interests.", + "bbox": [ + 86, + 193, + 909, + 345 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "CCS Concepts: $\\cdot$ Security and privacy $\\rightarrow$ Intrusion detection systems; $\\cdot$ Computing methodologies $\\rightarrow$ Machine learning; $\\cdot$ General and reference $\\rightarrow$ Surveys and overviews.", + "bbox": [ + 86, + 352, + 907, + 384 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Additional Key Words and Phrases: Intrusion detection systems, deep learning, survey", + "bbox": [ + 86, + 388, + 742, + 405 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "ACM Reference Format:", + "text_level": 1, + "bbox": [ + 88, + 409, + 296, + 423 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao. 2025. Deep Learning-based Intrusion Detection Systems: A Survey. J. 
ACM 1, 1, Article 1 (October 2025), 38 pages.", + "bbox": [ + 86, + 425, + 909, + 456 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 INTRODUCTION", + "text_level": 1, + "bbox": [ + 88, + 468, + 286, + 483 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "The promising Internet of Everything connects people, processes, data, and things through the Internet [51], bringing convenience and efficiency to the world. Yet its inevitable security vulnerabilities could be exploited by deliberate attackers. With increasingly sophisticated attack methods such as Advanced Persistent Threat (APT), the attackers are in a threatening position to sabotage network systems or steal sensitive data. The detection of intrusions, particularly based on DL, has consequently been a prominent topic in the cybersecurity community.", + "bbox": [ + 86, + 488, + 909, + 588 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "The automated system for detecting intrusions is known as IDS. The limitations of IDS may result in terrible damage to enterprises. One example is the recent Colonial Pipeline Ransomware Attack [16]. In April 2021, the hacking group DarkSide launched a ransomware attack on Colonial Pipeline, the biggest oil pipeline company in the United States, using an unused VPN account. Due to this attack, 5,500 miles of transportation pipelines were forced to shut down, affecting nearly $45\\%$ of the fuel supply on the Eastern Coast. The Colonial Pipeline paid $4.4 million ransom money, in addition to the theft of over 100 GB of data. 
If the malware intrusion can be detected in time, the influence of this attack can be greatly mitigated or even eliminated.", + "bbox": [ + 86, + 588, + 909, + 722 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1.1 Tough but Bright Intrusion Detection System", + "text_level": 1, + "bbox": [ + 88, + 734, + 561, + 751 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "IDS have been increasingly challenged to effectively deal with intrusions for decades. It is noted in Figure 1(a) that the number of $\\mathrm{CVE}^1$ records has presented an accelerating uptrend, especially", + "bbox": [ + 86, + 755, + 907, + 790 + ], + "page_idx": 0 + }, + { + "type": "aside_text", + "text": "arXiv:2504.07839v3 [cs.CR] 13 Oct 2025", + "bbox": [ + 30, + 255, + 72, + 740 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*Xibin Zhao is the corresponding author.", + "bbox": [ + 86, + 797, + 368, + 811 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "1Common Vulnerabilities and Exposures (CVE) is a security project for security information sharing and vulnerability management. CVE is a publicly accessible database where each vulnerability has a common name and a unique identifier.", + "bbox": [ + 86, + 811, + 905, + 838 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "Authors' address: Zhiwei Xu; Yujuan Wu; Shiheng Wang; Jiabao Gao; Tian Qiu; Ziqi Wang; Hai Wan; Xibin Zhao, KLISS, BNRist, School of Software, Tsinghua University, Beijing, China, zxb@tsinghua.edu.cn.", + "bbox": [ + 86, + 848, + 907, + 876 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "2025.ACM 0004-5411/2025/10-ART1", + "bbox": [ + 88, + 886, + 337, + 900 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "https://doi.org/XXXXXXXXXXXXXXXXXX", + "bbox": [ + 88, + 900, + 351, + 913 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. 
Publication date: October 2025.", + "bbox": [ + 479, + 933, + 907, + 947 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/0a3a671b99c38b8ebc4032a6fcaa55adab684f519233bcac83c7d147cbdd5f40.jpg", + "image_caption": [ + "(a) Trend of CVE records and IDS papers.", + "Fig. 1. Recent situation of IDS." + ], + "image_footnote": [], + "bbox": [ + 96, + 140, + 485, + 297 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/38a51c03212e981f824eb90d45503951c547858345408e45db6fc22f829de565.jpg", + "image_caption": [ + "(b) Category of CNNVD vulnerabilities." + ], + "image_footnote": [], + "bbox": [ + 534, + 130, + 890, + 288 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "in 2016, which suffered a sharp rise. After 2016, the number of CVE records stays growing at a high speed, reaching around 30,000 in 2024. Besides, according to the $\mathrm{CNNVD}^2$ report shown in Figure 1(b), we can observe that almost all (i.e., $97.2\%$ ) vulnerabilities are medium risk or above, with high and critical risk accounting for $40\%$ of them. The growing number of vulnerabilities and the large percentage of high-risk vulnerabilities both reveal the tough situation faced by IDS.", + "bbox": [ + 86, + 386, + 907, + 468 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Nevertheless, an interesting observation from Figure 1(a) is that, against the number of CVE records, DL-IDS papers also started to emerge in 2016 and their amount grew year by year subsequently. We can notably find that the growth trend of DL-IDS papers is nearly the same as that of CVE records. The potential reason can be speculated as DL is an effective way for IDS to cope with their tough situation. Borrowing the strong generalizability from DL techniques, DL-IDS detection can be extended to zero-day intrusions that are almost impossible to detect with the traditional IDS. Some studies [219, 237, 250] demonstrate this speculation. 
In their experiments, DL-IDS are all reported with an achievement of over $90\%$ detection accuracy while the traditional IDS sometimes only have around $50\%$ detection accuracy.", + "bbox": [ + 86, + 469, + 909, + 618 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The IDS future is not only tough but also bright with the aid of DL - it is evident that the growth in the number of IDS papers primarily comes from those based on DL techniques. The proportion of DL-IDS papers rises from about $0\%$ in 2016 to a very high $65.7\%$ in 2024. This phenomenon reflects the great interests and visions of the cybersecurity community in DL-IDS. To date, the DL-IDS development has almost reached a decade, and thus, it is time, and also essential, to revisit how DL and IDS interact, identify emerging trends, and guide future research directions.", + "bbox": [ + 86, + 618, + 907, + 720 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "1.2 Related Surveys and Our Scope", + "text_level": 1, + "bbox": [ + 88, + 731, + 432, + 747 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Unfortunately, none of the related surveys in the last decade have systematically investigated DL-IDS. On one hand, some related surveys may only focus on a few parts of DL-IDS, such as log parsers [138, 188, 255], datasets [201], attack modeling [10, 201], and specific DL technique type [17]. On the other hand, while several surveys [21, 83, 96, 105, 127, 128, 140, 150, 162, 163, 270] involve some DL-based approaches, they did not review DL-IDS from the perspective of DL particularly.", + "bbox": [ + 86, + 751, + 909, + 836 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Partial Investigation for DL-IDS. The surveys [10, 138, 188, 201, 255] are the typical example papers describing only a few parts of DL-IDS. Among them, Adel et al. 
[10] mainly studied various", + "bbox": [ + 86, + 843, + 907, + 877 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "1:2", + "bbox": [ + 90, + 84, + 113, + 95 + ], + "page_idx": 1 + }, + { + "type": "header", + "text": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao", + "bbox": [ + 236, + 81, + 907, + 97 + ], + "page_idx": 1 + }, + { + "type": "page_footnote", + "text": "2Chinese National Vulnerability Database (CNNVD) is a Chinese national database that catalogs security vulnerabilities in software and hardware products. CNNVD also provides unique identifiers and descriptions similar to CVE.", + "bbox": [ + 86, + 886, + 907, + 915 + ], + "page_idx": 1 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 86, + 933, + 514, + 947 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "techniques and solutions that were tailored to APT attacks, as well as discussed where to make the APT detection framework smart. Scott et al. [138] and Tejaswini et al. [188] both discussed online log parsers and their applications for anomaly detection. Branka et al. [201] reviewed APT datasets and their creation, along with feature engineering in attack modeling. Zhang et al. [255] created an exhaustive taxonomy of system log parsers and empirically analyzed the critical performance and operational features of 17 open-source log parsers. Tristan et al. [17] focused on the applications of graph neural networks (GNNs) to IDS. For DL-IDS, all the above surveys are obviously insufficient to advance research understanding and provide theoretical suggestions.", + "bbox": [ + 86, + 116, + 909, + 252 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Different Perspectives from DL-IDS. Another type of existing surveys involved DL-IDS but studied them from the other perspectives [4, 21, 83, 96, 105, 127, 128, 140, 150, 162, 163, 270]. 
Specifically, the surveys [105, 128] aim to give an elaborate image of IDS and comprehensively explain methods from signature checking to anomaly detection algorithms. Originating from log data, the survey [83] presented a detailed overview of automated log analysis for reliability engineering and introduced three tasks including anomaly detection, failure prediction, and failure diagnosis. In survey [162], Nasir et al. explored the efficacy of swarm intelligence on IDS and highlighted the corresponding challenges in multi-objective IDS problems.", + "bbox": [ + 86, + 259, + 909, + 393 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Additionally, data types inspire and contribute significantly to the related surveys, whose categories include host-based IDS (HIDS) [21, 127, 140, 150, 270] and network-based IDS (NIDS) [4, 163]. Bridges et al. [21] focused on IDS leveraging host data for the enterprise network. Martins et al. [150] brought the HIDS concept to the Internet of Things. As a representative form of data in HIDS, the provenance graph [127, 140, 270] and its reduction techniques [96] were also extensively studied in survey literature. In NIDS, Nassar et al. [163] studied the techniques of network intrusion detection, especially those with machine learning (ML). Ahmad et al. [4] further incorporated ML and DL into their NIDS survey and studied the downstream learning methods in detail.", + "bbox": [ + 86, + 393, + 909, + 525 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The above surveys, however, lack investigation and discussion about DL-IDS. DL techniques are only what they cover or involve, rather than the primary focus of their research.", + "bbox": [ + 86, + 526, + 907, + 559 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Scope of Our Survey. Our work distinguishes the related surveys by providing a comprehensive literature review of DL-IDS. 
From the perspective of DL, our survey elaborates on a common workflow of DL-IDS and introduces the corresponding taxonomies of all modules within this workflow. Moreover, our survey discusses the possible challenges and research visions for DL-IDS, which include many DL-related issues that have not yet been studied by the existing surveys.", + "bbox": [ + 86, + 568, + 909, + 652 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "1.3 Contributions and Organization", + "text_level": 1, + "bbox": [ + 88, + 665, + 440, + 681 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In summary, this survey makes the following contributions:", + "bbox": [ + 86, + 686, + 594, + 702 + ], + "page_idx": 2 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Realizing that IDS has made significant progress with the aid of DL over the last decade, we present a thorough survey for DL-IDS, formalizing its definition and clarifying its location among other types of IDS.", + "- We outline the common workflow for DL-IDS, consisting of the data management stage and intrusion detection stage. We further systematically illustrate the research advances in all modules of this workflow and innovatively taxonomize the papers based on DL techniques.", + "- From the perspective of DL, we discuss the potential challenges and future directions for DL-IDS, especially highlighting the ones unique to DL-IDS to guide current researchers." + ], + "bbox": [ + 119, + 706, + 903, + 852 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Survey Structure. Section 2 introduces the survey methodology of this work. Section 3 describes the background knowledge about DL-IDS. Section 4 and Section 5 elaborate on the recent research trends in the data management stage and the intrusion detection stage, respectively. 
Section 6 illustrates", + "bbox": [ + 86, + 865, + 909, + 915 + ], + "page_idx": 2 + }, + { + "type": "header", + "text": "Deep Learning-based Intrusion Detection Systems: A Survey", + "bbox": [ + 90, + 81, + 504, + 95 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "1:3", + "bbox": [ + 884, + 84, + 907, + 95 + ], + "page_idx": 2 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 479, + 933, + 907, + 947 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/51f508f9743f58eee7775f97202b0c04cec2698458e605ca57003fe41af027ad.jpg", + "image_caption": [ + "Fig. 2. Source distribution of references." + ], + "image_footnote": [], + "bbox": [ + 115, + 125, + 588, + 281 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/c488b92b5c3650228849285903411373eee7c627918235cebb15b24e5f35b476.jpg", + "image_caption": [ + "Fig. 3. Types of IDS." + ], + "image_footnote": [], + "bbox": [ + 623, + 130, + 884, + 270 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "the benchmark datasets and their feature dimensions. Section 7 discusses the visions and challenges for future research. Lastly, the conclusion is presented in Section 8.", + "bbox": [ + 86, + 338, + 907, + 373 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "2 SURVEY METHODOLOGY", + "text_level": 1, + "bbox": [ + 88, + 387, + 376, + 402 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To start our literature review, we selected several popular literature databases, including Web of Science [12], IEEE Xplore [95], and Scopus [50], as our search engines. 
For search keywords, we started from generalized terms associated with DL-IDS, such as intrusion detection system, attack investigation, anomaly detection, threat detection, Advanced Persistent Threats, data provenance analysis, forensic analysis, causality analysis, log collection, log compression, log parsing, log storage, and log summarization. Then, we employed Connected Papers [168], a visual tool that assists researchers in finding relevant academic papers, to ensure that we did not overlook the typical related literature. Since the retrieved literature was numerous and rather general for the scope of DL-IDS, we carefully checked the topics and prioritized only academic papers that are highly related. Finally, all these papers were filtered based on the impact factors of their published journals or academic conferences, leaving us a total of 131 papers.", + "bbox": [ + 86, + 408, + 909, + 590 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We identified a few venues that have published many significant papers in the field of DL-IDS, such as Usenix Security, S&P, CCS, NDSS, TIFS, TDSC, ICSE, ASE, ESEC/FSE, TSE, OSDI, NSDI, EuroSys, SOSP, ATC, ICML, KDD, WWW, TKDE, ICDE, and SCIS. We broadly divide them into five categories: security, software, system, data, and interdisciplinary. The distribution of these papers with their published years is reported in Figure 2.", + "bbox": [ + 86, + 591, + 909, + 675 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3 BACKGROUND", + "text_level": 1, + "bbox": [ + 88, + 688, + 273, + 704 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.1 Intrusion Detection System", + "text_level": 1, + "bbox": [ + 88, + 709, + 397, + 726 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.1.1 Definition of IDS. IDS have long been a central topic in the cybersecurity community, with research that can be traced back to the 1990s [181] or even earlier. 
According to the existing literature [64, 128, 162, 163, 181, 236], IDS can be defined progressively as follows:", + "bbox": [ + 86, + 730, + 907, + 781 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Definition 3.1. (Intrusion Detection System). Intrusion detection system is a software or hardware system to automate the process of intrusion detection.", + "bbox": [ + 86, + 791, + 907, + 825 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Definition 3.2. (Intrusion Detection). Intrusion detection is the process of monitoring and analyzing the events occurring in a computer or a network for signs of intrusions.", + "bbox": [ + 86, + 836, + 907, + 869 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Definition 3.3. (Intrusion). Intrusion is the attempt to undermine the confidentiality, integrity, and availability of a computer or a network, or to circumvent its security facilities.", + "bbox": [ + 86, + 880, + 907, + 913 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "1:4", + "bbox": [ + 90, + 84, + 113, + 95 + ], + "page_idx": 3 + }, + { + "type": "header", + "text": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao", + "bbox": [ + 236, + 81, + 907, + 97 + ], + "page_idx": 3 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 86, + 933, + 514, + 947 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.1.2 Types of IDS. Generally, IDS can be further categorized into various types based on their data sources [270]. Well-known types include NIDS, HIDS, and Provenance-based IDS (PIDS). Figure 3 depicts IDS types, their data sources, and the location of DL-IDS within those IDS types.", + "bbox": [ + 86, + 118, + 905, + 168 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Definition 3.4. (NIDS). 
NIDS are IDS whose data sources are network traffic between hosts.", + "bbox": [ + 106, + 177, + 874, + 193 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "NIDS takes network traffic between hosts as its input. It is usually deployed at the edge or a key node of the network, allowing it to secure the whole computer system with limited data. Benefiting from the global perception of the whole computer system, NIDS does well in detecting large-scale multi-host intrusions such as Distributed Denial-of-Service (DDoS) attacks. However, NIDS performs poorly on intra-host intrusions and has difficulty analyzing intrusions carried in encrypted network traffic.", + "bbox": [ + 86, + 202, + 911, + 287 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Definition 3.5. (HIDS). HIDS are IDS whose data sources are system events within hosts.", + "bbox": [ + 106, + 294, + 853, + 311 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "HIDS, in contrast, uncovers intrusions through system events of individual hosts. Its data sources include file system changes, system calls, process activities, etc. HIDS can conduct comprehensive detection for a host, and is not affected by encrypted data since the decryption is also performed in the host. Nevertheless, the deployment and maintenance of HIDS are relatively difficult. HIDS must be adapted to hosts with different operating systems and runtime environments, which simultaneously introduces computation overhead on the hosts.", + "bbox": [ + 86, + 319, + 907, + 418 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Definition 3.6. (PIDS). PIDS are HIDS whose data sources are data provenance.", + "bbox": [ + 106, + 427, + 765, + 444 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Definition 3.7. (Data Provenance). 
Data provenance refers to the origin and the processes that an event has undergone from its creation to its current state.", + "bbox": [ + 86, + 452, + 907, + 486 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "PIDS is a subtype of HIDS, particularly referring to HIDS that utilizes data provenance as its data source. Since it analyzes the intact trail of events, PIDS is proven effective in coping with advanced attacks [270]. By performing causality analysis on data provenance, PIDS can significantly reduce false alarms. Yet, data provenance is very expensive to obtain, requiring complicated technical tools for monitoring operating systems, network protocols, and applications.", + "bbox": [ + 86, + 495, + 907, + 577 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Definition 3.8. (DL-IDS.) DL-IDS are IDS that utilize DL techniques to detect intrusions, whose data sources can be network traffic between hosts, system events within hosts, or their combination.", + "bbox": [ + 86, + 587, + 909, + 620 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Unlike the other types of IDS, such as NIDS and HIDS, which are categorized by their data sources, DL-IDS is defined by the techniques used in intrusion detection. As shown in Figure 3, the data source of DL-IDS can be network traffic, system events, or both. Taking advantage of the generalizability of DL techniques, DL-IDS can handle zero-day attacks precisely and has thus attracted great interest from the cybersecurity community recently.", + "bbox": [ + 86, + 629, + 911, + 712 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.2 Common Workflow", + "text_level": 1, + "bbox": [ + 88, + 725, + 329, + 738 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Figure 4 depicts the common workflow of DL-IDS. 
It usually consists of 7 steps: raw data, collection, storage, parsing, summarization, detection, and investigation, which are explained as follows:", + "bbox": [ + 86, + 745, + 909, + 779 + ], + "page_idx": 4 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Raw Data is unprocessed data for uncovering attack details or benign system behaviors. The raw data analyzed by cyber experts commonly include network traffic and audit logs.", + "- Collection refers to the use of data collection tools for different systems, such as cloud and cross-platform systems, to gather valuable raw data that describes important system behavior scenarios.", + "- Storage involves storage and search engines to manage large amounts of collected log data. Log data is labeled with indexes for efficient retrieval.", + "- Parsing is the act of analyzing the stored logs and other useful data. It extracts and organizes the underlying information within the data for subsequent processing." + ], + "bbox": [ + 119, + 781, + 907, + 913 + ], + "page_idx": 4 + }, + { + "type": "header", + "text": "Deep Learning-based Intrusion Detection Systems: A Survey", + "bbox": [ + 90, + 83, + 504, + 95 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "1:5", + "bbox": [ + 884, + 84, + 907, + 95 + ], + "page_idx": 4 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 479, + 933, + 907, + 947 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/7baacdfa9d3f131212e2cfa60a6a47974c5e8cc2cb426db45d3e0e1e40f66bc0.jpg", + "image_caption": [ + "Fig. 4. Common workflow of DL-IDS." + ], + "image_footnote": [], + "bbox": [ + 106, + 127, + 890, + 418 + ], + "page_idx": 5 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Summarization refers to the operation of summarizing large volumes of parsed data based on its semantics. 
This reduces storage costs while preserving critical events.", + "- Detection is the process of using detection tools such as models and algorithms to detect anomalies in analyzed data to determine whether the data contains intrusions.", + "- Investigation is the further process of Detection. It reconstructs the entire attack scenarios from the detected malicious data by analyzing the causal relationships between them." + ], + "bbox": [ + 119, + 480, + 905, + 580 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Note that DL-IDS can also follow other orders of these steps, skipping some of them. For example, log data can be parsed before being stored [135]. Attack investigation can be directly conducted without detection of intrusions [9]. This survey is organized by the common workflow.", + "bbox": [ + 86, + 581, + 909, + 633 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4 DATA MANAGEMENT", + "text_level": 1, + "bbox": [ + 88, + 645, + 335, + 659 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "This section elaborates on the data management stage of DL-IDS, including data collection (Section 4.1), log storage (Section 4.2), and log parsing (Section 4.3).", + "bbox": [ + 86, + 665, + 909, + 700 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.1 Data Collection", + "text_level": 1, + "bbox": [ + 88, + 712, + 286, + 726 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The first step of DL-IDS is to collect useful data from raw data. Raw data refers to records that document events, activities, and operations that occur within a system, application, or network (a.k.a., logs), represented by audit logs or application logs within hosts, or network traffic between hosts. By collecting useful logs, DL-IDS can monitor the health condition and operational status of information systems [141, 255]. 
Common attributes of logs include timestamp, event type, subject, object, description, etc.", + "bbox": [ + 86, + 731, + 907, + 831 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "On different platforms, logs possess different formats and organizational structures [21, 127, 255, 270]. To counter this, researchers have created various log collection tools specialized for various systems. For example, in Windows systems, Event Viewer is employed to manage system logs. Yet in Linux systems, log files are usually saved in the /var/log/ directory. The classification of data collection tools is shown in Table 1, including Windows, Linux, Cloud, and Cross platforms.", + "bbox": [ + 86, + 831, + 909, + 915 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "1:6", + "bbox": [ + 92, + 84, + 113, + 94 + ], + "page_idx": 5 + }, + { + "type": "header", + "text": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao", + "bbox": [ + 236, + 81, + 907, + 95 + ], + "page_idx": 5 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 86, + 933, + 514, + 947 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/d77f3601530f6283f668e7c2a7916f80f8b6049a2d4b3f3fdea4dbac64ee1bf2.jpg", + "table_caption": [ + "Table 1. Log collection tools on different platforms." + ], + "table_footnote": [], + "table_body": "
Platform TypeToolDescription
Windows platformETW [153]Providing developers comprehensive event tracing ability
Panorama [245]Hardware-level and OS-aware dynamic taint tracking
Linux platformauditd [68]Native tools supported by the Linux kernel
sysdig [106]Focusing on runtime monitoring and fault troubleshooting
CamFlow [170]Self-contained, easily maintainable implementation
Tracee [210]Exposing system information as events based on eBPF
DataTracker [200]Monitoring unmodified binaries without their source codes
Inspector [206]Parallel provenance library that is POSIX-compliant
AutoLog [94]Analyzing source code so that programs need not be run
eAudit [193]Fast, scalable and easily deployable data collection tool
Cloud platformK8S tools [27, 87]Adapting to cloud scenarios to meet enterprise needs
saBPF [129]An extension tool of eBPF for containers in cloud computing
ISDC [158]Eliminating overheads on in-network resources
Cross platformDTrace [66]Real-time tracing framework that supports many platforms
SPADE [61]Novel provenance kernel for cross-platform logging
", + "bbox": [ + 137, + 145, + 858, + 388 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.1.1 Windows Platform Tools. Event Tracing for Windows (ETW) [153] is a powerful event tracing mechanism provided by Microsoft. It consists of three components: providers, controllers, and consumers. ETW instruments applications to provide kernel event logging and allows developers to start and stop event tracing sessions momentarily. Panorama [245] exploits hardware-level and OS-aware dynamic taint tracking to collect logs. Moreover, it develops a series of automated tests to detect malware based on several kinds of anomalous behaviors.", + "bbox": [ + 86, + 413, + 905, + 512 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.1.2 Linux Platform Tools. auditid [68] is a native collection tool supported by the Linux kernel, which is responsible for writing audit logs to disk and monitoring a variety of auditable events such as system calls, file accesses, and modifications. sysdig [106] relies on the kernel module to achieve monitoring and data collection of the system. sysdig focuses on system runtime monitoring and fault troubleshooting, which is also widely used in containers and cloud-native environments. CamFlow [170] designs a self-contained, easily maintainable implementation of whole-system provenance based on Linux Security Module, NetFilter, and other kernel facilities. Furthermore, it provides a mechanism to adapt the captured data provenance to applications and can be integrated across distributed systems. Tracee [210] takes advantage of the extended Berkeley Packet Filter (eBPF) framework to observe systems efficiently. It uses eBPF to tap into systems and expose that information as events. DataTracker [200] is an open-source data provenance collection tool using dynamic instrumentation. It is able to identify data provenance relations of unmodified binaries without access to or knowledge of the source codes. 
Inspector [206] is a Portable Operating System Interface (POSIX)-compliant data provenance library for shared-memory multi-threaded applications. It is implemented as a parallel provenance algorithm on a concurrent provenance graph. AutoLog [94] generates runtime log sequences by analyzing source codes and does not need to execute any programs. It can efficiently produce log datasets (e.g., over 10,000 messages/min on Java projects) and has the flexibility to adapt to several scenarios. eAudit [193] is a scalable and easily deployable data collection tool. eAudit relies on the eBPF framework built into recent Linux versions, making it work out of the box on most Linux distributions.", + "bbox": [ + 86, + 523, + 907, + 855 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.1.3 Cloud Platform Tools. Although some collection tools in Windows and Linux platforms such as auditd [68], sysdig [106], and Tracee [210] can be applied in cloud computing environments, cloud-native scenarios introduce different challenges compared with Windows or Linux platforms. First,", + "bbox": [ + 86, + 863, + 911, + 915 + ], + "page_idx": 6 + }, + { + "type": "header", + "text": "Deep Learning-based Intrusion Detection Systems: A Survey", + "bbox": [ + 90, + 81, + 504, + 95 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "1:7", + "bbox": [ + 884, + 83, + 907, + 94 + ], + "page_idx": 6 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 479, + 933, + 907, + 947 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "there are many different types of components such as containers, microservices, and Kubernetes (K8S) clusters in cloud platforms, each of which generates its own logs with varying formats and contents. Additionally, components are characterized by dynamic expansion and contraction, making it hard to capture complete log data. To address these challenges, Chen et al. 
[27] design a cloud log collection architecture on the basis of K8S, which is a central platform based on cloud-native technology. Josef et al. [87] propose a log collection and analysis tool operated as Software as a Service (SaaS) in the cloud environment based on K8S technology, aiming to provide comprehensive logs across all microservices. saBPF [129] is an extension tool of eBPF, aiming to deploy fully-configurable, high-fidelity, system-level audit mechanisms at the granularity of containers. saBPF is further developed with a proof-of-concept IDS and access control mechanism to demonstrate its practicability. ISDC [158] is designed to eliminate the bottleneck between network infrastructure (where data is generated) and security application servers (where data is consumed), prioritizing specific flows to effectively optimize resource consumption.", + "bbox": [ + 90, + 116, + 907, + 333 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.1.4 Cross-platform Tools. To effectively detect intrusions, an intuitive idea is to incorporate log data from various platforms to obtain a global view of the running system. DTrace [66] is a real-time dynamic tracing framework for troubleshooting kernel and application problems on production systems. It supports many platforms, including Linux, Windows, Solaris, macOS, FreeBSD, NetBSD, etc. Support for Provenance Auditing in Distributed Environments (SPADE) [61] develops a novel provenance kernel that mediates between the producers and consumers of provenance information, and handles the persistent storage of records. 
It supports heterogeneous aggregation of system-level data provenance for data analysis across multiple platforms.", + "bbox": [ + 90, + 341, + 907, + 475 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.2 Log Storage", + "text_level": 1, + "bbox": [ + 90, + 488, + 251, + 504 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "The subsequent step of log collection is to store these logs [11, 40]. We will introduce two essential components for data storage: log storage systems and compression algorithms for these systems.", + "bbox": [ + 90, + 506, + 907, + 541 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.2.1 Log Storage Systems. The two most commonly used log storage systems are ELK [5] and Loki [15]. ELK is a powerful log management solution consisting of three open-source software components: Elasticsearch [48], Logstash [47], and Kibana [49]. Elasticsearch [48] is the leading distributed, RESTful search and analytics data engine designed for speed and scalability. Logstash [47] is a server-side data preprocessing pipeline to collect and integrate data from multiple sources. Kibana [49] is a data analytics and visualization platform designed for both speed and scale. ELK is powerful enough to be applied in enterprise scenarios; however, its performance comes at a price. ELK sacrifices ease of configuration and installation, and may simultaneously introduce severe runtime overhead for its hosts. In contrast, Loki [15] is a lightweight logging system with low resource overhead developed by Grafana Labs. It is designed with simple operations and efficient storage. Instead of indexing the full data as ELK does, Loki mainly creates indices based on log labels. Moreover, Loki integrates well with open-source monitoring and visualization tools such as Prometheus [174] and Grafana [112]. 
Integrating these two tools enables Loki to construct a complete monitoring and log analysis platform for information systems.", + "bbox": [ + 90, + 550, + 907, + 781 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.2.2 Log Compression Algorithms. Logs are generated rapidly and consume significant storage. For example, it has been measured that a browser can produce about 10 GB of log data each day [40]. Such oversized data should be compressed before storage. Log compression algorithms can be categorized into two types: general-purpose algorithms and those specifically adapted to log data.", + "bbox": [ + 90, + 790, + 907, + 856 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "General Compression Algorithms. General compression algorithms refer to algorithms that reduce the size of data (e.g., log data) by handling token-level or byte-level duplicates in the data. General compression algorithms can be classified into three categories based on their principles [242]:", + "bbox": [ + 90, + 865, + 907, + 915 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "1:8", + "bbox": [ + 92, + 84, + 113, + 94 + ], + "page_idx": 7 + }, + { + "type": "header", + "text": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao", + "bbox": [ + 236, + 83, + 907, + 95 + ], + "page_idx": 7 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 90, + 933, + 512, + 945 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/9040e5a3f950e1068d51ac6479bef0d55230b78324d6f4f477c0fa04f8c2b271.jpg", + "table_caption": [ + "Table 2. Well-acknowledged general compression algorithms for log data." + ], + "table_footnote": [], + "table_body": "
TypeWell-acknowledged compression algorithms
Dictionary-basedLZ77 in gzip [55], LZMA in 7zip (lzma) [171], and LZSS in quickLZ [177]
Sorting-basedBWT in bzip2 [194] and ST in szip [190]
Statistical-basedPPMD in 7zip (ppmd) and DMC in ocamyd [191]
", + "bbox": [ + 176, + 145, + 820, + 220 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Dictionary-based Compression: It records repeated data as keys and replaces these data with their corresponding keys.", + "- Sorting-based Compression: It sorts data to enable strategies that require ordering features.", + "- Statistical-based Compression: It exploits statistical techniques to learn and predict the possible next token for existing tokens. The data is thus compressed as a statistical model." + ], + "bbox": [ + 119, + 244, + 907, + 327 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Table 2 presents representative algorithms of the above three types. Due to the indeterminacy of statistical techniques, statistical-based compression algorithms may introduce losses in compression. Yet the other two types of algorithms are generally lossless. By validating 9 log files and 2 natural language files, a study [242] shows that some general compression algorithms can achieve high compression ratios for log data and log data is even easier to compress than natural language data.", + "bbox": [ + 86, + 331, + 909, + 415 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Tailored Compression Algorithms. Different from natural language data, log data usually has specific structures and formal expressions that help further compression. Yao et al. [243] propose LogBlock, which obtains small log blocks before compression and then uses a generic compressor to compress logs. Liu et al. [135] propose Logzip, which employs clustering algorithms to iteratively extract templates from raw logs and then obtain coherent intermediate representations for compressing logs. Rodrigues et al. [186] propose the lossless compression tool CLP, aiming to quickly retrieve log data while meeting compression requirements. CLP proposes to combine domain-specific compression and search with a generic lightweight compression algorithm. 
Li et al. [123] conduct empirical research on log data and propose LogShrink to overcome their observed limitations by leveraging the commonality and variability of log data. As mentioned above, LogBlock [243] reduces duplicate logs by preprocessing log headers and rearranging log contents, thereby improving the compression ratio of small log files. LogReducer [247] is a framework that combines log hotspot identification and online dynamic log filtering. Its non-intrusive design significantly reduces log storage and runtime overhead. $\\mu$Slope [217] is a compression and search method for semi-structured log data. It achieves efficient storage and query performance through data segmentation, pattern extraction, and index-free design. Denum [249] significantly improves log compression rates by optimizing the compression of digital tokens in logs. It is an efficient log compression tool suitable for scenarios that require saving storage space or transmission bandwidth.", + "bbox": [ + 86, + 423, + 909, + 738 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "4.3 Log Parsing", + "text_level": 1, + "bbox": [ + 88, + 752, + 253, + 769 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Log data often originates from many different devices such as terminals, sensors, and network devices. To analyze such data, log parsers are employed to format it into a structured and unified form. Log parsing is usually executed by data classification and template extraction. Data classification classifies log data into several groups. Each group constitutes a template for extracting features from log data and constructing the structured logs. As shown in Figure 5, the existing log parsers can be taxonomized into 3 categories: clustering-based, frequency-based, and heuristic-based parsers.", + "bbox": [ + 86, + 772, + 909, + 875 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "4.3.1 Clustering-based Parsing. 
Clustering-based parsers group log data using clustering algorithms. Xiao et al. [226] propose LPV, which employs a hierarchical clustering algorithm", + "bbox": [ + 86, + 881, + 907, + 916 + ], + "page_idx": 8 + }, + { + "type": "header", + "text": "Deep Learning-based Intrusion Detection Systems: A Survey", + "bbox": [ + 90, + 81, + 504, + 95 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "1:9", + "bbox": [ + 884, + 83, + 907, + 95 + ], + "page_idx": 8 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 479, + 933, + 907, + 947 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/e247a0d348b36b7d21437e7121af02634601f140eb5eb301754a9955423acc68.jpg", + "image_caption": [ + "Fig. 5. Taxonomy of data parsing." + ], + "image_footnote": [], + "bbox": [ + 129, + 116, + 874, + 220 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "to incrementally group logs based on Euclidean distance. Hamooni et al. [74] present a rapid log pattern recognition approach named LogMine. It is implemented in the map-reduce framework for distributed platforms to process millions of log messages in seconds. LogCluster [130] reduces the number of logs that need to be manually checked and improves the accuracy of problem identification through log clustering and the use of knowledge bases. METING [32] provides a robust and efficient log parsing method through frequent n-gram mining and a flexible log grouping strategy, which can effectively process various types of log data.", + "bbox": [ + 86, + 280, + 909, + 400 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "4.3.2 Frequency-based Parsing. Frequency-based parsers discover patterns that exceed a frequency threshold and employ the mined patterns to parse logs. Sedki et al. 
[192] propose the log parsing tool ULP, which combines string matching and local frequency analysis to efficiently parse large log files. Dai et al. [35] propose Logram, which utilizes an n-gram dictionary for log parsing. For n-grams with frequencies below the threshold, Logram recursively falls back to (n-1)-grams until a list of uncommon 2-grams is obtained. To mitigate the parameter sensitivity issue in log parsers, Dai et al. [36] further propose an entropy-based log parser, PILAR, which balances parsing accuracy and efficiency. Xu et al. [229] propose a hybrid log parsing model called Hue, which performs parsing through user-adaptive methods. Prefix-Graph [30] is an efficient, adaptive, and universal log parsing method that can stably extract log templates without relying on domain knowledge or manual parameter tuning.",
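The core intuition of frequency-based parsing can be sketched in a few lines: tokens that recur at the same position across many logs are constants of the template, while rare tokens are parameters. This toy version uses positional 1-gram counts rather than Logram's n-gram dictionary; the function name and threshold are illustrative assumptions.

```python
from collections import Counter

def extract_templates(logs, threshold=2):
    """Frequency-based parsing sketch: a token that appears at least
    `threshold` times at the same position across logs is treated as a
    template (constant) token; rarer tokens become parameters ("<*>")."""
    counts = Counter()
    tokenized = [log.split() for log in logs]
    for tokens in tokenized:
        for pos, tok in enumerate(tokens):
            counts[(pos, tok)] += 1
    return [
        " ".join(tok if counts[(pos, tok)] >= threshold else "<*>"
                 for pos, tok in enumerate(tokens))
        for tokens in tokenized
    ]

logs = [
    "Accepted password for alice from 10.0.0.1",
    "Accepted password for bob from 10.0.0.2",
    "Accepted password for carol from 10.0.0.3",
]
print(extract_templates(logs)[0])
```

All three logs map to the single template `Accepted password for <*> from <*>`, since the user names and IP addresses each occur only once.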
Logan [3] achieves efficient and scalable log parsing through distributed processing, LCS matching, dynamic matching tolerance, and periodic merging. USTEP [214] is an online log parsing method based on an evolutionary tree structure that can discover and encode new parsing rules. It achieves constant parsing time and can efficiently parse raw log messages in a streaming manner.", + "bbox": [ + 86, + 598, + 909, + 833 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "5 INTRUSION DETECTION", + "text_level": 1, + "bbox": [ + 88, + 844, + 362, + 859 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "The intrusion detection stage uncovers intrusions relying on the semantic-level information. This section classifies and summarizes the mainstream graph summarization (Section 5.1), attack detection (Section 5.2), and attack investigation (Section 5.3).", + "bbox": [ + 86, + 865, + 909, + 915 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "1:10", + "bbox": [ + 90, + 83, + 119, + 95 + ], + "page_idx": 9 + }, + { + "type": "header", + "text": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao", + "bbox": [ + 236, + 81, + 907, + 95 + ], + "page_idx": 9 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 86, + 933, + 514, + 947 + ], + "page_idx": 9 + }, + { + "type": "table", + "img_path": "images/dcec85fafc03c55d917b54a234d99e02c0338d0d6ed1ae0535780c1185341cbd.jpg", + "table_caption": [ + "Table 3. Overview of graph summarization approaches." + ], + "table_footnote": [], + "table_body": "
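The fixed-depth tree idea behind heuristic parsers like Drain can be approximated with a two-level routing heuristic: group logs first by token count, then by first token, and wildcard the positions where grouped logs disagree. This is a simplified stand-in under those assumptions, not Drain's actual implementation.

```python
def drain_like_group(logs):
    """Heuristic parsing sketch loosely following Drain: route each log
    by (token count, first token), then merge logs that land in the
    same leaf into one template, turning disagreeing positions into
    wildcards. A two-level stand-in for Drain's fixed-depth tree."""
    leaves = {}
    for log in logs:
        tokens = log.split()
        key = (len(tokens), tokens[0])
        if key not in leaves:
            leaves[key] = list(tokens)
        else:
            leaves[key] = [a if a == b else "<*>"
                           for a, b in zip(leaves[key], tokens)]
    return {key: " ".join(t) for key, t in leaves.items()}

templates = drain_like_group([
    "Connection from 10.0.0.1 closed",
    "Connection from 10.0.0.9 closed",
    "Failed login for root",
])
print(sorted(templates.values()))
```

Grouping by length and first token keeps unrelated messages apart, so the two connection logs merge into one template while the login log keeps its own.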
<table><tr><td>Mode</td><td>Approach</td><td>Release</td><td>Baseline</td><td>Requirement</td></tr>
<tr><td rowspan="7">Offline</td><td>ProvCompress [228]</td><td>2011</td><td>No Summarization</td><td>None</td></tr>
<tr><td>BEEP [115]</td><td>2013</td><td>No Summarization</td><td>Instrumentation</td></tr>
<tr><td>LogGC [116]</td><td>2013</td><td>BEEP + No Summarization</td><td>Instrumentation</td></tr>
<tr><td>CPR + PCAR [234]</td><td>2016</td><td>No Summarization</td><td>None</td></tr>
<tr><td>FD + SD [89]</td><td>2018</td><td>CPR + PCAR</td><td>None</td></tr>
<tr><td>LogApprox [152]</td><td>2020</td><td>GC + CPR + DPR</td><td>None</td></tr>
<tr><td>TeRed [122]</td><td>2025</td><td>LogGC + CPR + PCAR + F-DPR + NodeMerge</td><td>None</td></tr>
<tr><td rowspan="7">Online</td><td>ProTracer [143]</td><td>2016</td><td>BEEP + No Summarization</td><td>Instrumentation</td></tr>
<tr><td>NodeMerge [205]</td><td>2018</td><td>No Summarization</td><td>None</td></tr>
<tr><td>Winnower [77]</td><td>2018</td><td>No Summarization</td><td>None</td></tr>
<tr><td>GS + SS [267]</td><td>2021</td><td>FD + SD</td><td>None</td></tr>
<tr><td>SEAL [53]</td><td>2021</td><td>FD</td><td>None</td></tr>
<tr><td>FAuST [97]</td><td>2022</td><td>CPR + DPR</td><td>None</td></tr>
<tr><td>AudiTrim [202]</td><td>2024</td><td>CPR + GS + F-DPR</td><td>None</td></tr></table>
", + "bbox": [ + 102, + 145, + 895, + 375 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "5.1 Graph Summarization", + "text_level": 1, + "bbox": [ + 88, + 391, + 349, + 406 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "It is illustrated that stealthy malware will inevitably interact with the underlying OS and be captured by provenance monitoring systems [216], which is the reason why PIDS (a form of DL-IDS) has worked and flourished recently. Log data generated from provenance monitoring systems is referred to as data provenance as mentioned. Offering advantages in high precision, data provenance sacrifices memory performance to record all trails of events from their creations to their current states, even some of which are trivial. Unlike network traffic and application logs, data provenance is fine-grained, detailed, and rich in semantics. As a result, the token-level or byte-level log storage systems (Section 4.2.1) and log compression algorithms (Section 4.2.2) are insufficient to handle the memory efficiency of data provenance due to the absence of semantic-level information.", + "bbox": [ + 86, + 411, + 909, + 559 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "To this end, graph summarization is investigated to further reduce the size of log data semantically. In graph summarization, data provenance is transformed into a provenance graph, of which the causal relations are utilized to build the semantic understanding of system activities. Referring to the definition of data provenance (Definition 3.7), provenance graph is defined as follows:", + "bbox": [ + 86, + 561, + 909, + 627 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Definition 5.1. (Provenance Graph). Provenance graph is a representation of a collection of data provenance with causal relations. 
It is a directed acyclic graph $G = \\langle V, E \\rangle$ where nodes $V$ are system entities and edges $E$ are system events.", + "bbox": [ + 86, + 634, + 907, + 683 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Provenance graphs allow graph summarization approaches to reduce the size of log data by confidently removing irrelevant events, aggregating similar events, gathering similar execution entities, etc. This categorizes them as a type of lossy reduction, yet the aforementioned log storage and compression are usually lossless (except for statistical-based log compression). We note that some surveys (e.g., [96, 270]) may interchangeably use graph summarization and log compression to identify the approaches that reduce the size of log data. In this work, we explicitly distinguish them and refer to the lossless reduction as compression and the opposite one as summarization. Table 3 presents the overview of graph summarization approaches. We classify them into two categories: offline graph summarization and online graph summarization.", + "bbox": [ + 86, + 691, + 909, + 843 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "5.1.1 Offline Graph Summarization. Offline graph summarization requires historical log data to provide global knowledge, which extracts log data from persistent storage, summarizes the data, and pushes back the summarized data to the persistent storage. In 2011, Xie et al. [228] take inspiration from web graphs to summarize provenance graphs. They argue that provenance", + "bbox": [ + 86, + 848, + 909, + 916 + ], + "page_idx": 10 + }, + { + "type": "header", + "text": "Deep Learning-based Intrusion Detection Systems: A Survey", + "bbox": [ + 90, + 81, + 504, + 95 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "1:11", + "bbox": [ + 876, + 84, + 905, + 94 + ], + "page_idx": 10 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. 
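Under this definition, forensic questions reduce to graph traversals. A minimal sketch (with hypothetical entities and events) builds the parent map of a provenance graph and computes the set of nodes a node of interest causally depends on, i.e., the backward trace that underlies the scenario graphs used later for attack investigation.

```python
from collections import defaultdict

def backward_slice(events, node):
    """Given provenance events (src, op, dst), meaning `src caused dst`,
    return every node that `node` causally depends on (its ancestors
    in the provenance graph)."""
    parents = defaultdict(set)
    for src, _op, dst in events:
        parents[dst].add(src)
    seen, stack = set(), [node]
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

# Hypothetical audit events: a download is executed and phones home,
# while an unrelated sshd branch writes a log file.
events = [
    ("firefox", "write", "/tmp/payload"),
    ("/tmp/payload", "exec", "malware"),
    ("malware", "connect", "10.6.6.6:443"),
    ("sshd", "write", "/var/log/auth.log"),
]
print(backward_slice(events, "malware"))
```

The backward trace of `malware` contains only its causal ancestors (`/tmp/payload` and `firefox`); the unrelated `sshd` branch is excluded, which is exactly why scenario graphs are subgraphs of the full provenance graph.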
Publication date: October 2025.", + "bbox": [ + 479, + 933, + 907, + 947 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "graphs have similar organizational structure and characteristics to web graphs, such as locality, similarity, and consecutiveness. BEEP [115] is developed based on the fact that a long-running execution can be partitioned into individual units. BEEP reverse engineers application binaries and instructions to perform selective logging for unit boundaries and unit dependencies. LogGC [116] is a summarized audit log system that can be invoked at any time during the system execution. Xu et al. [234] propose an aggregation algorithm PCR that preserves event dependencies during log data reduction. They further propose an algorithm named PCAR that utilizes domain knowledge to conduct graph summarization. Hossain et al. [89] propose two dependency-preserving graph summarization approaches, FD and SD. FD is allowed to keep backward and forward forensic analysis results. SD preserves the results of common forensic analysis, which runs backward to find the entry points of intrusions and then runs forward from these points to unveil their impacts. LogApprox [152] aims to summarize the most space-intensive events found in logs, namely file I/O activity, which can account for up to $90\\%$ of the log content. TeRed [122] employs unit tests to learn the system's normal behavior patterns for reducing provenance graphs, allowing it not to impact attack detection and investigation.", + "bbox": [ + 90, + 116, + 905, + 366 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "5.1.2 Online Graph Summarization. Online graph summarization performs real-time summarization for continually coming provenance graphs, rather than dealing with a static provenance graph. ProTracer [143] alternates between system event logging and unit-level taint propagation. 
It has a lightweight kernel module and user space daemon for concurrent, out-of-order event processing. NodeMerge [205] is a template-based graph summarization system for online event storage. It can directly work on the system-dependent provenance streams and compress data provenance via read-only file access patterns. Winnower [77] is an extensible audit-based cluster monitoring system. For tasks replicated across nodes in distributed applications, it can define a model over audit logs to concisely summarize the behaviors of multiple nodes, thus eliminating the necessity of transmitting redundant audit records to the central monitoring node. The approach proposed by Zhu et al. [267] includes two real-time graph summarization strategies. The first strategy maintains global semantics, which identifies and removes redundant events that do not affect global dependencies. The second strategy is based on suspicious semantics. SEAL [53] is a novel graph summarization approach for causal analysis. Based on information-theoretic observations of system event data, it achieves lossless compression and supports real-time historical event retrieval. FAuST [97] is a logging daemon that performs transparent and modular graph summarization directly on system endpoints. FAuST consists of modular parsers that parse different audit log formats to create a unified in-memory provenance graph representation. AudiTrim [202] is an efficient graph summarization approach that reduces log sizes without impacting user experiences, which allows adaptable deployment on different operating systems.", + "bbox": [ + 90, + 391, + 905, + 725 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "5.2 Attack Detection", + "text_level": 1, + "bbox": [ + 90, + 752, + 294, + 768 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Attack detection is located at the central position of DL-IDS. 
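The aggregation idea shared by several of these systems can be sketched as a streaming pass that merges repeated identical events into a single counted edge. This toy version ignores the interleaving checks that a dependency-preserving reducer such as CPR performs; the function name and events are illustrative assumptions.

```python
def aggregate_stream(events):
    """Online summarization sketch: repeated events between the same
    pair of entities with the same operation are merged into a single
    edge carrying a count, preserving edge order of first appearance."""
    merged = {}
    order = []
    for src, op, dst in events:
        key = (src, op, dst)
        if key not in merged:
            merged[key] = 0
            order.append(key)
        merged[key] += 1
    return [(src, op, dst, merged[(src, op, dst)]) for src, op, dst in order]

# A process reads the same file 1000 times, then executes a binary.
events = [("bash", "read", "/etc/passwd")] * 1000 + [("bash", "exec", "ls")]
summary = aggregate_stream(events)
print(len(events), "->", len(summary))
```

1001 raw events collapse to 2 counted edges; real systems add checks so that merging never alters the causal dependencies needed for forensic analysis.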
The objective of attack detection is to accurately identify malicious system events in log data while minimizing false alarms on normal system behaviors. Based on the types of log data, we categorize the attack detection approaches into audit log-based, application log-based, network traffic-based, and hybrid log-based detectors.",
<table><tr><td>Data Type</td><td>Taxonomy</td><td>Approach</td><td>Release Time</td><td>Base Model</td><td>Detection Style</td><td>Detection Granularity</td></tr>
<tr><td rowspan="12">Audit Log</td><td rowspan="4">Traditional Learning</td><td>StreamSpot [145]</td><td>2018</td><td>K-Medoids</td><td>Online</td><td>Subgraph</td></tr>
<tr><td>Unicorn [76]</td><td>2020</td><td>K-Medoids</td><td>Online</td><td>Node, Subgraph</td></tr>
<tr><td>DistDet [42]</td><td>2023</td><td>HST</td><td>Online</td><td>Subgraph</td></tr>
<tr><td>Velox [18]</td><td>2025</td><td>FCN</td><td>Online</td><td>Node</td></tr>
<tr><td rowspan="8">Graph Neural Network</td><td>ShadeWatcher [250]</td><td>2022</td><td>TransR</td><td>Offline</td><td>Node</td></tr>
<tr><td>threaTrace [219]</td><td>2022</td><td>GraphSAGE</td><td>Online</td><td>Node</td></tr>
<tr><td>ProGrapher [237]</td><td>2023</td><td>graph2vec</td><td>Online</td><td>Subgraph</td></tr>
<tr><td>MAGIC [99]</td><td>2024</td><td>GAT</td><td>Online</td><td>Node, Subgraph</td></tr>
<tr><td>Flash [182]</td><td>2024</td><td>GraphSAGE</td><td>Online</td><td>Node</td></tr>
<tr><td>R-caid [65]</td><td>2024</td><td>GNN</td><td>Offline</td><td>Node</td></tr>
<tr><td>Argus [230]</td><td>2024</td><td>MPNN, GRU</td><td>-</td><td>Node</td></tr>
<tr><td>TAPAS [252]</td><td>2025</td><td>LSTM-GRU</td><td>Online</td><td>Task</td></tr>
<tr><td rowspan="14">Application Log</td><td rowspan="3">Traditional Learning</td><td>Wei et al. [231]</td><td>2009</td><td>PCA, TF-IDF</td><td>-</td><td>Log Entry</td></tr>
<tr><td>Bodik et al. [19]</td><td>2010</td><td>Logistic Regression</td><td>Online</td><td>Log Entry</td></tr>
<tr><td>AMOD [43]</td><td>2018</td><td>SVM HYBRID</td><td>Online</td><td>Log Entry</td></tr>
<tr><td rowspan="11">Sequence Neural Network</td><td>DeepLog [45]</td><td>2017</td><td>LSTM</td><td>Online</td><td>Log Entry</td></tr>
<tr><td>LogRobust [257]</td><td>2019</td><td>Attention LSTM</td><td>-</td><td>Log Entry</td></tr>
<tr><td>LogAnomaly [151]</td><td>2019</td><td>template2vec, LSTM</td><td>Online</td><td>Log Entry</td></tr>
<tr><td>LogC [246]</td><td>2020</td><td>LSTM</td><td>Online</td><td>Log Entry</td></tr>
<tr><td>NeuralLog [113]</td><td>2021</td><td>BERT</td><td>-</td><td>Log Entry</td></tr>
<tr><td>PLELog [238]</td><td>2021</td><td>Attention GRU</td><td>Online</td><td>Log Entry</td></tr>
<tr><td>SpikeLog [175]</td><td>2023</td><td>DSNN</td><td>-</td><td>Log Entry</td></tr>
<tr><td>LogCraft [254]</td><td>2024</td><td>Meta Learning</td><td>-</td><td>Log Entry</td></tr>
<tr><td>Tweezers [33]</td><td>2024</td><td>GATv2, BERTweet</td><td>Online</td><td>Log Entry</td></tr>
<tr><td>LogSer [23]</td><td>2024</td><td>BERT</td><td>Online</td><td>Log Entry</td></tr>
<tr><td>LogDLR [265]</td><td>2025</td><td>Transformer, SBERT</td><td>Online</td><td>Log Entry</td></tr>
<tr><td rowspan="22">Traffic Log</td><td rowspan="7">Traditional Learning</td><td>NetPro [121]</td><td>2017</td><td>Merkle Hash Tree</td><td>Online</td><td>Route</td></tr>
<tr><td>CATH [72]</td><td>2019</td><td>Cusp Model</td><td>Online</td><td>Flow</td></tr>
<tr><td>Whisper [56]</td><td>2021</td><td>K-Means</td><td>-</td><td>Host</td></tr>
<tr><td>SigML++ [211]</td><td>2023</td><td>ANN</td><td>-</td><td>Encrypted Log</td></tr>
<tr><td>OADSD [253]</td><td>2023</td><td>Isolation Forest</td><td>Online</td><td>Packet</td></tr>
<tr><td>LtRFT [204]</td><td>2023</td><td>LambdaMART</td><td>Offline</td><td>Packet</td></tr>
<tr><td>AGC [225]</td><td>2025</td><td>Clustering</td><td>-</td><td>Packet</td></tr>
<tr><td rowspan="15">Graph and Sequence Neural Network</td><td>Kitsune [159]</td><td>2018</td><td>AutoEncoder</td><td>Online</td><td>Packet</td></tr>
<tr><td>MT-FlowFormer [260]</td><td>2022</td><td>Transformer</td><td>-</td><td>Flow</td></tr>
<tr><td>I²RNN [199]</td><td>2022</td><td>I²RNN</td><td>-</td><td>Packet</td></tr>
<tr><td>ERNN [262]</td><td>2022</td><td>ERNN</td><td>-</td><td>Flow</td></tr>
<tr><td>Euler [108]</td><td>2023</td><td>GNN, RNN</td><td>-</td><td>Flow</td></tr>
<tr><td>pVoxel [58]</td><td>2023</td><td>-</td><td>-</td><td>Packet, Flow</td></tr>
<tr><td>NetVigil [91]</td><td>2024</td><td>E-GraphSage</td><td>-</td><td>Flow</td></tr>
<tr><td>Exosphere [57]</td><td>2024</td><td>CNN</td><td>-</td><td>Packet</td></tr>
<tr><td>DFNet [263]</td><td>2024</td><td>DFNet</td><td>-</td><td>Packet</td></tr>
<tr><td>RFH-HELAD [264]</td><td>2024</td><td>RPGAN, Deep kNN</td><td>-</td><td>Packet</td></tr>
<tr><td>ReTrial [259]</td><td>2024</td><td>Bayesian Inference</td><td>Online</td><td>Flow</td></tr>
<tr><td>HEN [221]</td><td>2024</td><td>AE-LSTM</td><td>-</td><td>Packet, Flow</td></tr>
<tr><td>TCG-IDS [222]</td><td>2025</td><td>TGN</td><td>Online</td><td>Flow</td></tr>
<tr><td>A-NIDS [251]</td><td>2025</td><td>Stacked CTGAN</td><td>Online</td><td>Flow</td></tr>
<tr><td>GTAE-IDS [62]</td><td>2025</td><td>Graph Transformer</td><td>Online</td><td>Packet, Flow</td></tr>
<tr><td rowspan="2">Hybrid</td><td rowspan="2">Hybrid</td><td>OWAD [75]</td><td>2024</td><td>Autoencoder</td><td>Online</td><td>Hybrid</td></tr>
<tr><td>FG-CIBGC [165]</td><td>2025</td><td>DisenGCN, ICL</td><td>-</td><td>Hybrid</td></tr></table>
", + "bbox": [ + 94, + 145, + 901, + 887 + ], + "page_idx": 12 + }, + { + "type": "header", + "text": "Deep Learning-based Intrusion Detection Systems: A Survey", + "bbox": [ + 90, + 81, + 502, + 95 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "1:13", + "bbox": [ + 876, + 84, + 905, + 94 + ], + "page_idx": 12 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 479, + 933, + 905, + 947 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "5.2.1 Audit Log-based Detectors. Audit logs are collected from hosts and thus detectors based on them are basically referred to as HIDS. Audit logs provide fine-grained information through provenance graphs to depict system behaviors. Depending on the learning techniques, audit log-based detectors can be further classified as traditional learning and graph neural network.", + "bbox": [ + 86, + 118, + 909, + 184 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Traditional Learning. Traditional learning-based detectors refer to those that utilize naive machine learning techniques. StreamSpot [145] is a clustering-based anomaly detection that tackles challenges in heterogeneity and streaming nature. Unicorn [76] is a real-time intrusion detector that efficiently constructs a streaming histogram to represent the history of system executions. The counting results within the histogram are updated immediately if new edges (or events) occur. DistDet [42] is a distributed detection system that builds host models in the client side, filters false alarms based on their semantics, and derives global models to complement the host models. Velox [18] derives from Orthrus and replaces the complex TGN-based encoder with a simple fully-connected network (FCN), leading to a lightweight and efficient neural network.", + "bbox": [ + 86, + 191, + 909, + 343 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Graph Neural Network. 
GNNs have been demonstrated to perform well in processing provenance graphs [99, 182, 219, 237, 250]. ProGrapher [237] extracts temporally ordered provenance graph snapshots from the ingested logs and applies whole-graph embedding and sequence-based learning to capture their rich structural properties. The key GNN technique leveraged by ProGrapher is graph2vec. ShadeWatcher [250] is a recommendation-guided intrusion detector using provenance graphs. It transfers the recommendation concept of user-item interactions to the security concept of system-entity interactions and analyzes cyber threats in an automated and adaptive manner. threaTrace [219] is an online approach dedicated to detecting host-based threats at the node level. Its GNN model is a tailored GraphSAGE [73] for learning rich contextual information in provenance graphs. MAGIC [99] leverages the Graph Attention Network (GAT) [213] as its graph representation module. MAGIC employs masked graph representation learning to incorporate the capability of pretraining. It can adapt to concept drift with minimal computational overhead, making it applicable to real-world online APT detection. Flash [182] is a comprehensive and scalable approach on data provenance graphs that overcomes limitations in accuracy, practicality, and scalability. Flash incorporates a novel adaptation of a GNN-based contextual encoder to efficiently encode both local and global graph structures into node embeddings. R-caid [65] is the first to incorporate root cause analysis into PIDS. Before training GNNs, R-caid links nodes to their root causes to build a new graph, intending to protect the detector against mimicry and evasion attacks. Argus [230] finds that the performance of prior IDS is questionable at large scale. It thus devises a form of discrete temporal graph and uses encoder-decoder unsupervised learning to detect different types of attacks.
TAPAS [252] leverages a stacked LSTM-GRU model and a task-guided segmentation algorithm to reduce the spatiotemporal dimensions of APT detection, achieving efficient, low-cost, and accurate detection. In addition to the aforementioned detectors, recent researchers have developed numerous useful tools for better understanding audit logs, such as a data visualization analysis tool [133] and a counterfactual-driven attack explanation generator [223].",
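A single message-passing step of the GraphSAGE-style encoders used by detectors such as threaTrace and Flash can be sketched as follows: each node's new embedding combines its own features with the mean of its neighbors' features. The toy features, adjacency, and placeholder identity weights are assumptions for illustration, not a trained detector.

```python
import numpy as np

def sage_layer(features, adj, w_self, w_neigh):
    """One GraphSAGE-style mean-aggregation step: concatenate-free
    variant that sums the transformed self features with the
    transformed mean of neighbor features, then applies ReLU."""
    n = features.shape[0]
    agg = np.zeros_like(features)
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j]]
        if neigh:
            agg[i] = features[neigh].mean(axis=0)
    return np.maximum(0.0, features @ w_self + agg @ w_neigh)

# Tiny 3-node provenance graph: process -- file -- socket.
feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
w = np.eye(2)  # placeholder weights; real models learn these
print(sage_layer(feats, adj, w, w))
```

Stacking several such layers lets each node's embedding absorb context from multi-hop neighborhoods, which is what lets node-level detectors distinguish a benign process from one embedded in a malicious causal context.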
Publication date: October 2025.", + "bbox": [ + 86, + 933, + 514, + 947 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "update the stacked generalization detection model to efficiently detect web code injection attacks and obtain malicious queries to update the web application firewall (WAF) library.", + "bbox": [ + 86, + 118, + 905, + 151 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Sequence Neural Network. Due to the similarity between application logs and natural language texts, sequence neural networks such as Recurrent Neural Network [86] and Transformer [39, 212] are widely employed. DeepLog [45] employs LSTM to model system logs as natural language sequences. It is able to automatically learn benign log patterns and detect anomalies when there is a deviation between log patterns and the trained model. LogRobust [257] finds previous methods do not work well under the close-world assumption and utilizes an attention-based LSTM model to handle unstable log events and sequences. LogAnomaly [151] identifies previous studies tend to cause false alarms by using indexes rather than semantics of log templates. Empowered by a novel, simple yet effective method termed template2vec, LogAnomaly is proven to successfully detect both sequential and quantitative log anomalies simultaneously. LogC [246] is a new log-based anomaly detection approach with component-aware analysis. It feeds both log template sequences and component sequences to train a combined LSTM model for detecting anomalous logs. NeuralLog [113] targets the performance caused by log parsing errors such as out-of-vocabulary words and semantic misunderstandings and employ BERT to perform neural representation. PLELog [238] is a semi-supervised anomaly detection approach that can get rid of time-consuming manual labeling and incorporate the knowledge on historical anomalies. 
SpikeLog [175] adopts a weakly supervised approach to train an anomaly score model, targeting the more realistic setting in which a large number of logs are unlabeled. LogCraft [254] is an end-to-end unsupervised log anomaly detection framework based on automated machine learning, which mitigates the cost of understanding datasets and automates multiple attempts at building detection algorithms. Tweezers [33] uses a large language model to identify entities and build a relationship graph, and generates embeddings through graph attention network optimization to achieve security incident detection. LogSer [23] parses logs by preprocessing parameters, splitting logs, tree parsing, and template merging. It then inputs the resulting embeddings into BERT training to detect anomalies, generate reports, and perform incremental updates. LogDLR [265] uses SBERT embeddings and a Transformer autoencoder with domain adversarial training to learn domain-invariant features, detecting anomalies via reconstruction error.",
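The detection principle behind DeepLog-style approaches can be sketched without a neural network: learn which log keys (template IDs) typically follow a given history, and flag a key that falls outside the top-k predictions. DeepLog uses an LSTM over longer histories; a bigram count model is substituted here purely to keep the sketch dependency-free, and the class name and log keys are illustrative.

```python
from collections import Counter, defaultdict

class NextKeyModel:
    """DeepLog-style detection sketch: a key is anomalous if it is not
    among the top-k keys observed to follow the previous key during
    training on benign executions."""
    def __init__(self, top_k=2):
        self.top_k = top_k
        self.following = defaultdict(Counter)

    def fit(self, sequence):
        for prev, cur in zip(sequence, sequence[1:]):
            self.following[prev][cur] += 1

    def is_anomalous(self, prev, cur):
        ranked = [k for k, _ in self.following[prev].most_common(self.top_k)]
        return cur not in ranked

model = NextKeyModel(top_k=2)
# Benign executions: open -> read (-> read) -> close.
model.fit(["open", "read", "read", "close", "open", "read", "close"])
print(model.is_anomalous("open", "read"))   # follows the benign pattern
print(model.is_anomalous("open", "delete")) # deviates from training
```

The same mechanics explain why such detectors raise an alarm on any deviation from learned benign workflows, and also why unstable log templates (the problem LogRobust targets) cause false alarms: an unseen key is indistinguishable from an attack.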
CATH [72] is a catastrophe-theory-based approach for DoS detection in software-defined networks (SDNs), which leverages the selection, normalization, and fusion of statistical flow attributes to model network states. Whisper [56] pursues both high accuracy and high throughput by utilizing frequency-domain features. SigML++ [211] extends SigML into a supervised anomaly detection approach. SigML++ employs Fully Homomorphic Encryption and an Artificial Neural Network (ANN) for detection, enabling execution without decrypting the logs. OADSD [253] achieves task independence and can adapt to the environment over SD-WAN by using an On-demand Evolving Isolation Forest. LtRFT [204] introduces a Learning-To-Rank scheme for mitigating the low-rate DDoS
Kitsune [159] is a plug-and-play NIDS that can efficiently detect attacks on the local network without supervision. It alleviates the problem that network gateways and router devices simply do not have the memory or processing power. MT-FlowFormer [260] is a semi-supervised framework that mitigates both the lack of a mechanism for modeling correlations between flows and the requirement for a large volume of manually labeled data. $\\mathrm{I}^2\\mathrm{RNN}$ [199] is an incremental and interpretable RNN for encrypted traffic classification, which can be efficiently adapted to incremental traffic types. ERNN [262] stands for error-resilient RNN; it is a robust, end-to-end RNN model specially designed against network-induced phenomena. Euler [108] accelerates the most memory-intensive part, the message-passing stage within GNNs, with several concurrently executed replicated GNNs. pVoxel [58] is an unsupervised method that leverages point cloud analysis to reduce false positives for previous NIDS such as Whisper and Kitsune without requiring any prior knowledge of the alarms. NetVigil [91] is specially designed for east-west traffic within data center networks. It utilizes E-GraphSage and contrastive learning techniques to strengthen its resilience. Exosphere [57] detects flooding attacks by analyzing packet length patterns, without inspecting any information in encrypted packets. DFNet [263] is a DDoS prevention paradigm characterized by preference-driven and in-network enforced shaping. RFH-HELAD [264] consists of a $K$-classification model based on a deep neural network and a $(K + 1)$-classification model combining a GAN and Deep kNN for detecting anomalies in network traffic. ReTrial [259] employs an improved graph attention network with Bayesian and EM algorithms to iteratively correct misleading links, enabling robust detection of encrypted malicious traffic.
HEN [221] uses SMOTE to augment data, trains LightGBM, generates explanations via SHAP, trains an AE-LSTM to reconstruct the SHAP values, sets a threshold from training errors, and marks test traffic with excess reconstruction errors as attacks. TCG-IDS [222] is the first self-supervised temporal contrastive GNN for network intrusion detection, capturing spatiotemporal traffic dependencies with high accuracy and low false alarms. A-NIDS [251] uses a shallow fully connected network for real-time detection and a Stacked CTGAN generator to address catastrophic forgetting and the storage cost of old data. GTAE-IDS [62] uses a graph autoencoder with a Transformer encoder and a DNN decoder to learn benign traffic, enabling label-free, near-real-time intrusion detection and new-attack identification.",
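The reconstruction-error principle shared by autoencoder-based NIDS such as Kitsune (and by GTAE-IDS at the graph level) can be sketched with a one-component linear autoencoder, i.e., PCA: fit on benign traffic features, then flag traffic whose reconstruction error exceeds a benign-derived threshold. The synthetic features and threshold rule are illustrative assumptions, not any system's actual design.

```python
import numpy as np

def fit_linear_ae(train):
    """Fit a one-component linear autoencoder (PCA) on benign feature
    vectors. Returns a scoring function (per-sample squared
    reconstruction error) and a threshold derived from benign data."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    comp = vt[0]  # direction of maximum benign variance

    def score(x):
        z = (x - mean) @ comp              # encode to 1 dimension
        recon = mean + np.outer(z, comp)   # decode back to feature space
        return ((x - recon) ** 2).sum(axis=1)

    return score, score(train).max() * 1.1

rng = np.random.default_rng(0)
# Benign traffic features lie near a 1-D subspace plus small noise.
benign = rng.normal(0, 1, size=(200, 1)) @ np.array([[1.0, 2.0, 0.5]])
benign = benign + rng.normal(0, 0.01, size=(200, 3))
score, thresh = fit_linear_ae(benign)
attack = np.array([[5.0, -5.0, 5.0]])  # off the benign subspace
print(score(attack)[0] > thresh)
```

Because the model only learns to reconstruct benign structure, anything off that structure reconstructs poorly, which is why such detectors need no attack labels but remain sensitive to shifts in benign behavior (the normality-shift problem OWAD addresses).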
This process is referred to as attack investigation, which can be done by directly detecting attack scenario graphs [216] or by progressively analyzing the causal relations between compromised nodes to construct attack scenario graphs [9, 41, 100, 232]. Attack scenario graphs are defined, together with scenario graphs, as follows:",
TaxonomyApproachRelease TimeAudit LogApplication LogBase ModelStarting NodeInvestigation Granularity
Traditional LearningProvDetector [216]2020doc2vecPath
BehaviorBaseline [269]2025FastTextPath
Sequence Neural NetworkATLAS [9]2021LSTMGraph
LogTracer [166]2022DeepLogPath
ConLBS [118]2023TransformerGraph
AirTag [41]2023BERTGraph
Graph Neural NetworkLiu et al. [134]2022struc2vecGraph
Kairos [29]2023GNNGraph
TREC [139]2024GNNGraph
Orthrus [100]2025UniMPPath
Slot [176]2025GNNGraph
FeCoGraph [146]2025GCNGraph
", + "bbox": [ + 94, + 145, + 903, + 359 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Definition 5.2. (Scenario Graph). Scenario graph is a subgraph of its given provenance graph, which is constructed by the nodes and edges causally dependent on nodes of interest.", + "bbox": [ + 86, + 381, + 907, + 416 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Definition 5.3. (Attack Scenario Graph). Attack scenario graph is a scenario graph where its nodes of interest are compromised nodes.", + "bbox": [ + 86, + 425, + 907, + 458 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "In the past, attack investigation is conducted by forward analysis and backward analysis [88]. Forward analysis discovers the influence that nodes of interest will cause and backward analysis traces back how nodes of interest are generated. Benefiting from DL techniques, both forward and backward analysis can be achieved by learning patterns of attack scenario graphs. Furthermore, visual analytics techniques have been widely used to assist security analysts in understanding the causal chain of intrusions [256, 261]. Table 5 summarizes the overview of attack investigation approaches. Similar to Section 5.2, we exclude papers [6, 52, 60, 80, 88, 98, 111, 120, 142, 157, 218, 239, 268] slightly relevant to DL for conciseness.", + "bbox": [ + 86, + 465, + 909, + 600 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Traditional Learning. Unlike detecting intrusive nodes, attack scenario graphs are complicated and thus are hard to handle by traditional learning methods. ProvDetector [216] utilizes doc2vec to learn the embedding representation of paths in the provenance graph. Then a density-based detection is deployed to detect abnormal causal paths in the provenance graph. BehaviorBaseline [269] presents a novel learning-based anomaly detection method for large-scale provenance graphs. 
It incorporates dynamic graph processing with adaptive encoding and a tag-propagation framework for real-time detection.", + "bbox": [ + 86, + 606, + 909, + 723 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Sequence Neural Network. Log data takes the form of natural language text or can be transformed into sequences of events, which facilitates the introduction of sequence neural networks. ATLAS [9] is a framework to construct end-to-end attack stories from readily available audit logs, which employs a novel combination of causal analysis and natural language processing. ATLAS exploits LSTM to automatically learn the pattern difference between attack and non-attack sequences. LogTracer [166] is an efficient anomaly tracing framework that combines data provenance and system log detection. An outlier function with an abnormal decay rate is introduced to improve the accuracy. ConLBS [118] combines a contrastive learning framework and a multilayer Transformer network for behavior sequence classification. AirTag [41] employs unsupervised learning to train BERT directly from log texts rather than relying on provenance graphs. AirTag constructs attack scenario graphs by integrating the detected victim nodes.", + "bbox": [ + 86, + 731, + 909, + 916 + ], + "page_idx": 16 + }, + { + "type": "header", + "text": "Deep Learning-based Intrusion Detection Systems: A Survey", + "bbox": [ + 90, + 81, + 504, + 95 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "1:17", + "bbox": [ + 876, + 83, + 907, + 94 + ], + "page_idx": 16 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 479, + 933, + 907, + 947 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Graph Neural Network. To capture causal relations within graphs, GNN is commonly adopted. Liu et al. 
[134] propose an automated attack detection and investigation method via learning the context semantics of the provenance graph. The provenance graph analyzed by struc2vec captures temporal and causal dependencies of system events. Kairos [29] is a practical intrusion detection and investigation tool based on whole-system provenance. Kairos utilizes GNN to analyze system execution history so that it can detect and reconstruct complex APTs. It employs a GNN-based encoder-decoder architecture to learn the temporal evolution of provenance graph structure changes and quantify the abnormal degree of each system event. TREC [139] abstracts the APT attack investigation problem as a tactics/techniques recognition problem. TREC trains its model in a few-shot learning manner by adopting a Siamese neural network. Orthrus [100] identifies Quality of Attribution as the key factor contributing to whether or not the industry adopts IDS. It first detects malicious hosts using a GNN encoder and then reconstructs the attack path through dependency analysis. Slot [176], based on provenance graphs and graph reinforcement learning, uncovers hidden relationships among system behaviors, dynamically adapts to new activities and attack strategies, resists adversarial attacks, and automatically constructs attack chains. FeCoGraph [146] directly processes traffic embeddings through line graphs to adapt to various GNNs, covering more attack scenarios while protecting data privacy.", + "bbox": [ + 86, + 118, + 909, + 402 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "6 BENCHMARK DATASETS", + "text_level": 1, + "bbox": [ + 86, + 416, + 366, + 430 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "DL-IDS relies on high-quality data to train an effective model. 
This section introduces the dimensions of datasets (Section 6.1) and some public datasets widely used in DL-IDS (Section 6.2).", + "bbox": [ + 86, + 437, + 907, + 470 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "6.1 Dimensions of Datasets", + "text_level": 1, + "bbox": [ + 86, + 484, + 360, + 500 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "To illustrate the quality of DL-IDS datasets, the following dimensions are generally used:", + "bbox": [ + 86, + 506, + 839, + 523 + ], + "page_idx": 17 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Benign Scenarios: Benign data should cover benign behaviors and system activities to the greatest extent, enabling DL-IDS to learn patterns of benign behaviors to differentiate malicious behaviors.", + "- Malicious Scenarios: Malicious data ought to incorporate typical attack scenarios while taking into account the diversity of attacks, including short-term and long-term attacks, as well as simple attacks and multi-stage attacks.", + "- Ground-truth Labels: Data should be labeled as benign or malicious. For multi-stage attacks, it is useful to indicate the attack type or the attack stage it belongs to.", + "- Data Granularities: Datasets can be in the form of different granularities. The most accepted one is to provide raw log data. Due to copyright concerns, some replications [41, 99] merely provide post-processed log data without their processing source code.", + "- Operating Systems: The operating system determines the generalizability of the dataset. The more operating systems a dataset covers and the more common they are, the more comprehensively it can evaluate PIDS performance." 
+ ], + "bbox": [ + 119, + 529, + 905, + 759 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "6.2 Public Datasets", + "text_level": 1, + "bbox": [ + 86, + 777, + 283, + 791 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Publicly available datasets greatly facilitate research on DL-IDS. However, some researchers use self-made datasets that are not publicly available, making it difficult for other researchers to reuse their datasets [46]. To address this issue, we collect and organize some open-source datasets for further studies, which are listed in Table 6.", + "bbox": [ + 86, + 797, + 907, + 862 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "LANL Dataset [103] is collected within the internal corporate computer network of Los Alamos National Laboratory. The dataset consists of 58 consecutive days of de-identified data, covering about 165 million events from 12 thousand users. Its data sources include Windows-based", + "bbox": [ + 86, + 863, + 907, + 913 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "1:18", + "bbox": [ + 90, + 84, + 119, + 94 + ], + "page_idx": 17 + }, + { + "type": "header", + "text": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao", + "bbox": [ + 236, + 81, + 907, + 95 + ], + "page_idx": 17 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 86, + 933, + 514, + 947 + ], + "page_idx": 17 + }, + { + "type": "table", + "img_path": "images/fe18d2ce14f1f4df61f7c7755a7441ee7d26f7ed02d5dc381f022683391478c8.jpg", + "table_caption": [ + "Table 6. Overview of public datasets. W, L, F, A, M, and S represent the operating system of Windows, Linux, FreeBSD, Android, Mac, and supercomputer, respectively." + ], + "table_footnote": [], + "table_body": "
DatasetReleaseSizeScenariosLabelFormatSystem
LANL Dataset [103]201512 GB-Yes.txtW
StreamSpot [145]20162 GB1Yes.tsvL
AWSCTD [22]201839 GB-NoSQLiteW
DARPA TC E3 [38]2018366 GB [67]6NoCDMW, L, F, A
DARPA TC E5 [38]20192,699 GB [67]8NoCDMW, L, F, A
DARPA OpTC [37]20201,100 GB [13]-NoeCARW
Unicorn SC [76]2020147 GB2YesCDML
CERT Dataset [63, 131]202087 GB-Yes.csvW
LogChunks [20]202024.1 MB-Yes.txt-
Loghub [266]202077 GB--.txtW, L, M, S
ATLAS [9]20210.5 GB10Yes.txtW
ATLASv2 [184]20231210Yes.txtW
ProvSec [197]2023-11Yes.jsonL
AutoLabel [173]2025136 GB29Yes.jsonL
", + "bbox": [ + 156, + 161, + 839, + 387 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "authentication events, process start and stop events, DNS lookups, network flows, and a set of well-defined red teaming events.", + "bbox": [ + 90, + 431, + 907, + 463 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "StreamSpot dataset [145] is made up of 1 attack and 5 benign scenarios. The attack scenario exploits a Flash vulnerability and gains root access to the visiting host by visiting a malicious drive-by download URL. The benign scenarios are relevant to normal browsing activity, specifically watching YouTube, browsing news pages, checking Gmail, downloading files, and playing a video game. All the scenarios are simulated through 100 automated tasks with the Selenium RC [208].", + "bbox": [ + 90, + 463, + 907, + 547 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "DARPA TC datasets [38] are sourced from the DARPA Transparent Computing (TC) program, identified by the number of engagements from E1 to E5. Among them, DARPA TC E3 is the most widely used. The TC program aims to make current computing systems transparent by providing high-fidelity visibility during system operations across all layers of software abstraction. Unfortunately, DARPA TC datasets are released without labels, and DARPA makes no warranties as to the correctness, accuracy, or usefulness of the datasets.", + "bbox": [ + 90, + 547, + 907, + 645 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "DARPA Operationally Transparent Cyber (OpTC) [37] is a technology transition pilot study funded under Boston Fusion Corporate. The OpTC system architecture is based on the one used in TC program evaluation. In OpTC, every Windows 10 endpoint is equipped with an endpoint sensor that monitors post events, packs them into JSON records, and sends them to Kafka. A translation server aggregates the data into eCAR format and pushes them back to Kafka. 
OpTC scales TC components from 2 to 1,000 hosts. The dataset consists of approximately 1 TB of compressed JSON data collected in a highly instrumented environment over two weeks.", + "bbox": [ + 90, + 645, + 907, + 762 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Unicorn SC [76] is a dataset specifically designed for APT detection, proposed by Han et al., authors of the Unicorn model. The dataset includes two supply chain scenarios, wget and shellshock, where each scenario lasts for 3 days to simulate the long-term nature of APT attacks, resulting in provenance data containing 125 benign behaviors and 25 malicious behaviors. The data is saved in the form of provenance graphs, describing the causal relationships during the system execution process.", + "bbox": [ + 90, + 762, + 907, + 862 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "CERT Dataset [131] is a collection of synthetic insider threat test datasets that provide both background and malicious actor synthetic data. It is developed by the CERT Division, in collaboration with ExactData, LLC, and under sponsorship from DARPA I2O. The creators of the CERT dataset learned", + "bbox": [ + 90, + 862, + 907, + 911 + ], + "page_idx": 18 + }, + { + "type": "header", + "text": "Deep Learning-based Intrusion Detection Systems: A Survey", + "bbox": [ + 92, + 84, + 502, + 95 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "1:19", + "bbox": [ + 878, + 84, + 905, + 93 + ], + "page_idx": 18 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. 
Publication date: October 2025.", + "bbox": [ + 483, + 934, + 903, + 945 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "important lessons about the benefits and limitations of synthetic data in the cybersecurity domain and carefully discussed models of realism for synthetic data.", + "bbox": [ + 90, + 116, + 903, + 150 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "LogChunks [20] is an application log dataset for build log analysis, containing 797 annotated Travis CI build logs from 80 GitHub repositories and 29 programming languages. These logs come from mature and popular projects, collected through repository, build, and log sampling. Each log in the dataset has manually labeled text blocks of build failure reasons, search keywords, and structural categories, cross-validated with the original developers with an accuracy of $94.4\%$.", + "bbox": [ + 90, + 151, + 905, + 233 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Loghub dataset [266] is a large collection of system log datasets, providing 19 real-world log datasets from various software systems, including distributed systems, supercomputers, operating systems, mobile systems, server applications, and standalone software. The objective of Loghub is to fill the significant gap between intelligent automated log analysis techniques and successful deployments in the industry. Among the usage scenarios of Loghub, about $35\%$ are anomaly detection, $13\%$ are log analysis, and $8\%$ are security.", + "bbox": [ + 90, + 234, + 905, + 333 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "ATLAS dataset [9] implements 10 attacks based on detailed reports of real-world APT campaigns and generates audit logs in a controlled testbed environment. Among the ten attacks, four involve a single host and the remaining six involve multiple hosts. 
All attacks were developed and executed on Windows 7 32-bit virtual machines and took an hour to complete, along with a 24-hour window of audit logs for benign system behaviors.", + "bbox": [ + 90, + 334, + 905, + 416 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "ATLASv2 dataset [184] enriches the ATLAS dataset with higher quality background noise and additional logging vantage points. In this dataset, two researchers used the victim machines as their primary workstations throughout the course of the engagement, instead of depending on automated scripts to generate activity. System logging, in contrast, covers a five-day period, where the first four days simulate normal work days and the fifth day begins with benign activity and then transitions into execution of the corresponding attack.", + "bbox": [ + 90, + 416, + 905, + 516 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "ProvSec dataset [197] is created for system provenance forensic analysis. To fulfill data provenance requirements, ProvSec includes the full details of system calls, including system parameters. In ProvSec, 11 realistic attack scenarios with real software vulnerabilities and exploits are used, and an algorithm to improve the data quality in system provenance forensic analysis is presented.", + "bbox": [ + 90, + 516, + 905, + 583 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "AutoLabel dataset [173] automates fine-grained log labeling by reducing the labeling problem to obtaining an accurate attack subgraph in a provenance graph. 
Its experiments consist of 29 scenarios, including 25 real CVE vulnerabilities across 12 widely-used applications (spanning 5 programming languages) plus a Sandworm threat simulation by MITRE CTID.", + "bbox": [ + 90, + 583, + 905, + 650 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "7 CHALLENGES AND FUTURE DIRECTIONS", + "text_level": 1, + "bbox": [ + 90, + 663, + 526, + 679 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "After the detailed introduction to the data management stage and the intrusion detection stage, as well as the widely-used benchmark datasets, this section further discusses challenges encountered in existing DL-IDS and summarizes the corresponding visions. These include fundamental resources (Section 7.1), pre-trained large models (Section 7.2), and comprehensive applications (Section 7.3).", + "bbox": [ + 90, + 684, + 905, + 751 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "7.1 Fundamental Resources", + "text_level": 1, + "bbox": [ + 90, + 766, + 362, + 781 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Developing effective DL-IDS depends heavily on core fundamental resources such as datasets and computing facilities [105]. Below, we discuss the corresponding challenges in turn.", + "bbox": [ + 90, + 787, + 905, + 819 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "7.1.1 Poor Data Quality. Existing datasets for DL-IDS may contain errors, inaccuracies, or missing values. This leads to unreliable descriptions of system behaviors that may mislead DL-IDS. For example, in some cases of the DARPA TC dataset, the PROCESS object and its source fail to properly resolve conflicts, resulting in possibly incorrect transformations. Besides, the acuity_level value of the FLOW object is 0, while the value range for this field in other objects is from 1 to 5. 
Another", + "bbox": [ + 90, + 831, + 905, + 913 + ], + "page_idx": 19 + }, + { + "type": "page_number", + "text": "1:20", + "bbox": [ + 92, + 84, + 119, + 94 + ], + "page_idx": 19 + }, + { + "type": "header", + "text": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao", + "bbox": [ + 236, + 83, + 907, + 95 + ], + "page_idx": 19 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 90, + 933, + 512, + 945 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "example is the LogChunks [20] dataset. In this dataset, the content describing the failure reasons is possibly incomplete. This is because a chunk in LogChunks only contains a continuous substring of the log text, whereas a failure reason may be described across multiple sections of the log. Moreover, LogChunks neglects the classification of failure reasons, such as test, compilation, and code inspection errors, which hinders further research from analyzing failure reasons.", + "bbox": [ + 86, + 116, + 907, + 200 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Meanwhile, high-quality ground-truth labels are hard to acquire, which is impeded by the contradiction between fine-grained manual labeling and automated label generation. On one hand, for unknown intrusions such as zero-day attacks, it is very labor-intensive for security analysts to map each attack scenario to specific log entries, although coarse-grained attack scenarios may have been acquired. The DARPA TC dataset [38] is a typical example. It only provides a ground truth report for attack scenarios, which does not correspond to any specific log entries. Although a few researchers [219] provide third-party ground-truth labels that they manually identified, we empirically find some ambiguities between their ground-truth labels and the official attack scenario report. 
These ambiguities have an obvious negative effect on DL-IDS, and to some extent, they may even cause the accumulation of errors. On the other hand, the development of automated labeling tools is in an awkward position. Labeled log data is generated based on given prior knowledge of intrusions [28], whereas the challenge of DL-IDS is to detect zero-day intrusions. This renders the development of such automated tools somewhat pointless.", + "bbox": [ + 86, + 201, + 909, + 416 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "In addition, there are no unified and effective evaluation metrics for DL-IDS [29], which further weakens the potential of datasets. For example, precision, recall, and F1 score are usually adopted in most studies [9, 99, 182, 216], while some papers [41] propose to use True Positive Rate (TPR) and False Positive Rate (FPR) as evaluation metrics. This often makes comparison experiments unfair and makes it hard to tell whether a validation is convincing. We also note that in many cases where the percentage of negatives (or malicious log entries) is low, sacrificing FPR can always significantly increase TPR. For example, sacrificing 1,000 false positives for one true positive might only increase FPR by $0.05\%$, but would increase TPR by $5\%$.", + "bbox": [ + 86, + 416, + 909, + 551 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "7.1.2 Insufficient Amount of Data. Although log data is generated very quickly (e.g., eBay generates 1.2 PB of log data per day as of 2018 [189]), DL-IDS still faces the challenge of insufficient data. Discounting the above data quality issues such as inaccuracies, the reasons are three-fold:", + "bbox": [ + 86, + 563, + 907, + 612 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "First, log data has an extremely large number of trivial events, which are proven ineffective and usually removed by graph summarization [237, 250]. 
For example, data provenance provides fine-grained information about memory-related events, such as data-to-memory mapping and protection of certain memory addresses. These memory-related events basically do not involve attacks and, unfortunately, are largely orthogonal to existing DL-IDS. However, to ensure the completeness requirement of data provenance and to capture very infrequent but inevitable memory attacks, these memory-related events are still recorded in benchmark datasets. As a result, the usable part of each dataset is rather small for DL-IDS, which is reflected by the high summarization ratio achieved by graph summarization approaches (e.g., $70\%$ [234]).", + "bbox": [ + 86, + 613, + 909, + 762 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "The second reason for an insufficient amount of data is the limited dataset representativeness. As observed in Table 6, most of the datasets have no more than 10 attack scenarios, not to mention that each of these attack scenarios has been carefully chosen by their authors. This limited number of attack scenarios suggests that existing datasets can hardly represent the diversity of attack methods, as the number of CVE records has already exceeded 280,000 [31]. Furthermore, existing datasets such as DARPA TC E3 [38] are collected in a specific experimental environment, may not cover other types of normal system behaviors, and have been shown to contain a significant amount of synthetic data [133]. DARPA TC E5 [38] is unusable for most experiments due to its sparse and error-filled documentation. 
Unicorn SC [76] is generated by an idealized simulation", + "bbox": [ + 86, + 763, + 909, + 912 + ], + "page_idx": 20 + }, + { + "type": "header", + "text": "Deep Learning-based Intrusion Detection Systems: A Survey", + "bbox": [ + 90, + 81, + 504, + 95 + ], + "page_idx": 20 + }, + { + "type": "page_number", + "text": "1:21", + "bbox": [ + 876, + 84, + 905, + 95 + ], + "page_idx": 20 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 479, + 933, + 907, + 947 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "of supply chain scenarios, which means many real-world characteristics are likely to be missing from this dataset. Hence, training DL-IDS on these non-representative datasets could be a disaster for the computer systems that they protect.", + "bbox": [ + 86, + 118, + 905, + 166 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Finally, the accessibility of datasets further exacerbates the insufficient data problem. Due to privacy and copyright issues, some datasets may be proprietary or difficult to obtain [216, 218]. Moreover, ProvDetector [216] conducted a three-month system evaluation in an enterprise environment with 306 hosts and collected benign provenance data of 23 target programs. Yet this dataset has not been made public, rendering it unavailable for improving other DL-IDS and leaving almost all assessment settings related to ProvDetector susceptible to inequity.", + "bbox": [ + 86, + 169, + 907, + 268 + ], + "page_idx": 21 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "7.1.3 Potential Heavy Computation Requirements. Similar to other DL techniques, DL-IDS also requires a potentially large amount of computing resources to improve its performance. According to [185], the generalizability of neural models is proportional to the investment of computing resources. 
Supposing that the challenge of insufficient data is mitigated and a large volume of log data is available, more computing resources are inevitably required. Besides, we will illustrate in Section 7.2 that there are plenty of powerful techniques that have not been introduced in DL-IDS, which will also bring additional computation requirements. Unfortunately, acceleration methods like parallel computation and efficient retrieval have not been fully explored by the cybersecurity community. An example is that the computation time of Unicorn equipped with one core is shown to be linear in its workload [76]. It is clear that the efficiency of Unicorn, which is not implemented in parallel, will hit a bottleneck as this single core does.", + "7.1.4 Future Directions. To conclude, the challenges for DL-IDS in fundamental resources consist of data quality, data volume, and computational overhead. Apart from unintentional errors and nontechnical issues in fundamental resources, the research questions that urgently need to be addressed include the contradiction between unaffordable manual labeling and non-generalizable auto-labeling techniques, non-unified benchmark datasets and evaluation metrics, as well as potentially heavy computational overheads. 
Therefore, we summarize the future directions as follows:" + ], + "bbox": [ + 86, + 276, + 911, + 569 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Future Directions", + "text_level": 1, + "bbox": [ + 158, + 580, + 325, + 594 + ], + "page_idx": 21 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Developing efficient man-machine interactive log labeling mechanisms and organizing open-source data-sharing platforms accordingly to provide large amounts of high-quality datasets.", + "- Maintaining effective and comprehensive benchmark datasets, accompanied by a unified performance metric framework for a fair comparison.", + "- Investigating parallel or simplified strategies for DL-IDS, and studying their integration with log storage systems to achieve end-to-end acceleration." + ], + "bbox": [ + 158, + 605, + 837, + 720 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "7.2 Pre-training Theories and Techniques", + "text_level": 1, + "bbox": [ + 88, + 745, + 493, + 762 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "In recent years, significant progress has been made by Large Language Models (LLMs) in the field of DL. Their capacity to understand and generate dialogue has been greatly enhanced as the model parameters of LLMs keep rising. T5 [179], BERT [39], GPT [178], GPT-4 [2], LaMDA [207], and LLaMA [209] are notable examples.", + "bbox": [ + 86, + 765, + 907, + 831 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "With the development of pre-training techniques, LLMs have been adopted in many fields such as finance [258], education [164], medicine [172], and even other domains of cybersecurity [34, 69, 92]. In contrast, the adoption of LLMs in DL-IDS is stagnant, as shown in Figure 6. We can observe that LLMs developed at full speed beginning in 2019. Their prosperity, however, has not extended to DL-IDS. 
Until now, the only two DL-IDS that incorporate pre-training techniques, AirTag [41] and", + "bbox": [ + 86, + 831, + 907, + 915 + ], + "page_idx": 21 + }, + { + "type": "page_number", + "text": "1:22", + "bbox": [ + 90, + 84, + 119, + 94 + ], + "page_idx": 21 + }, + { + "type": "header", + "text": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao", + "bbox": [ + 236, + 81, + 907, + 95 + ], + "page_idx": 21 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 86, + 933, + 514, + 947 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/2b523136b335e2c501d72edce3212459da5c2cf2b38df4681b670950b0f1a8f2.jpg", + "image_caption": [ + "Fig. 6. Interactions between DL models and DL-IDS. While DL models proposed before 2019 have already been leveraged in DL-IDS, the LLMs (or pre-training theories and techniques) emerging since 2020 remain underdeveloped in this domain." + ], + "image_footnote": [], + "bbox": [ + 100, + 125, + 903, + 348 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "MAGIC [99], still do not make full use of the potential of LLMs. AirTag pre-trains a BERT model on application logs and detects intrusions based on embeddings generated by BERT. MAGIC introduces GraphMAE [90], a model architecture derived from Graph Autoencoder [109] in 2016 but integrated with the famous masked self-supervised learning method [81] in 2022, to conduct self-supervised learning on provenance graphs. MAGIC further designs an adapter to apply the pre-trained model in different detection scenarios. Nevertheless, both AirTag and MAGIC can be regarded as preliminary explorations of pre-training techniques. According to the scaling law [102], the performance of LLMs will steadily improve as the parameters, data, and computation increase. Moreover, the reasoning ability of LLMs will suddenly emerge [220], allowing them to chat with humans smoothly. 
Such advantageous abilities obviously have not been incorporated into DL-IDS.", + "bbox": [ + 86, + 448, + 907, + 612 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Nowadays, some researchers [7, 59, 125, 160] have started to explore the applications of LLMs in DL-IDS. Yet the theories and techniques behind such a combination remain challenging. In the following, we will illustrate the identified issues and then point out the future directions.", + "bbox": [ + 86, + 615, + 909, + 663 + ], + "page_idx": 22 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "7.2.1 Trade-off between Reliability and Generalizability. The governing concern for the employment of LLMs in DL-IDS is reliability (or explainability). Although offering generalizability, LLMs have long been criticized for issues with hallucinations [149, 241], privacy [84, 240, 244], overreliance [107], and backdoor threats [136]. These unexplainable and uncontrollable features are an absolute disaster for DL-IDS. For example, when fed log data, LLMs are sometimes prone to hallucinate and provide wrong detection results. Attacks thus successfully bypass the detection facilities and can exfiltrate sensitive data from the victim computer systems. Another example is that sensitive information may leak from LLMs. Hui et al. [93] present a prompt leakage attack for LLMs, which is demonstrated to be effective in both offline settings and real-world LLM applications.", + "7.2.2 Short of Statistical Log Modeling. LLMs are developed on the basis of statistical language modeling [101, 187], which has been insufficiently studied for log data. 
The statistical modeling of natural language can be traced back to the early 1950s when Shannon pioneered the technique of predicting the next element of natural language text [195] and discussed the n-gram model for" + ], + "bbox": [ + 86, + 673, + 911, + 915 + ], + "page_idx": 22 + }, + { + "type": "header", + "text": "Deep Learning-based Intrusion Detection Systems: A Survey", + "bbox": [ + 90, + 83, + 502, + 95 + ], + "page_idx": 22 + }, + { + "type": "page_number", + "text": "1:23", + "bbox": [ + 876, + 84, + 907, + 94 + ], + "page_idx": 22 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 479, + 933, + 907, + 947 + ], + "page_idx": 22 + }, + { + "type": "table", + "img_path": "images/56aa40c700210b6d12b351836313889a9aa1ee9637de6412e6be03e25f4a6f0e.jpg", + "table_caption": [ + "Table 7. Comparison of research advances in statistical modeling of various data. \"NL\", \"PL\" and \"FL\" represent Natural Language, Programming Language, and Formal Language, respectively. Note that PL is a type of FL." + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Data</td><td>Form</td><td>Content Generation Rules</td><td>Statistical Modeling Studies</td><td>Pre-training</td></tr>
<tr><td>Text</td><td>NL</td><td>Grammar, pragmatics, semantics, etc.</td><td>[101, 148, 187, 196]</td><td>well-done</td></tr>
<tr><td>Speech</td><td>NL</td><td>Text rules (see above) and phonetics</td><td>[104, 167]</td><td>well-done</td></tr>
<tr><td>Source code</td><td>PL</td><td>Lexical and syntactic definitions</td><td>[8, 85, 180]</td><td>well-done</td></tr>
<tr><td>Log</td><td>NL + FL</td><td>Log templates defined by developers</td><td>future work</td><td>underdeveloped</td></tr></table>
", + "bbox": [ + 106, + 161, + 886, + 256 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "English [196]. After that, as machine learning came into view of the NLP research communities, language modeling flourished, and many models such as TreeBank [148], word2vec [154, 155] and LSTM [86] were proposed. Over decades, researchers in NLP have gained solid knowledge of language modeling, whose interests gradually shifted to efficiency. An epoch-making model, Transformer [212], was presented using the multi-head self-attention mechanism to fulfill parallel computing, which was widely exploited in popular pre-trained models such as BERT [39] and GPT [2] afterward. It is evident that the success of LLMs comes from the prolonged studies on statistical language modeling.", + "bbox": [ + 90, + 312, + 907, + 444 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Unfortunately, there are almost no research efforts on statistical modeling of log data, resulting in pre-training techniques of DL-IDS remaining underdeveloped. By contrast, as illustrated in Table 7, the statistical modeling studies of other types of data have already started. Hindle et al. [85] demonstrate that the source code is very repetitive and predictable, and, in fact, even more so than natural language. Driven by such statistical modeling conclusion, DL-based source code applications [54, 70, 124, 126, 203, 233, 235] such as code generation and code clone detection flourish, many of which have already becomes common applications in LLMs. Similar cases can be found for speech data, whose applications are like text to speech [71, 169, 183] and speech recognition [14].", + "bbox": [ + 90, + 445, + 907, + 594 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "We argue that log data is also created by humans, similar to text, speech, and source code. 
It is generated according to developer-defined log templates, taking the form of both natural language (e.g., application logs) and formal language (e.g., data provenance in CDM format). Given that natural language (e.g., text and speech) and formal language (e.g., source code) both exhibit positive performance in pre-training, log data urgently demands statistical modeling achievements to facilitate its pre-training research. Although several works [96, 152] have discussed the features of log data, they are orthogonal to the explainable combination of DL and IDS. Compared with the other data types, one challenge in statistical log modeling lies in the fact that logs are extremely long and detailed for reliability purposes. It is very common for a single log entry to be as long as a paragraph of natural language text. These challenges happen to coincide with the shortcomings of LLMs: the inability to handle long text and the lack of trustworthiness in generated content.", + "bbox": [ + 90, + 595, + 907, + 793 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "7.2.3 Future Directions. According to the scaling laws [102] and the emergent abilities theory [220], as the model size continues to grow, the performance of DL-IDS will improve accordingly. Thus, increasing the number of model parameters will be an inevitable trend for DL-IDS. The underlying research questions include the strategies for incorporating existing LLMs in intrusion detection, since it is infeasible to directly leverage unreliable LLMs to detect intrusions, and the theories and techniques for modeling long and detailed log data. 
We summarize the future directions as follows:", + "bbox": [ + 90, + 809, + 907, + 909 + ], + "page_idx": 23 + }, + { + "type": "page_number", + "text": "1:24", + "bbox": [ + 92, + 84, + 119, + 94 + ], + "page_idx": 23 + }, + { + "type": "header", + "text": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao", + "bbox": [ + 236, + 81, + 907, + 95 + ], + "page_idx": 23 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 90, + 934, + 512, + 945 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Future Directions", + "text_level": 1, + "bbox": [ + 158, + 119, + 325, + 131 + ], + "page_idx": 24 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Investigating how and where to introduce LLMs into DL-IDS, as in [165], with the objective of balancing the generalizability provided by LLMs and the reliability required by DL-IDS.", + "- Exploring fundamental statistical modeling theories for log data. On this basis, designing pre-training frameworks for log data and its downstream tasks such as steps within the workflow of DL-IDS (see Section 3.2)." + ], + "bbox": [ + 158, + 143, + 841, + 244 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "7.3 Comprehensive Applications and Scenarios", + "text_level": 1, + "bbox": [ + 88, + 265, + 547, + 281 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "DL-IDS possess abilities that traditional IDS lack or find difficult to realize, such as generalizability to zero-day attacks and the ability to model complicated downstream tasks. We will elaborate on the possible new applications and discuss both the challenges they face and the challenges they introduce.", + "bbox": [ + 86, + 286, + 909, + 336 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "7.3.1 Limited Forward and Backward Tracing Scope. 
Forward tracing and backward tracing are employed in attack investigation, as illustrated in Section 5.3. Under traditional settings, forward tracing analyzes the influence a symptom node would have on the victim computer system, and backward tracing discovers the starting node where the vulnerabilities exist [270].", + "bbox": [ + 86, + 344, + 905, + 411 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "We argue that the existing tracing scope is too limited to handle increasingly complicated intrusions, and that DL-IDS can be defined more broadly. In addition to investigating scenario graphs of intrusions, DL-IDS are supposed to further investigate why these intrusions occur and how to hold them back. The broader definition introduces more downstream tasks that would be difficult to accomplish without the assistance of DL techniques. Based on Definition 3.3, we reformulate the definition of intrusion in a broad sense for DL-IDS as follows:", + "bbox": [ + 86, + 411, + 909, + 509 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "Definition 7.1. (Generalized Intrusion). A generalized intrusion is a malicious attempt against a computer, a network, or the corresponding security facilities, whose attributes encompass not only the intrusion itself but also its underlying root causes and the relevant control measures.", + "bbox": [ + 86, + 518, + 909, + 568 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "In this way, the detection of DL-IDS has been extended to the broadly defined intrusions, including their attributes of both root causes and control measures. When executing backward tracing analysis, DL-IDS are not only required to detect the starting symptom nodes of intrusions, but also to find the root causes of these symptom nodes (i.e., vulnerabilities in source code). 
In the forward tracing analysis, in addition to detecting the symptom nodes affected by intrusions, DL-IDS should perform an in-depth analysis to discover the potentially compromised nodes and provide control measures for handling intrusions.", + "bbox": [ + 86, + 573, + 907, + 690 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "Thankfully, several pioneering works have studied similar problems [25, 144]. In AiVl [25], algorithms to bridge log entries and program models are developed using dynamic-static program analysis. Root causes of the exploited vulnerabilities can thus be derived directly from intrusion detection. Pedro et al. [144] investigate detection and mitigation methods for DDoS attacks, aiming to control them immediately. Additionally, semi-automated adaptive network defense (SAND) [26] leverages SDN to dynamically generate and deploy defense rules. We note that these research attempts are all based on heuristics, either using pre-defined rules to generate root causes, or developing control measures for specific intrusions. Thus, there is a substantial need to introduce advanced DL techniques to this problem.", + "bbox": [ + 86, + 691, + 909, + 841 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "7.3.2 Concerns about Data-driven Adversarial Attacks. To validate detection performance, DL-IDS commonly idealize the experimental data in their threat models. Such idealization, however, leaves DL-IDS with weaknesses that could be exploited by invaders. 
For example, a common assumption is that no attacks compromise the security of the log collection", + "bbox": [ + 86, + 848, + 911, + 915 + ], + "page_idx": 24 + }, + { + "type": "header", + "text": "Deep Learning-based Intrusion Detection Systems: A Survey", + "bbox": [ + 90, + 81, + 504, + 95 + ], + "page_idx": 24 + }, + { + "type": "page_number", + "text": "1:25", + "bbox": [ + 876, + 84, + 907, + 95 + ], + "page_idx": 24 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 479, + 933, + 907, + 947 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "systems [76, 79, 99, 182], namely, the log data utilized in DL-IDS is absolutely harmless. But as attacks become stealthier and more complicated, such an assumption apparently becomes impossible to satisfy. When DL-IDS encounter intentional data poisoning attacks, prediction backdoors could be easily planted as persistent vulnerabilities.", + "bbox": [ + 86, + 116, + 905, + 183 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "The robustness of DL-IDS is also challenged by data-driven evasion attacks. To evade the detection, the malicious behaviors usually mimic the benign ones (a.k.a., mimicry attacks), making them hard to detect. As early as 2002, Wagner et al. [215] indicated the danger of mimicry attacks on HIDS. Recently, researchers have started to investigate mimicry attacks on DL-IDS [64, 132, 161] and their studies all present effective evasion of detection. According to one study [24], DL-IDS can even be plagued by trivial perturbations in log data. Aware of this issue, R-caid [65] proposes to embed root causes into the detection model for countering adversarial attacks. 
However, as noted in recent work [64, 65, 161], data-driven attacks remain a major challenge for DL-IDS.", + "bbox": [ + 86, + 184, + 907, + 316 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "7.3.3 Underexplored Promising Scenarios. While DL-IDS have recently shown excellent performance in protecting computer and network systems, many promising scenarios for DL-IDS have not yet been sufficiently explored.", + "bbox": [ + 86, + 326, + 907, + 375 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "Mobile edge computing (MEC) [1, 117, 147] is a typical scenario. In the MEC environment, mobile computing, network control, and storage are pushed to the network edges so as to enable computation-intensive tasks on resource-limited devices. At the network edges, devices such as Unmanned Aerial Vehicles (UAVs) and New Energy Vehicles (NEVs) usually lack computing power and security facilities, making it difficult to protect them from intrusions [198]. Meanwhile, containerized deployment has become one of the dominant ways to deploy microservices. Detecting intrusions on containers is thus of great importance, for which ReplicaWatcher [46] is a representative work with a special design for microservices. Additionally, industrial networks are characterized by high fidelity, stability, and real-time responsiveness [110], leading to challenges in adapting DL-IDS to their infrastructures.", + "bbox": [ + 86, + 377, + 909, + 541 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "7.3.4 Future Directions. Although there has been plenty of research on DL-IDS, many applications and scenarios remain underdeveloped. DL-IDS ought to be more broadly defined and applied. 
Based on the above discussion, we briefly summarize the future directions as follows:", + "bbox": [ + 86, + 551, + 907, + 602 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "Future Directions", + "text_level": 1, + "bbox": [ + 158, + 613, + 325, + 627 + ], + "page_idx": 25 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Extending the scope of forward tracing and backward tracing to intrusions in a broad sense, so as to generate root causes and control measures for the broadly defined intrusions.", + "- Understanding data-driven adversarial attacks such as data poisoning attacks and mimicry attacks for devising more robust DL-IDS.", + "- Applying DL-IDS widely in more underexplored promising scenarios, and if possible, implementing unified frameworks for them." + ], + "bbox": [ + 158, + 638, + 837, + 752 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "8 CONCLUSION", + "text_level": 1, + "bbox": [ + 88, + 777, + 263, + 791 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "DL techniques bring reform to IDS, as their generalizability enables the detection of intrusions that have never been encountered before. Recognizing that IDS development over the past decade has primarily come from DL-IDS, this survey revisits the common workflow for DL-IDS, elaborates on each module in the workflow, and innovatively taxonomizes the research papers based on their DL techniques. Publicly available datasets for stimulating future research are introduced subsequently. In addition, from the perspective of DL, this survey digs deep into the potential challenges, emerging trends, and future directions for DL-IDS. 
These discussions suggest that", + "bbox": [ + 86, + 800, + 905, + 913 + ], + "page_idx": 25 + }, + { + "type": "page_number", + "text": "1:26", + "bbox": [ + 90, + 84, + 119, + 94 + ], + "page_idx": 25 + }, + { + "type": "header", + "text": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao", + "bbox": [ + 236, + 81, + 907, + 95 + ], + "page_idx": 25 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 86, + 933, + 514, + 945 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "DL-IDS are, fascinatingly, still in an underdeveloped state. We hope that this survey can inspire current researchers and facilitate future investigations on DL-IDS.", + "bbox": [ + 86, + 118, + 907, + 151 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "ACKNOWLEDGMENTS", + "text_level": 1, + "bbox": [ + 88, + 165, + 316, + 179 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "This research is sponsored in part by the NSFC program (No. 6212780016 and No. 62021002).", + "bbox": [ + 86, + 186, + 866, + 202 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "REFERENCES", + "text_level": 1, + "bbox": [ + 92, + 215, + 224, + 229 + ], + "page_idx": 26 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Nasir Abbas, Yan Zhang, Amir Taherkordi, and Tor Skeie. 2017. Mobile Edge Computing: A Survey. IEEE Internet of Things Journal 5, 1 (2017), 450-465.", + "[2] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 Technical Report. arXiv preprint arXiv:2303.08774 (2023).", + "[3] Amey Agrawal, Rohit Karlupia, and Rajat Gupta. 2019. Logan: A Distributed Online Log Parser. In Proceedings of the 2019 IEEE 35th International Conference on Data Engineering. 
IEEE, 1946-1951.", + "[4] Zeeshan Ahmad, Adnan Shahid Khan, Cheah Wai Shiang, Johari Abdullah, and Farhan Ahmad. 2021. Network Intrusion Detection System: A Systematic Study of Machine Learning and Deep Learning Approaches. Transactions on Emerging Telecommunications Technologies 32, 1 (2021), e4150.", + "[5] Farrukh Ahmed, Urooj Jahangir, Hamad Rahim, Kamran Ali, et al. 2020. Centralized Log Management Using Elasticsearch, Logstash and Kibana. In Proceedings of the 2020 International Conference on Information Science and Communication Technology. IEEE, 1-7.", + "[6] Mohannad Alhanahnah, Shiqing Ma, Ashish Gehani, Gabriela F Ciocarlie, Vinod Yegneswaran, Somesh Jha, and Xiangyu Zhang. 2022. autoMPI: Automated Multiple Perspective Attack Investigation with Semantics Aware Execution Partitioning. IEEE Transactions on Software Engineering 49, 4 (2022), 2761-2775.", + "[7] Tarek Ali. 2024. Next-Generation Intrusion Detection Systems with LLMs: Real-Time Anomaly Detection, Explainable AI, and Adaptive Data Generation. Master's thesis. T. Ali.", + "[8] Miltiadis Allamanis, Earl T Barr, Premkumar Devanbu, and Charles Sutton. 2018. A Survey of Machine Learning for Big Code and Naturalness. ACM Computing Surveys 51, 4 (2018), 1-37.", + "[9] Abdulellah Alsaheel, Yuhong Nan, Shiqing Ma, Le Yu, Gregory Walkup, Z Berkay Celik, Xiangyu Zhang, and Dongyan Xu. 2021. ATLAS: A Sequence-based Learning Approach for Attack Investigation. In Proceedings of the 30th USENIX Security Symposium. 3005-3022.", + "[10] Adel Alshamrani, Sowmya Myneni, Ankur Chowdhary, and Dijiang Huang. 2019. A Survey on Advanced Persistent Threats: Techniques, Solutions, Challenges, and Research Opportunities. IEEE Communications Surveys and Tutorials 21, 2 (2019), 1851-1877. https://doi.org/10.1109/COMST.2019.2891891", + "[11] Enes Altinisik, Fatih Deniz, and Hüsrev Taha Sencar. 2023. ProvG-Searcher: A Graph Representation Learning Approach for Efficient Provenance Graph Search. 
In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security. 2247-2261.", + "[12] Clarivate Analytics. 1997. Web of Science. https://www.webofscience.com", + "[13] Md Monowar Anjum, Shahrear Iqbal, and Benoit Hamelin. 2021. Analyzing the Usefulness of the DARPA OpTC Dataset in Cyber Threat Detection Research. In Proceedings of the 26th ACM Symposium on Access Control Models and Technologies. 27-32.", + "[14] Alexei Baevski, Wei-Ning Hsu, Alexis Conneau, and Michael Auli. 2021. Unsupervised Speech Recognition. Advances in Neural Information Processing Systems 34 (2021), 27826-27839.", + "[15] Elizabeth Bautista, Nitin Sukhija, and Siqi Deng. 2022. Shasta Log Aggregation, Monitoring and Alerting in HPC Environments with Grafana Loki and ServiceNow. In Proceedings of the 2022 IEEE International Conference on Cluster Computing. IEEE, 602-610.", + "[16] Jack Beerman, David Berent, Zach Falter, and Suman Bhunia. 2023. A Review of Colonial Pipeline Ransomware Attack. In Proceedings of the 2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops. IEEE, 8-15.", + "[17] Tristan Bilot, Nour El Madhoun, Khaldoun Al Agha, and Anis Zouaoui. 2023. Graph Neural Networks for Intrusion Detection: A Survey. IEEE Access 11 (2023), 49114-49139.", + "[18] Tristan Bilot, Baoxiang Jiang, Zefeng Li, Nour El Madhoun, Khaldoun Al Agha, Anis Zouaoui, and Thomas Pasquier. 2025. Sometimes Simpler is Better: A Comprehensive Analysis of State-of-the-Art Provenance-Based Intrusion Detection Systems. In 34th USENIX Security Symposium (USENIX Security 25). 7193-7212.", + "[19] Peter Bodik, Moises Goldszmidt, Armando Fox, Dawn B Woodard, and Hans Andersen. 2010. Fingerprinting the Datacenter: Automated Classification of Performance Crises. In Proceedings of the 5th European Conference on Computer Systems. 111-124." 
+ ], + "bbox": [ + 98, + 234, + 909, + 913 + ], + "page_idx": 26 + }, + { + "type": "header", + "text": "Deep Learning-based Intrusion Detection Systems: A Survey", + "bbox": [ + 88, + 83, + 504, + 95 + ], + "page_idx": 26 + }, + { + "type": "page_number", + "text": "1:27", + "bbox": [ + 876, + 84, + 905, + 94 + ], + "page_idx": 26 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 479, + 934, + 905, + 947 + ], + "page_idx": 26 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[20] Carolin E Brandt, Annibale Panichella, Andy Zaidman, and Moritz Beller. 2020. LogChunks: A Data Set for Build Log Analysis. In Proceedings of the 17th International Conference on Mining Software Repositories. 583-587.", + "[21] Robert A Bridges, Tarrah R Glass-Vanderlan, Michael D Iannacone, Maria S Vincent, and Qian Chen. 2019. A Survey of Intrusion Detection Systems Leveraging Host Data. ACM Computing Surveys 52, 6 (2019), 1-35.", + "[22] Dainius Čeponis and Nikolaj Goranin. 2018. Towards A Robust Method of Dataset Generation of Malicious Activity for Anomaly-Based HIDS Training and Presentation of AWSCTD Dataset. Baltic Journal of Modern Computing 6, 3 (2018), 217-234.", + "[23] Xiaolin Chai, Hang Zhang, Jue Zhang, Yan Sun, and Sajal K Das. 2024. Log Sequence Anomaly Detection based on Template and Parameter Parsing via BERT. IEEE Transactions on Dependable and Secure Computing (2024).", + "[24] Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. 2018. Adversarial Attacks and Defences: A Survey. arXiv preprint arXiv:1810.00069 (2018).", + "[25] Changhua Chen, Tingzhen Yan, Chenxuan Shi, Hao Xi, Zhirui Fan, Hai Wan, and Xibin Zhao. 2024. The Last Mile of Attack Investigation: Audit Log Analysis towards Software Vulnerability Location. 
IEEE Transactions on Information Forensics and Security (2024).", + "[26] Haoyu Chen, Deqing Zou, Hai Jin, Shouhuai Xu, and Bin Yuan. 2022. SAND: Semi-Automated Adaptive Network Defense via Programmable Rule Generation and Deployment. Science China Information Sciences 65, 7 (2022), 172102.", + "[27] Tao Chen, Haiyan Suo, and Wenqian Xu. 2023. Design of Log Collection Architecture Based on Cloud Native Technology. In Proceedings of the 2023 4th Information Communication Technologies Conference. IEEE, 311-315.", + "[28] Wenrui Cheng, Qixuan Yuan, Tiantian Zhu, Tieming Chen, Jie Ying, Aohan Zheng, Mingjun Ma, Chunlin Xiong, Mingqi Lv, and Yan Chen. 2025. TAGAPT: Towards Automatic Generation of APT Samples with Provenance-level Granularity. IEEE Transactions on Information Forensics and Security (2025).", + "[29] Zijun Cheng, Qiujian Lv, Jinyuan Liang, Yan Wang, Degang Sun, Thomas Pasquier, and Xueyuan Han. 2024. Kairos: Practical Intrusion Detection and Investigation Using Whole-System Provenance. In Proceedings of the 2024 IEEE Symposium on Security and Privacy. IEEE, 3533–3551.", + "[30] Guojun Chu, Jingyu Wang, Qi Qi, Haifeng Sun, Shimin Tao, and Jianxin Liao. 2021. Prefix-Graph: A Versatile Log Parsing Approach Merging Prefix Tree with Probabilistic Graph. In Proceedings of the 2021 IEEE 37th International Conference on Data Engineering. IEEE, 2411-2422.", + "[31] The MITRE Corporation. 2025. CVE List. https://github.com/CVEProject/cvelistV5/archive/refs/heads/main.zip", + "[32] Oihana Coustie, Josiane Mothe, Olivier Teste, and Xavier Baril. 2020. METING: A Robust Log Parser Based on Frequent n-Gram Mining. In Proceedings of the 2020 IEEE International Conference on Web Services. IEEE, 84-88.", + "[33] Jian Cui, Hanna Kim, Eugene Jang, Dayeon Yim, Kicheol Kim, Yongjae Lee, Jin-Woo Chung, Seungwon Shin, and Xiaojing Liao. 2024. Tweezers: A Framework for Security Event Detection via Event Attribution-centric Tweet Embedding. 
In Proceedings of the Network and Distributed System Security Symposium.", + "[34] Chris Cummins, Volker Seeker, Dejan Grubisic, Baptiste Roziere, Jonas Gehring, Gabriel Synnaeve, and Hugh Leather. 2025. LLM Compiler: Foundation Language Models for Compiler Optimization. In Proceedings of the 34th ACM SIGPLAN International Conference on Compiler Construction. 141-153.", + "[35] Hetong Dai, Heng Li, Che-Shao Chen, Weiyi Shang, and Tse-Hsun Chen. 2020. Logram: Efficient Log Parsing Using n-Gram Dictionaries. IEEE Transactions on Software Engineering 48, 3 (2020), 879-892.", + "[36] Hetong Dai, Yiming Tang, Heng Li, and Weiyi Shang. 2023. PILAR: Studying and Mitigating the Influence of Configurations on Log Parsing. In Proceedings of the 2023 IEEE/ACM 45th International Conference on Software Engineering. IEEE, 818-829.", + "[37] DARPA. 2019. Operationally Transparent Cyber Dataset. https://github.com/FiveDirections/OpTC-data", + "[38] DARPA. 2022. The DARPA Transparent Computing (TC) program Data Release. https://github.com/darpa-i2o/Transparent-Computing", + "[39] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 4171–4186.", + "[40] Hailun Ding, Juan Zhai, Dong Deng, and Shiqing Ma. 2023. The Case for Learned Provenance Graph Storage Systems. In Proceedings of the 32nd USENIX Security Symposium. 3277-3294.", + "[41] Hailun Ding, Juan Zhai, Yuhong Nan, and Shiqing Ma. 2023. AirTag: Towards Automated Attack Investigation by Unsupervised Learning with Log Texts. In Proceedings of the 32nd USENIX Security Symposium. 373-390.", + "[42] Feng Dong, Liu Wang, Xu Nie, Fei Shao, Haoyu Wang, Ding Li, Xiapu Luo, and Xusheng Xiao. 2023. DistDet: A Cost-Effective Distributed Cyber Threat Detection System. 
In Proceedings of the 32nd USENIX Security Symposium. 6575–6592.", + "[43] Ying Dong, Yuqing Zhang, Hua Ma, Qianru Wu, Qixu Liu, Kai Wang, and Wenjie Wang. 2018. An Adaptive System for Detecting Malicious Queries in Web Attacks. Science China Information Sciences 61, 3 (2018), 032114." + ], + "bbox": [ + 98, + 120, + 907, + 895 + ], + "page_idx": 27 + }, + { + "type": "page_number", + "text": "1:28", + "bbox": [ + 92, + 84, + 119, + 94 + ], + "page_idx": 27 + }, + { + "type": "header", + "text": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao", + "bbox": [ + 236, + 83, + 907, + 95 + ], + "page_idx": 27 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 88, + 933, + 512, + 945 + ], + "page_idx": 27 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[44] Min Du and Feifei Li. 2016. Spell: Streaming Parsing of System Event Logs. In Proceedings of the 2016 IEEE 16th International Conference on Data Mining. IEEE, 859-864.", + "[45] Min Du, Feifei Li, Guineng Zheng, and Vivek Srikumar. 2017. DeepLog: Anomaly Detection and Diagnosis from System Logs through Deep Learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 1285-1298.", + "[46] Asbat El Khairi, Marco Caselli, Andreas Peter, and Andrea Continella. 2024. REPLICAWATCHER: Training-less Anomaly Detection in Containerized Microservices. In Proceedings of the Network and Distributed System Security Symposium.", + "[47] Elastic. 2009. Logstash: Collect, parse, and transform logs. https://www.elastic.co/logstash/", + "[48] Elastic. 2010. Elasticsearch: The official distributed search & analytics engine. https://www.elastic.co/elasticsearch/", + "[49] Elastic. 2013. Kibana: Explore, visualize, and discover data. https://www.elastic.co/kibana/", + "[50] Elsevier. 2021. Scopus. 
https://www.scopus.com/search/form.uri?display=basic#basic", + "[51] Dave Evans. 2012. The Internet of Everything: How More Relevant and Valuable Connections will Change the World. Cisco IBSG 2012 (2012), 1-9.", + "[52] Pengcheng Fang, Peng Gao, Changlin Liu, Erman Ayday, Kangkook Jee, Ting Wang, Yanfang Fanny Ye, Zhuotao Liu, and Xusheng Xiao. 2022. Back-Propagating System Dependency Impact for Attack Investigation. In Proceedings of the 31st USENIX Security Symposium. 2461–2478.", + "[53] Peng Fei, Zhou Li, Zhiying Wang, Xiao Yu, Ding Li, and Kangkook Jee. 2021. SEAL: Storage-Efficient Causality Analysis on Enterprise Logs with Query-Friendly Compression. In Proceedings of the 30th USENIX Security Symposium.", + "[54] Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. 2020. CodeBERT: A Pre-Trained Model for Programming and Natural Languages. In Findings of the Association for Computational Linguistics: EMNLP 2020. 1536-1547.", + "[55] Free Software Foundation. 1992. gzip: GNU zip compression utility. https://www.gnu.org/software/gzip/", + "[56] Chuanpu Fu, Qi Li, Meng Shen, and Ke Xu. 2021. Realtime Robust Malicious Traffic Detection via Frequency Domain Analysis. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security. 3431-3446.", + "[57] Chuanpu Fu, Qi Li, Meng Shen, and Ke Xu. 2024. Detecting Tunnelled Flooding Traffic via Deep Semantic Analysis of Packet Length Patterns. In Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security. 3659-3673.", + "[58] Chuanpu Fu, Qi Li, Ke Xu, and Jianping Wu. 2023. Point Cloud Analysis for ML-based Malicious Traffic Detection: Reducing Majorities of False Positive Alarms. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security. 1005-1019.", + "[59] Oscar G. Lira, Alberto Marroquin, and Marco Antonio To. 2024. 
Harnessing the Advanced Capabilities of LLM for Adaptive Intrusion Detection Systems. In Proceedings of the International Conference on Advanced Information Networking and Applications. Springer, 453-464.", + "[60] Peng Gao, Xusheng Xiao, Zhichun Li, Fengyuan Xu, Sanjeev R Kulkarni, and Prateek Mittal. 2018. AIQL: Enabling Efficient Attack Investigation from System Monitoring Data. In Proceedings of the 2018 USENIX Annual Technical Conference. 113-126.", + "[61] Ashish Gehani and Dawood Tariq. 2012. SPADE: Support for Provenance Auditing in Distributed Environments. In Proceedings of the ACM/IFIP/USENIX International Conference on Distributed Systems Platforms and Open Distributed Processing. Springer, 101-120.", + "[62] Jalal Ghadermazi, Soumyadeep Hore, Ankit Shah, and Nathaniel D Bastian. 2025. GTAE-IDS: Graph Transformer-Based Autoencoder Framework for Real-Time Network Intrusion Detection. IEEE Transactions on Information Forensics and Security (2025).", + "[63] Joshua Glasser and Brian Lindauer. 2013. Bridging the gap: A Pragmatic Approach to Generating Insider Threat Data. In Proceedings of the IEEE Symposium on Security and Privacy Workshops. IEEE, 98-104.", + "[64] Akul Goyal, Xueyuan Han, Gang Wang, and Adam Bates. 2023. Sometimes, You Aren't What You Do: Mimicry Attacks Against Provenance Graph Host Intrusion Detection Systems. In Proceedings of the Network and Distributed System Security Symposium.", + "[65] Akul Goyal, Gang Wang, and Adam Bates. 2024. R-caid: Embedding Root Cause Analysis within Provenance-Based Intrusion Detection. In Proceedings of the 2024 IEEE Symposium on Security and Privacy. IEEE, 3515-3532.", + "[66] Brendan Gregg and Jim Mauro. 2011. DTrace: Dynamic Tracing in Oracle Solaris, Mac OS X, and FreeBSD. Prentice Hall Professional.", + "[67] John Griffith, Derrick Kong, Armando Caro, Brett Benyo, Joud Khoury, Timothy Upthegrove, Timothy Christovich, Stanislav Ponomorov, Ali Sydney, Arjun Saini, et al. 2020. 
Scalable Transparency Architecture for Research Collaboration (STARC)-DARPA Transparent Computing (TC) Program. Raytheon BBN Technologies Corporation Cambridge United States (2020).", + "[68] Steve Grubb. 2008. Linux audit. https://people.redhat.com/sgrubb/audit/" + ], + "bbox": [ + 98, + 119, + 907, + 909 + ], + "page_idx": 28 + }, + { + "type": "header", + "text": "Deep Learning-based Intrusion Detection Systems: A Survey", + "bbox": [ + 90, + 83, + 502, + 95 + ], + "page_idx": 28 + }, + { + "type": "page_number", + "text": "1:29", + "bbox": [ + 876, + 84, + 905, + 94 + ], + "page_idx": 28 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 479, + 933, + 905, + 945 + ], + "page_idx": 28 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[69] Qiuhan Gu. 2023. LLM-Based Code Generation Method for Golang Compiler Testing. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 2201-2203.", + "[70] Xiaodong Gu, Meng Chen, Yalan Lin, Yuhan Hu, Hongyu Zhang, Chengcheng Wan, Zhao Wei, Yong Xu, and Juhong Wang. 2025. On the Effectiveness of Large Language Models in Domain-Specific Code Generation. ACM Transactions on Software Engineering and Methodology 34, 3 (2025), 1-22.", + "[71] Yiwei Guo, Chenpeng Du, Ziyang Ma, Xie Chen, and Kai Yu. 2024. Voiceflow: Efficient Text-to-Speech with Rectified Flow Matching. In Proceedings of the ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 11121-11125.", + "[72] Yi Guo, Fu Miao, Liancheng Zhang, and Yu Wang. 2019. CATH: An Effective Method for Detecting Denial-of-Service Attacks in Software Defined Networks. Science China Information Sciences 62, 3 (2019), 32106.", + "[73] Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive Representation Learning on Large Graphs. 
Advances in Neural Information Processing Systems 30 (2017).", + "[74] Hossein Hamooni, Biplob Debnath, Jianwu Xu, Hui Zhang, Guofei Jiang, and Abdullah Mueen. 2016. LogMine: Fast Pattern Recognition for Log Analytics. In Proceedings of the ACM International on Conference on Information and Knowledge Management. 1573-1582.", + "[75] Dongqi Han, Zhiliang Wang, Wenqi Chen, Kai Wang, Rui Yu, Su Wang, Han Zhang, Zhihua Wang, Minghui Jin, Jiahai Yang, et al. 2023. Anomaly Detection in the Open World: Normality Shift Detection, Explanation, and Adaptation. In Proceedings of the Network and Distributed Systems Security Symposium.", + "[76] Xueyuan Han, Thomas Pasquier, Adam Bates, James Mickens, and Margo Seltzer. 2020. Unicorn: Runtime Provenance-Based Detector for Advanced Persistent Threats. In Proceedings of the Network and Distributed Systems Security Symposium.", + "[77] Wajih Ul Hassan, Mark Lemay, Nuraini Aguse, Adam Bates, and Thomas Moyer. 2018. Towards Scalable Cluster Auditing through Grammatical Inference over Provenance Graphs. In Proceedings of the Network and Distributed Systems Security Symposium.", + "[78] Wajih Ul Hassan, Adam Bates, and Daniel Marino. 2020. Tactical Provenance Analysis for Endpoint Detection and Response Systems. In Proceedings of the 2020 IEEE Symposium on Security and Privacy. IEEE, 1172-1189.", + "[79] Wajih Ul Hassan, Shengjian Guo, Ding Li, Zhengzhang Chen, Kangkook Jee, Zhichun Li, and Adam Bates. 2019. Nodoze: Combatting Threat Alert Fatigue with Automated Provenance Triage. In Proceedings of the Network and Distributed System Security Symposium.", + "[80] Wajih Ul Hassan, Mohammad Ali Noureddine, Pubali Datta, and Adam Bates. 2020. OmegaLog: High-Fidelity Attack Investigation via Transparent Multi-Layer Log Analysis. In Proceedings of the Network and Distributed System Security Symposium.", + "[81] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. 2022. 
Masked Autoencoders are Scalable Vision Learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 16000-16009.", + "[82] Pinjia He, Jieming Zhu, Zibin Zheng, and Michael R Lyu. 2017. Drain: An Online Log Parsing Approach with Fixed Depth Tree. In Proceedings of the 2017 IEEE International Conference on Web Services. IEEE, 33-40.", + "[83] Shilin He, Pinjia He, Zhuangbin Chen, Tianyi Yang, Yuxin Su, and Michael R. Lyu. 2020. A Survey on Automated Log Analysis for Reliability Engineering. ACM Computing Surveys 54 (2020), 1-37. https://api.semanticscholar.org/CorpusID:221703032", + "[84] Xinlei He, Guowen Xu, Xingshuo Han, Qian Wang, Lingchen Zhao, Chao Shen, Chenhao Lin, Zhengyu Zhao, Qian Li, Le Yang, et al. 2025. Artificial Intelligence Security and Privacy: A Survey. Science China Information Sciences 68, 8 (2025), 1-90.", + "[85] Abram Hindle, Earl T Barr, Mark Gabel, Zhendong Su, and Premkumar Devanbu. 2016. On the Naturalness of Software. Commun. ACM 59, 5 (2016), 122-131.", + "[86] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation 9, 8 (1997), 1735-1780.", + "[87] Josef Horalek, Patrik Urbanik, Vladimir Sobeslav, and Tomas Svoboda. 2022. Proposed Solution for Log Collection and Analysis in Kubernetes Environment. In Proceedings of the International Conference on Nature of Computation and Communication. Springer, 9-22.", + "[88] Md Nahid Hossain, Sadegh M Milajerdi, Junao Wang, Birhanu Eshete, Rigel Gjomemo, R Sekar, Scott Stoller, and VN Venkatakrishnan. 2017. Sleuth: Real-time Attack Scenario Reconstruction from COTS Audit Data. In Proceedings of the USENIX Security Symposium. 487-504.", + "[89] Md Nahid Hossain, Junao Wang, Ofir Weisse, R Sekar, Daniel Genkin, Boyuan He, Scott D Stoller, Gan Fang, Frank Piessens, Evan Downing, et al. 2018. Dependence-Preserving Data Compaction for Scalable Forensic Analysis. In Proceedings of the 27th USENIX Security Symposium. 1723-1740." 
+ ], + "bbox": [ + 98, + 119, + 909, + 895 + ], + "page_idx": 29 + }, + { + "type": "page_number", + "text": "1:30", + "bbox": [ + 92, + 84, + 119, + 94 + ], + "page_idx": 29 + }, + { + "type": "header", + "text": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao", + "bbox": [ + 236, + 83, + 907, + 95 + ], + "page_idx": 29 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 88, + 933, + 512, + 945 + ], + "page_idx": 29 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[90] Zhenyu Hou, Xiao Liu, Yukuo Cen, Yuxiao Dong, Hongxia Yang, Chunjie Wang, and Jie Tang. 2022. GraphMAE: Self-Supervised Masked Graph Autoencoders. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 594-604.", + "[91] Kevin Hsieh, Mike Wong, Santiago Segarra, Sathiya Kumaran Mani, Trevor Eberl, Anatoliy Panasyuk, Ravi Netravali, Ranveer Chandra, and Srikanth Kandula. 2024. NetVigil: Robust and Low-Cost Anomaly Detection for East-West Data Center Security. In Proceedings of the 21st USENIX Symposium on Networked Systems Design and Implementation. 1771-1789.", + "[92] Peiwei Hu, Ruigang Liang, and Kai Chen. 2024. DeGPT: Optimizing Decompile Output with LLM. In Proceedings of the Network and Distributed System Security Symposium.", + "[93] Bo Hui, Haolin Yuan, Neil Gong, Philippe Burlina, and Yinzhi Cao. 2024. Pleak: Prompt Leaking Attacks Against Large Language Model Applications. In Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security. 3600-3614.", + "[94] Yintong Huo, Yichen Li, Yuxin Su, Pinjia He, Zifan Xie, and Michael R Lyu. 2023. AutoLog: A Log Sequence Synthesis Framework for Anomaly Detection. In Proceedings of the 2023 38th IEEE/ACM International Conference on Automated Software Engineering. IEEE, 497-509.", + "[95] IEEE. 2000. IEEE Xplore Digital Library. 
https://ieeexplore.ieee.org", + "[96] Muhammad Adil Inam, Yinfang Chen, Akul Goyal, Jason Liu, Jaron Mink, Noor Michael, Sneha Gaur, Adam Bates, and Wajih Ul Hassan. 2023. SoK: History is a Vast Early Warning System: Auditing the Provenance of System Intrusions. In Proceedings of the 2023 IEEE Symposium on Security and Privacy. 2620-2638. https://doi.org/10.1109/SP46215.2023.10179405", + "[97] Muhammad Adil Inam, Akul Goyal, Jason Liu, Jaron Mink, Noor Michael, Sneha Gaur, Adam Bates, and Wajih Ul Hassan. 2022. FAuST: Striking A Bargain between Forensic Auditing's Security and Throughput. In Proceedings of the 38th Annual Computer Security Applications Conference. 813-826.", + "[98] Yang Ji, Sangho Lee, Evan Downing, Weiren Wang, Mattia Fazzini, Taesoo Kim, Alessandro Orso, and Wenke Lee. 2017. Rain: Refinable Attack Investigation with On-demand Inter-Process Information Flow Tracking. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 377–390.", + "[99] Zian Jia, Yun Xiong, Yuhong Nan, Yao Zhang, Jinjing Zhao, and Mi Wen. 2024. MAGIC: Detecting Advanced Persistent Threats via Masked Graph Representation Learning. In Proceedings of the 33rd USENIX Security Symposium. 5197-5214.", + "[100] Baoxiang Jiang, T Bilot, Nour El Madhoun, Khaldoun Al Agha, Anis Zouaoui, Shahrear Iqbal, Xueyuan Han, and Thomas Pasquier. 2025. Orthrus: Achieving High Quality of Attribution in Provenance-based Intrusion Detection Systems. In Proceedings of the USENIX Security Symposium.", + "[101] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the Limits of Language Modeling. arXiv preprint arXiv:1602.02410 (2016).", + "[102] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling Laws for Neural Language Models. arXiv preprint arXiv:2001.08361 (2020).", + "[103] Alexander D. Kent. 2015. 
Comprehensive, Multi-Source Cyber-Security Events. Los Alamos National Laboratory. https://doi.org/10.17021/1179829", + "[104] LG Kersta, PD Bricker, and EE David Jr. 1960. Human or Machine?—A Study of Voice Naturalness. The Journal of the Acoustical Society of America 32, 11_Supplement (1960), 1502-1502.", + "[105] Ansam Khraisat, Iqbal Gondal, Peter Vamplew, and Joarder Kamruzzaman. 2019. Survey of Intrusion Detection Systems: Techniques, Datasets and Challenges. Cybersecurity 2, 1 (2019), 1-22.", + "[106] Aaron Kili. [n.d.]. Sysdig-A Powerful System Monitoring and Troubleshooting Tool for Linux.", + "[107] Sunnie SY Kim, Q Vera Liao, Mihaela Vorvoreanu, Stephanie Ballard, and Jennifer Wortman Vaughan. 2024. \"I'm Not Sure, But...\": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. 822-835.", + "[108] Isaiah J King and H Howie Huang. 2023. Euler: Detecting Network Lateral Movement via Scalable Temporal Link Prediction. ACM Transactions on Privacy and Security 26, 3 (2023), 1-36.", + "[109] Thomas N Kipf and Max Welling. 2016. Variational Graph Auto-Encoders. arXiv preprint arXiv:1611.07308 (2016).", + "[110] Eric D Knapp. 2024. Industrial Network Security: Securing Critical Infrastructure Networks for Smart Grid, SCADA, and other Industrial Control Systems. Elsevier.", + "[111] Yonghwi Kwon, Fei Wang, Weihang Wang, Kyu Hyung Lee, Wen-Chuan Lee, Shiqing Ma, Xiangyu Zhang, Dongyan Xu, Somesh Jha, Gabriela Ciocarlie, et al. 2018. MCI: Modeling-based Causality Inference in Audit Logging for Attack Investigation. In Proceedings of the Network and Distributed Systems Security Symposium.", + "[112] Grafana Labs. 2014. Grafana: The Open Observability Platform. https://grafana.com/", + "[113] Van-Hoang Le and Hongyu Zhang. 2021. Log-Based Anomaly Detection without Log Parsing. 
In Proceedings of the 2021 36th IEEE/ACM International Conference on Automated Software Engineering. IEEE, 492-504." + ], + "bbox": [ + 90, + 119, + 909, + 909 + ], + "page_idx": 30 + }, + { + "type": "header", + "text": "Deep Learning-based Intrusion Detection Systems: A Survey", + "bbox": [ + 90, + 83, + 502, + 95 + ], + "page_idx": 30 + }, + { + "type": "page_number", + "text": "1:31", + "bbox": [ + 876, + 84, + 905, + 95 + ], + "page_idx": 30 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 479, + 933, + 905, + 947 + ], + "page_idx": 30 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[114] Van-Hoang Le and Hongyu Zhang. 2023. Log Parsing with Prompt-Based Few-Shot Learning. In Proceedings of the 2023 IEEE/ACM 45th International Conference on Software Engineering. IEEE, 2438-2449.", + "[115] Kyu Hyung Lee, Xiangyu Zhang, and Dongyan Xu. 2013. High Accuracy Attack Provenance via Binary-based Execution Partition. In Proceedings of the Network and Distributed System Security Symposium, Vol. 16.", + "[116] Kyu Hyung Lee, Xiangyu Zhang, and Dongyan Xu. 2013. LogGC: Garbage Collecting Audit Log. In Proceedings of the 2013 ACM SIGSAC Conference on Computer and Communications Security. 1005-1016.", + "[117] Huanruo Li, Yunfei Guo, Shumin Huo, Hongchao Hu, and Penghao Sun. 2022. Defensive Deception Framework Against Reconnaissance Attacks in the Cloud with Deep Reinforcement Learning. Science China Information Sciences 65, 7 (2022), 170305.", + "[118] Jiawei Li, Ru Zhang, and Jianyi Liu. 2023. ConLBS: An Attack Investigation Approach Using Contrastive Learning with Behavior Sequence. Sensors 23, 24 (2023), 9881.", + "[119] Jiawei Li, Ru Zhang, and Jianyi Liu. 2023. ProvGRP: A Context-Aware Provenance Graph Reduction and Partition Approach for Facilitating Attack Investigation. 
Electronics 13, 1 (2023), 100.", + "[120] Shaofei Li, Feng Dong, Xusheng Xiao, Haoyu Wang, Fei Shao, Jiedong Chen, Yao Guo, Xiangqun Chen, and Ding Li. 2024. NodLink: An Online System for Fine-Grained APT Attack Detection and Investigation. In Proceedings of the Network and Distributed System Security Symposium.", + "[121] Teng Li, Jianfeng Ma, and Cong Sun. 2017. NetPro: Detecting Attacks in MANET Routing with Provenance and Verification. Science China Information Sciences 60, 11 (2017), 118101.", + "[122] Xiaoxiang Li, Xinyu Jiang, Hai Wan, and Xibin Zhao. 2025. TeRed: Normal Behavior-Based Efficient Provenance Graph Reduction for Large-Scale Attack Forensics. IEEE Transactions on Information Forensics and Security (2025).", + "[123] Xiaoyun Li, Hongyu Zhang, Van-Hoang Le, and Pengfei Chen. 2024. LogShrink: Effective Log Compression by Leveraging Commonality and Variability of Log Data. In Proceedings of the 46th IEEE/ACM International Conference on Software Engineering. 1-12.", + "[124] Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. 2022. Competition-Level Code Generation with AlphaCode. Science 378, 6624 (2022), 1092-1097.", + "[125] Yanjie Li, Zhen Xiang, Nathaniel D Bastian, Dawn Song, and Bo Li. 2024. IDS-Agent: An LLM Agent for Explainable Intrusion Detection in IoT Networks. In Proceedings of the NeurIPS 2024 Workshop on Open-World Agents.", + "[126] Yuanlin Li, Zhiwei Xu, Min Zhou, Hai Wan, and Xibin Zhao. 2024. Trident: Detecting SQL Injection Attacks via Abstract Syntax Tree-based Neural Network. In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering. 2225-2229.", + "[127] Zhenyuan Li, Qi Alfred Chen, Runqing Yang, Yan Chen, and Wei Ruan. 2021. Threat Detection and Investigation with System-Level Provenance Graphs: A Survey. Computers & Security 106, C (Jul 2021), 16 pages. 
https://doi.org/10.1016/j.cose.2021.102282", + "[128] Hung-Jen Liao, Chun-Hung Richard Lin, Ying-Chih Lin, and Kuang-Yuan Tung. 2013. Intrusion Detection System: A Comprehensive Review. Journal of Network and Computer Applications 36, 1 (2013), 16-24.", + "[129] Soo Yee Lim, Bogdan Stelea, Xueyuan Han, and Thomas Pasquier. 2021. Secure Namespaced Kernel Audit for Containers. In Proceedings of the ACM Symposium on Cloud Computing. 518-532.", + "[130] Qingwei Lin, Hongyu Zhang, Jian-Guang Lou, Yu Zhang, and Xuewei Chen. 2016. Log Clustering Based Problem Identification for Online Service Systems. In Proceedings of the International Conference on Software Engineering Companion. 102-111.", + "[131] Brian Lindauer. 2020. Insider Threat Test Dataset. (9 2020). https://doi.org/10.1184/R1/12841247.v1", + "[132] Guangrui Liu, Weizhe Zhang, Xinjie Li, Kaisheng Fan, and Shui Yu. 2022. VulnERGAN: A Backdoor Attack through Vulnerability Amplification against Machine Learning-Based Network Intrusion Detection Systems. Science China Information Sciences 65, 7 (2022), 170303.", + "[133] Jason Liu, Muhammad Adil Inam, Akul Goyal, Andy Riddle, Kim Westfall, and Adam Bates. 2025. What We Talk About When We Talk About Logs: Understanding the Effects of Dataset Quality on Endpoint Threat Detection Research. In Proceedings of the 2025 IEEE Symposium on Security and Privacy. IEEE, 112-129.", + "[134] Jian Liu, Junjie Yan, Zhengwei Jiang, Xuren Wang, and Jun Jiang. 2022. A Graph Learning Approach with Audit Records for Advanced Attack Investigation. In Proceedings of the IEEE Global Communications Conference. IEEE, 897-902.", + "[135] Jinyang Liu, Jieming Zhu, Shilin He, Pinjia He, Zibin Zheng, and Michael R Lyu. 2019. Logzip: Extracting Hidden Structures via Iterative Clustering for Log Compression. In Proceedings of the 2019 34th IEEE/ACM International Conference on Automated Software Engineering. 
IEEE, 863-873.", + "[136] Shuai Liu, Yiheng Pan, Kun Hong, Ruite Fei, Chenhao Lin, Qian Li, and Chao Shen. 2025. Backdoor Threats in Large Language Models—A Survey. Science China Information Sciences 68, 9 (2025), 1-34." + ], + "bbox": [ + 90, + 119, + 907, + 895 + ], + "page_idx": 31 + }, + { + "type": "page_number", + "text": "1:32", + "bbox": [ + 90, + 84, + 119, + 94 + ], + "page_idx": 31 + }, + { + "type": "header", + "text": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao", + "bbox": [ + 236, + 83, + 907, + 95 + ], + "page_idx": 31 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 86, + 933, + 512, + 945 + ], + "page_idx": 31 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[137] Yudong Liu, Xu Zhang, Shilin He, Hongyu Zhang, Liqun Li, Yu Kang, Yong Xu, Minghua Ma, Qingwei Lin, Yingnong Dang, et al. 2022. UniParser: A Unified Log Parser for Heterogeneous Log Data. In Proceedings of the ACM Web Conference. 1893-1901.", + "[138] Scott Lupton, Hironori Washizaki, Nobukazu Yoshioka, and Yoshiaki Fukazawa. 2021. Literature Review on Log Anomaly Detection Approaches Utilizing Online Parsing Methodology. In Proceedings of the 2021 28th Asia-Pacific Software Engineering Conference. 559-563. https://doi.org/10.1109/APSEC53868.2021.00068", + "[139] Mingqi Lv, HongZhe Gao, Xuebo Qiu, Tieming Chen, Tiantian Zhu, Jinyin Chen, and Shouling Ji. 2024. TREC: APT Tactic/Technique Recognition via Few-Shot Provenance Subgraph Learning. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security. 139-152.", + "[140] Yang Lv, Shaona Qin, Zifeng Zhu, Zhuocheng Yu, Shudong Li, and Weihong Han. 2022. A Review of Provenance Graph based APT Attack Detection: Applications and Developments. In Proceedings of the 2022 7th IEEE International Conference on Data Science in Cyberspace. 498-505. 
https://doi.org/10.1109/DSC55868.2022.00075", + "[141] Shiqing Ma, Juan Zhai, Yonghwi Kwon, Kyu Hyung Lee, Xiangyu Zhang, Gabriela Ciocarlie, Ashish Gehani, Vinod Yegneswaran, Dongyan Xu, and Somesh Jha. 2018. Kernel-Supported Cost-Effective Audit Logging for Causality Tracking. In Proceedings of the 2018 USENIX Annual Technical Conference. 241-254.", + "[142] Shiqing Ma, Juan Zhai, Fei Wang, Kyu Hyung Lee, Xiangyu Zhang, and Dongyan Xu. 2017. MPI: Multiple Perspective Attack Investigation with Semantic Aware Execution Partitioning. In Proceedings of the 26th USENIX Security Symposium. 1111-1128.", + "[143] Shiqing Ma, Xiangyu Zhang, and Dongyan Xu. 2016. ProTracer: Towards Practical Provenance Tracing by Alternating between Logging and Tainting. In Proceedings of the 23rd Annual Network and Distributed System Security Symposium.", + "[144] Pedro Manso, José Moura, and Carlos Serrão. 2019. SDN-Based Intrusion Detection System for Early Detection and Mitigation of DDoS Attacks. Information 10, 3 (2019), 106.", + "[145] Emaad Manzoor, Sadegh M Milajerdi, and Leman Akoglu. 2016. Fast Memory-Efficient Anomaly Detection in Streaming Heterogeneous Graphs. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1035-1044.", + "[146] Qinghua Mao, Xi Lin, Wenchao Xu, Yuxin Qi, Xiu Su, Gaolei Li, and Jianhua Li. 2025. FeCoGraph: Label-Aware Federated Graph Contrastive Learning for Few-Shot Network Intrusion Detection. IEEE Transactions on Information Forensics and Security (2025).", + "[147] Yuyi Mao, Changsheng You, Jun Zhang, Kaibin Huang, and Khaled B Letaief. 2017. A Survey on Mobile Edge Computing: The Communication Perspective. IEEE Communications Surveys and Tutorials 19, 4 (2017), 2322-2358.", + "[148] Mitch Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building A Large Annotated Corpus of English: The Penn Treebank. 
Computational Linguistics 19, 2 (1993), 313-330.", + "[149] Ariana Martino, Michael Iannelli, and Coleen Truong. 2023. Knowledge Injection to Counter Large Language Model (LLM) Hallucination. In European Semantic Web Conference. Springer, 182-185.", + "[150] Ines Martins, Joao S Resende, Patricia R Sousa, Simao Silva, Luis Antunes, and Joao Gama. 2022. Host-based IDS: A Review and Open Issues of An Anomaly Detection System in IoT. Future Generation Computer Systems 133 (2022), 95-113.", + "[151] Weibin Meng, Ying Liu, Yichen Zhu, Shenglin Zhang, Dan Pei, Yuqing Liu, Yihao Chen, Ruizhi Zhang, Shimin Tao, Pei Sun, et al. 2019. LogAnomaly: Unsupervised Detection of Sequential and Quantitative Anomalies in Unstructured Logs. In Proceedings of the International Joint Conference on Artificial Intelligence, Vol. 19. 4739-4745.", + "[152] Noor Michael, Jaron Mink, Jason Liu, Sneha Gaur, Wajih Ul Hassan, and Adam Bates. 2020. On the Forensic Validity of Approximated Audit Logs. In Proceedings of the 36th Annual Computer Security Applications Conference. 189-202.", + "[153] Microsoft. 2020. Event Tracing - Win32 apps. https://learn.microsoft.com/en-us/windows/win32/etw/event-tracing-portal.", + "[154] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. arXiv preprint arXiv:1301.3781 (2013).", + "[155] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed Representations of Words and Phrases and Their Compositionality. Advances in Neural Information Processing Systems 26 (2013).", + "[156] Sadegh M Milajerdi, Birhanu Eshete, Rigel Gjomemo, and VN Venkatakrishnan. 2019. Poirot: Aligning Attack Behavior with Kernel Audit Records for Cyber Threat Hunting. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. 1795-1812.", + "[157] Sadegh M Milajerdi, Rigel Gjomemo, Birhanu Eshete, Ramachandran Sekar, and VN Venkatakrishnan. 2019. 
Holmes: Real-time APT Detection through Correlation of Suspicious Information Flows. In Proceedings of the 2019 IEEE Symposium on Security and Privacy. IEEE, 1137-1152.", + "[158] Seyed Mohammad Mehdi Mirnajafizadeh, Ashwin Raam Sethuram, David Mohaisen, DaeHun Nyang, and Rhongho Jang. 2024. Enhancing Network Attack Detection with Distributed and In-Network Data Collection System. In Proceedings of the 33rd USENIX Security Symposium. 5161-5178." + ], + "bbox": [ + 90, + 119, + 907, + 909 + ], + "page_idx": 32 + }, + { + "type": "header", + "text": "Deep Learning-based Intrusion Detection Systems: A Survey", + "bbox": [ + 90, + 83, + 502, + 95 + ], + "page_idx": 32 + }, + { + "type": "page_number", + "text": "1:33", + "bbox": [ + 876, + 84, + 905, + 94 + ], + "page_idx": 32 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 479, + 933, + 905, + 945 + ], + "page_idx": 32 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[159] Yisroel Mirsky, Tomer Doitshman, Yuval Elovici, and Asaf Shabtai. 2018. Kitsune: An Ensemble of Autoencoders for Online Network Intrusion Detection. Proceedings of the Network and Distributed Systems Security Symposium (2018).", + "[160] Kunal Mukherjee and Murat Kantarcioglu. 2025. LLM-driven Provenance Forensics for Threat Investigation and Detection. arXiv preprint arXiv:2508.21323 (2025).", + "[161] Kunal Mukherjee, Joshua Wiedemeier, Tianhao Wang, James Wei, Feng Chen, Muhyun Kim, Murat Kantarcioglu, and Kangkook Jee. 2023. Evading Provenance-Based ML Detectors with Adversarial System Actions. In Proceedings of the 32nd USENIX Security Symposium. 1199-1216.", + "[162] Muhammad Hassan Nasir, Salman A Khan, Muhammad Mubashir Khan, and Mahawish Fatima. 2022. Swarm Intelligence Inspired Intrusion Detection Systems—A Systematic Literature Review. 
Computer Networks 205 (2022), 108708.", + "[163] Mostafa Nassar, Nirmeen A El-Bahnasawy, HossamEl-Din H Ahmed, Adel A Saleeb, and Fathi E Abd El-Samie. 2019. Network Intrusion Detection, Literature Review and Some Techniques Comparison. In Proceedings of the 2019 15th International Computer Engineering Conference. IEEE, 62-71.", + "[164] Alexander Tobias Neumann, Yue Yin, Sulayman Sowe, Stefan Decker, and Matthias Jarke. 2024. An LLM-Driven Chatbot in Higher Education for Databases and Information Systems. IEEE Transactions on Education (2024).", + "[165] Zhibin Ni, Pan Fan, Shengzhuo Dai, Bo Zhang, Hai Wan, and Xibin Zhao. 2025. FG-CIBGC: A Unified Framework for Fine-Grained and Class-Incremental Behavior Graph Classification. In Proceedings of the Web Conference.", + "[166] Weina Niu, Zhenqi Yu, Zimu Li, Beibei Li, Runzi Zhang, and Xiaosong Zhang. 2022. LogTracer: Efficient Anomaly Tracing Combining System Log Detection and Provenance Graph. In Proceedings of the IEEE Global Communications Conference. IEEE, 3356-3361.", + "[167] Christine Nussbaum, Sascha Frühholz, and Stefan R Schweinberger. 2025. Understanding Voice Naturalness. Trends in Cognitive Sciences (2025).", + "[168] Connected Papers. 2020. Connected Papers: A Visual Tool for Researchers. https://www.connectedpapers.com", + "[169] Nohil Park, Heeseung Kim, Che Hyun Lee, Jooyoung Choi, Jiheum Yeom, and Sungroh Yoon. 2025. NanoVoice: Efficient Speaker-Adaptive Text-to-Speech for Multiple Speakers. In Proceedings of the ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 1-5.", + "[170] Thomas Pasquier, Xueyuan Han, Mark Goldstein, Thomas Moyer, David Eyers, Margo Seltzer, and Jean Bacon. 2017. Practical Whole-System Provenance Capture. In Proceedings of the 2017 Symposium on Cloud Computing. 405-418.", + "[171] Igor Pavlov. 2001. LZMA SDK (Software Development Kit). 
https://www.7-zip.org/", + "[172] Cheng Peng, Xi Yang, Aokun Chen, Kaleb E Smith, Nima PourNejatian, Anthony B Costa, Cheryl Martin, Mona G Flores, Ying Zhang, Tanja Magoc, et al. 2023. A Study of Generative Large Language Model For Medical Research and Healthcare. NPJ Digital Medicine 6, 1 (2023), 210.", + "[173] Yihao Peng, Tongxin Zhang, Jieshao Lai, Yuxuan Zhang, Yiming Wu, Hai Wan, and Xibin Zhao. 2025. AutoLabel: Automated Fine-Grained Log Labeling for Cyber Attack Dataset Generation. In Proceedings of the 34th USENIX Security Symposium. 547-566.", + "[174] Prometheus. 2014. Prometheus - Monitoring System & Time Series Database. https://prometheus.io/", + "[175] Jiaxing Qi, Zhongzhi Luan, Shaohan Huang, Carol Fung, Hailong Yang, and Depei Qian. 2023. SpikeLog: Log-based Anomaly Detection via Potential-Assisted Spiking Neuron Network. IEEE Transactions on Knowledge and Data Engineering 36, 12 (2023), 9322-9335.", + "[176] Wei Qiao, Yebo Feng, Teng Li, Zhuo Ma, Yulong Shen, JianFeng Ma, and Yang Liu. 2025. Slot: Provenance-Driven APT Detection through Graph Reinforcement Learning. In Proceedings of the 2025 on ACM SIGSAC Conference on Computer and Communications Security.", + "[177] QuickLZ. 2006. QuickLZ: Fastest Compression Library. http://www.quicklz.com/", + "[178] Alec Radford. 2018. Improving Language Understanding by Generative Pre-Training. (2018).", + "[179] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the Limits of Transfer Learning with A Unified Text-to-Text Transformer. Journal of Machine Learning Research 21, 140 (2020), 1-67.", + "[180] Baishakhi Ray, Vincent Hellendoorn, Saheel Godhane, Zhaopeng Tu, Alberto Bacchelli, and Premkumar Devanbu. 2016. On the \"Naturalness\" of Buggy Code. In Proceedings of the 38th International Conference on Software Engineering. 428-439.", + "[181] Rebecca Bace and Peter Mell. 2001. Intrusion Detection Systems. 
National Institute of Standards and Technology, Special Publication (2001).", + "[182] Mati Ur Rehman, Hadi Ahmadi, and Wajih Ul Hassan. 2024. FLASH: A Comprehensive Approach to Intrusion Detection via Provenance Graph Representation Learning. In Proceedings of the 2024 IEEE Symposium on Security and Privacy. IEEE Computer Society, 139-139.", + "[183] Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019. FastSpeech: Fast, Robust and Controllable Text to Speech. Advances in Neural Information Processing Systems 32 (2019)." + ], + "bbox": [ + 90, + 119, + 907, + 909 + ], + "page_idx": 33 + }, + { + "type": "page_number", + "text": "1:34", + "bbox": [ + 92, + 84, + 119, + 94 + ], + "page_idx": 33 + }, + { + "type": "header", + "text": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao", + "bbox": [ + 236, + 83, + 907, + 95 + ], + "page_idx": 33 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 86, + 933, + 512, + 945 + ], + "page_idx": 33 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[184] Andy Riddle, Kim Westfall, and Adam Bates. 2023. Atlasv2: Atlas attack engagements, version 2. arXiv preprint arXiv:2401.01341 (2023).", + "[185] Malajah Roberts, Jonathan Anderson, William Delgado, Richard Johnson, and Lawrence Spencer. 2024. Extending Contextual Length and World Knowledge Generalization in Large Language Models. (2024).", + "[186] Kirk Rodrigues, Yu Luo, and Ding Yuan. 2021. CLP: Efficient and Scalable Search on Compressed Text Logs. In Proceedings of the 15th USENIX Symposium on Operating Systems Design and Implementation. 183-198.", + "[187] Ronald Rosenfeld. 2000. Two Decades of Statistical Language Modeling: Where Do We Go from Here? Proceedings of the IEEE 88, 8 (2000), 1270-1278.", + "[188] Tejaswini S and Azra Nasreen. 2021. Survey on Online Log Parsing. Regular issue (2021). 
https://api.semanticscholar.org/CorpusID:236861650", + "[189] Vijay Samuel. 2018. Monitoring Anything and Everything with Beats at eBay. (2018).", + "[190] Michael Schindler. 1999. SZIP Compression. http://www.compressconsult.com/szip/", + "[191] Frank Schwellinger. 2008. Ocamyd: A File (De-)Compressor Based on the DMC Algorithm. https://www.geocities.ws/ocamyd/", + "[192] Issam Sedki, Abdelwahab Hamou-Lhadj, Otmane Ait-Mohamed, and Mohammed A Shehab. 2022. An Effective Approach for Parsing Large Log Files. In Proceedings of the 2022 IEEE International Conference on Software Maintenance and Evolution. IEEE, 1-12.", + "[193] R Sekar, Hanke Kimm, and Rohit Aich. 2024. eAudit: A Fast, Scalable and Deployable Audit Data Collection System. In Proceedings of the IEEE Symposium on Security and Privacy. IEEE, 3571-3589.", + "[194] Julian Seward. 1996. bzip2: A High-Quality Data Compressor. http://www.bzip.org/", + "[195] Claude E Shannon. 1948. A Mathematical Theory of Communication. The Bell System Technical Journal 27, 3 (1948), 379-423.", + "[196] Claude E Shannon. 1951. The Redundancy of English. In Cybernetics; Transactions of the 7th Conference, New York: Josiah Macy, Jr. Foundation. 248-272.", + "[197] Madhukar Shrestha, Yonghyun Kim, Jeehyun Oh, Junghwan Rhee, Yung Ryn Choe, Fei Zuo, Myungah Park, and Gang Qian. 2023. ProvSec: Open Cybersecurity System Provenance Analysis Benchmark Dataset with Labels. International Journal of Networked and Distributed Computing 11, 2 (2023), 112-123.", + "[198] Rakesh Shrestha, Atefeh Omidkar, Sajjad Ahmadi Roudi, Robert Abbas, and Shiho Kim. 2021. Machine-Learning-Enabled Intrusion Detection System for Cellular Connected UAV Networks. Electronics 10, 13 (2021), 1549.", + "[199] Zhuoxue Song, Ziming Zhao, Fan Zhang, Gang Xiong, Guang Cheng, Xinjie Zhao, Shize Guo, and Binbin Chen. 2022. I²RNN: An Incremental and Interpretable Recurrent Neural Network for Encrypted Traffic Classification. 
IEEE Transactions on Dependable and Secure Computing (2022).", + "[200] Manolis Stamatogiannakis, Paul Groth, and Herbert Bos. 2015. Looking Inside the Black-Box: Capturing Data Provenance Using Dynamic Instrumentation. In Provenance and Annotation of Data and Processes: 5th International Provenance and Annotation Workshop, IPAW 2014, Cologne, Germany, June 9-13, 2014. Revised Selected Papers 5. Springer, 155-167.", + "[201] Branka Stojanovic, Katharina Hofer-Schmitz, and Ulrike Kleb. 2020. APT Datasets and Attack Modeling for Automated Detection Methods: A Review. Computers & Security 92 (2020), 101734. https://api.semanticscholar.org/CorpusID:213320542", + "[202] Hongbin Sun, Su Wang, Zhiliang Wang, Zheyu Jiang, Dongqi Han, and Jiahai Yang. 2024. AudiTrim: A Real-time, General, Efficient, and Low-overhead Data Compaction System for Intrusion Detection. In Proceedings of the 27th International Symposium on Research in Attacks, Intrusions and Defenses. 263-277.", + "[203] Alexey Svyatkovskiy, Shao Kun Deng, Shengyu Fu, and Neel Sundaresan. 2020. IntelliCode Compose: Code Generation Using Transformer. In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 1433-1443.", + "[204] Dan Tang, Yudong Yan, Chenjun Gao, Wei Liang, and Wenqiang Jin. 2023. LtRFT: Mitigate the Low-Rate Data Plane DDoS Attack with Learning-to-Rank Enabled Flow Tables. IEEE Transactions on Information Forensics and Security 18 (2023), 3143-3157.", + "[205] Yutao Tang, Ding Li, Zhichun Li, Mu Zhang, Kangkook Jee, Xusheng Xiao, Zhenyu Wu, Junghwan Rhee, Fengyuan Xu, and Qun Li. 2018. NodeMerge: Template Based Efficient Data Reduction for Big-Data Causality Analysis. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. 1324-1337.", + "[206] Joerg Thalheim, Pramod Bhatotia, and Christof Fetzer. 2016. Inspector: Data Provenance Using Intel Processor Trace (PT).
In Proceedings of the 2016 IEEE 36th International Conference on Distributed Computing Systems. IEEE, 25-34.", + "[207] Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. Lamda: Language Models for Dialog Applications. arXiv preprint arXiv:2201.08239 (2022).", + "[208] ThoughtWorks. 2004. Selenium RC. http://www.seleniumhq.org/projects/remote-control/" + ], + "bbox": [ + 90, + 119, + 907, + 909 + ], + "page_idx": 34 + }, + { + "type": "header", + "text": "Deep Learning-based Intrusion Detection Systems: A Survey", + "bbox": [ + 90, + 83, + 502, + 95 + ], + "page_idx": 34 + }, + { + "type": "page_number", + "text": "1:35", + "bbox": [ + 876, + 84, + 905, + 94 + ], + "page_idx": 34 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 479, + 933, + 905, + 945 + ], + "page_idx": 34 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[209] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and Efficient Foundation Language Models. arXiv preprint arXiv:2302.13971 (2023).", + "[210] Aqua Tracee. 2022. Runtime eBPF Threat Detection Engine.", + "[211] Devharsh Trivedi, Aymen Boudguiga, Nesrine Kaaniche, and Nikos Triandopoulos. 2023. SigML++: Supervised Log Anomaly with Probabilistic Polynomial Approximation. Cryptography 7, 4 (2023), 52.", + "[212] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. Advances in Neural Information Processing Systems 30 (2017).", + "[213] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, Yoshua Bengio, et al. 2017. Graph Attention Networks. 
arXiv preprint arXiv:1710.10903 (2017).", + "[214] Arthur Vervaet, Raja Chiky, and Mar Callau-Zori. 2021. USTEP: Unfixed Search Tree for Efficient Log Parsing. In Proceedings of the 2021 IEEE International Conference on Data Mining. IEEE, 659-668.", + "[215] David Wagner and Paolo Soto. 2002. Mimicry Attacks on Host-Based Intrusion Detection Systems. In Proceedings of the 9th ACM Conference on Computer and Communications Security. 255-264.", + "[216] Qi Wang, Wajih Ul Hassan, Ding Li, Kangkook Jee, Xiao Yu, Kexuan Zou, Junghwan Rhee, Zhengzhang Chen, Wei Cheng, Carl A Gunter, et al. 2020. You Are What You Do: Hunting Stealthy Malware via Data Provenance Analysis. In Proceedings of the Network and Distributed System Security Symposium.", + "[217] Rui Wang, Devin Gibson, Kirk Rodrigues, Yu Luo, Yun Zhang, Kaibo Wang, Yupeng Fu, Ting Chen, and Ding Yuan. 2024. $\\mu$Slope: High Compression and Fast Search on Semi-Structured Logs. In Proceedings of the 18th USENIX Symposium on Operating Systems Design and Implementation. 529-544.", + "[218] Ruihua Wang, Yihao Peng, Yilun Sun, Xuancheng Zhang, Hai Wan, and Xibin Zhao. 2023. TeSec: Accurate Server-Side Attack Investigation for Web Applications. In Proceedings of the 2023 IEEE Symposium on Security and Privacy. IEEE, 2799-2816.", + "[219] Su Wang, Zhiliang Wang, Tao Zhou, Hongbin Sun, Xia Yin, Dongqi Han, Han Zhang, Xingang Shi, and Jiahai Yang. 2022. threaTrace: Detecting and Tracing Host-Based Threats in Node Level Through Provenance Graph Learning. IEEE Transactions on Information Forensics and Security 17 (2022), 3972-3987.", + "[220] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022. Emergent Abilities of Large Language Models. arXiv preprint arXiv:2206.07682 (2022).", + "[221] Wei Wei, Sijin Chen, Cen Chen, Heshi Wang, Jing Liu, Zhongyao Cheng, and Xiaofeng Zou. 2024.
HEN: A Novel Hybrid Explainable Neural Network Based Framework for Robust Network Intrusion Detection. Science China Information Sciences 67, 7 (2024), 170304.", + "[222] Cong Wu, Jianfei Sun, Jing Chen, Mamoun Alazab, Yang Liu, and Yang Xiang. 2025. TCG-IDS: Robust Network Intrusion Detection via Temporal Contrastive Graph Learning. IEEE Transactions on Information Forensics and Security (2025).", + "[223] Weiheng Wu, Wei Qiao, Teng Li, Yebo Feng, Zhuo Ma, Jianfeng Ma, and Yang Liu. 2025. ProvX: Generating Counterfactual-Driven Attack Explanations for Provenance-Based Detection. arXiv preprint arXiv:2508.06073 (2025).", + "[224] Yafeng Wu, Yulai Xie, Xuelong Liao, Pan Zhou, Dan Feng, Lin Wu, Xuan Li, Avani Wildani, and Darrell Long. 2022. Paradise: Real-Time, Generalized, and Distributed Provenance-Based Intrusion Detection. IEEE Transactions on Dependable and Secure Computing 20, 2 (2022), 1624-1640.", + "[225] Yixuan Wu, Long Zhang, Lin Yang, Feng Yang, Linru Ma, Zhoumin Lu, and Wen Jiang. 2025. Intrusion Detection for Internet of Things: An Anchor Graph Clustering Approach. IEEE Transactions on Information Forensics and Security (2025).", + "[226] Tong Xiao, Zhe Quan, Zhi-Jie Wang, Kaiqi Zhao, Xiangke Liao, Huang Huang, Yunfei Du, and Kenli Li. 2023. LPV: A Log Parsing Framework Based on Vectorization. IEEE Transactions on Network and Service Management 20, 3 (2023), 2711-2725.", + "[227] Yulai Xie, Dan Feng, Yuchong Hu, Yan Li, Staunton Sample, and Darrell Long. 2018. Pagoda: A Hybrid Approach to Enable Efficient Real-Time Provenance Based Intrusion Detection in Big Data Environments. IEEE Transactions on Dependable and Secure Computing 17, 6 (2018), 1283-1296.", + "[228] Yulai Xie, Kiran-Kumar Muniswamy-Reddy, Darrell DE Long, Ahmed Amer, Dan Feng, and Zhipeng Tan. 2011. Compressing Provenance Graphs. 
In Proceedings of the 3rd USENIX Workshop on the Theory and Practice of Provenance.", + "[229] Junjielong Xu, Qiuai Fu, Zhouruixing Zhu, Yutong Cheng, Zhijing Li, Yuchi Ma, and Pinjia He. 2023. Hue: A User-Adaptive Parser for Hybrid Logs. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 413-424.", + "[230] Jiacen Xu, Xiaokui Shu, and Zhou Li. 2024. Understanding and Bridging the Gap between Unsupervised Network Representation Learning and Security Analytics. In Proceedings of the 2024 IEEE Symposium on Security and Privacy. IEEE, 3590-3608." + ], + "bbox": [ + 90, + 119, + 907, + 908 + ], + "page_idx": 35 + }, + { + "type": "page_number", + "text": "1:36", + "bbox": [ + 90, + 84, + 119, + 94 + ], + "page_idx": 35 + }, + { + "type": "header", + "text": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao", + "bbox": [ + 236, + 83, + 907, + 95 + ], + "page_idx": 35 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 86, + 933, + 512, + 945 + ], + "page_idx": 35 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[231] Wei Xu, Ling Huang, Armando Fox, David Patterson, and Michael I Jordan. 2009. Detecting Large-scale System Problems by Mining Console Logs. In Proceedings of the ACM SIGOPS 22nd Symposium on Operating Systems Principles. 117-132.", + "[232] Zhiqiang Xu, Pengcheng Fang, Changlin Liu, Xusheng Xiao, Yu Wen, and Dan Meng. 2022. DepComm: Graph Summarization on System Audit Logs for Attack Investigation. In Proceedings of the 2022 IEEE Symposium on Security and Privacy. IEEE, 540-557.", + "[233] Zhiwei Xu, Shaohua Qiang, Dinghong Song, Min Zhou, Hai Wan, Xibin Zhao, Ping Luo, and Hongyu Zhang. 2024. DSFM: Enhancing Functional Code Clone Detection with Deep Subtree Interactions.
In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering. 1-12.", + "[234] Zhang Xu, Zhenyu Wu, Zhichun Li, Kangkook Jee, Junghwan Rhee, Xusheng Xiao, Fengyuan Xu, Haining Wang, and Guofei Jiang. 2016. High Fidelity Data Reduction for Big Data Security Dependency Analyses. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. 504-516.", + "[235] Zhiwei Xu, Min Zhou, Xibin Zhao, Yang Chen, Xi Cheng, and Hongyu Zhang. 2023. xASTNN: Improved Code Representations for Industrial Practice. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 1727-1738.", + "[236] Yu Xue, Bernard-marie Onzo, and Ferrante Neri. 2021. Intrusion Detection System Based on an Updated ANN Model. In Advances in Swarm Intelligence: 12th International Conference, ICSI 2021, Qingdao, China, July 17-21, 2021, Proceedings, Part II 12. Springer, 472-479.", + "[237] Fan Yang, Jiacen Xu, Chunlin Xiong, Zhou Li, and Kehuan Zhang. 2023. ProGrapher: An Anomaly Detection System based on Provenance Graph Embedding. In Proceedings of the 32nd USENIX Security Symposium. 4355-4372.", + "[238] Lin Yang, Junjie Chen, Zan Wang, Weijing Wang, Jiajun Jiang, Xuyuan Dong, and Wenbin Zhang. 2021. Semi-Supervised Log-Based Anomaly Detection via Probabilistic Label Estimation. In Proceedings of the 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). IEEE, 1448-1460.", + "[239] Runqing Yang, Shiqing Ma, Haitao Xu, Xiangyu Zhang, and Yan Chen. 2020. UIScope: Accurate, Instrumentation-free, and Visible Attack Investigation for GUI Applications. In Proceedings of the Network and Distributed Systems Security Symposium.", + "[240] Zhaohui Yang, Wei Xu, Le Liang, Yuanhao Cui, Zhijin Qin, and Mérouane Debbah. 2025. On Privacy, Security, and Trustworthiness in Distributed Wireless Large AI Models.
Science China Information Sciences 68, 7 (2025), 1-15.", + "[241] Jia-Yu Yao, Kun-Peng Ning, Zhen-Hui Liu, Mu-Nan Ning, Yu-Yang Liu, and Li Yuan. 2023. LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples. arXiv preprint arXiv:2310.01469 (2023).", + "[242] Kundi Yao, Heng Li, Weiyi Shang, and Ahmed E Hassan. 2020. A Study of the Performance of General Compressors on Log Files. Empirical Software Engineering 25 (2020), 3043-3085.", + "[243] Kundi Yao, Mohammed Sayagh, Weiyi Shang, and Ahmed E Hassan. 2021. Improving State-of-the-Art Compression Techniques for Log Management Tools. IEEE Transactions on Software Engineering 48, 8 (2021), 2748-2760.", + "[244] Yifan Yao, Jinhao Duan, Kaidi Xu, Yuanfang Cai, Zhibo Sun, and Yue Zhang. 2024. A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and the Ugly. High-Confidence Computing (2024), 100211.", + "[245] Heng Yin, Dawn Song, Manuel Egele, Christopher Kruegel, and Engin Kirda. 2007. Panorama: Capturing System-Wide Information Flow for Malware Detection and Analysis. In Proceedings of the 14th ACM Conference on Computer and Communications Security. 116-127.", + "[246] Kun Yin, Meng Yan, Ling Xu, Zhou Xu, Zhao Li, Dan Yang, and Xiaohong Zhang. 2020. Improving Log-Based Anomaly Detection with Component-Aware Analysis. In Proceedings of the 2020 IEEE International Conference on Software Maintenance and Evolution. IEEE, 667-671.", + "[247] Guangba Yu, Pengfei Chen, Pairui Li, Tianjun Weng, Haibing Zheng, Yuetang Deng, and Zibin Zheng. 2023. LogReducer: Identify and Reduce Log Hotspots in Kernel on the Fly. In Proceedings of the 2023 IEEE/ACM 45th International Conference on Software Engineering. IEEE, 1763-1775.", + "[248] Le Yu, Shiqing Ma, Zhuo Zhang, Guanhong Tao, Xiangyu Zhang, Dongyan Xu, Vincent E Urias, Han Wei Lin, Gabriela F Ciocarlie, Vinod Yegneswaran, et al. 2021.
ALchemist: Fusing Application and Audit Logs for Precise Attack Provenance without Instrumentation. In Proceedings of the Network and Distributed System Security Symposium.", + "[249] Siyu Yu, Yifan Wu, Ying Li, and Pinjia He. 2024. Unlocking the Power of Numbers: Log Compression via Numeric Token Parsing. In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering. 919-930.", + "[250] Jun Zeng, Xiang Wang, Jiahao Liu, Yinfang Chen, Zhenkai Liang, Tat-Seng Chua, and Zheng Leong Chua. 2022. ShadeWatcher: Recommendation-Guided Cyber Threat Analysis Using System Audit Records. In Proceedings of the 2022 IEEE Symposium on Security and Privacy. IEEE, 489-506.", + "[251] Chao Zha, Zhiyu Wang, Yifei Fan, Bing Bai, Yinjie Zhang, Sainan Shi, and Ruyun Zhang. 2025. A-NIDS: Adaptive Network Intrusion Detection System based on Clustering and Stacked CTGAN. IEEE Transactions on Information Forensics and Security (2025)." + ], + "bbox": [ + 90, + 119, + 907, + 909 + ], + "page_idx": 36 + }, + { + "type": "header", + "text": "Deep Learning-based Intrusion Detection Systems: A Survey", + "bbox": [ + 90, + 83, + 502, + 95 + ], + "page_idx": 36 + }, + { + "type": "page_number", + "text": "1:37", + "bbox": [ + 876, + 84, + 905, + 94 + ], + "page_idx": 36 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025.", + "bbox": [ + 479, + 933, + 905, + 947 + ], + "page_idx": 36 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[252] Bo Zhang, Yansong Gao, Changlong Yu, Boyu Kuang, Zhi Zhang, Hyoungshick Kim, and Anmin Fu. 2025. TAPAS: An Efficient Online APT Detection with Task-guided Process Provenance Graph Segmentation and Analysis. In Proceedings of the USENIX Security Symposium. 607-624.", + "[253] Pei Zhang, Fangzhou He, Han Zhang, Jiankun Hu, Xiaohong Huang, Jilong Wang, Xia Yin, Huahong Zhu, and Yahui Li. 2023.
Real-Time Malicious Traffic Detection with Online Isolation Forest over SD-WAN. IEEE Transactions on Information Forensics and Security 18 (2023), 2076-2090.", + "[254] Shenglin Zhang, Yuhe Ji, Jiaqi Luan, Xiaohui Nie, Ziang Chen, Minghua Ma, Yongqian Sun, and Dan Pei. 2024. End-to-End Automl for Unsupervised Log Anomaly Detection. In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering. 1680–1692.", + "[255] Tianzhu Zhang, Han Qiu, Gabriele Castellano, Myriana Rifai, Chung Shue Chen, and Fabio Pianese. 2023. System Log Parsing: A Survey. IEEE Transactions on Knowledge and Data Engineering 35, 8 (2023), 8596-8614. https://doi.org/10.1109/TKDE.2022.3222417", + "[256] Tianye Zhang, Xumeng Wang, Zongzhuang Li, Fangzhou Guo, Yuxin Ma, and Wei Chen. 2017. A Survey of Network Anomaly Visualization. Science China Information Sciences 60, 12 (2017), 121101.", + "[257] Xu Zhang, Yong Xu, Qingwei Lin, Bo Qiao, Hongyu Zhang, Yingnong Dang, Chunyu Xie, Xinsheng Yang, Qian Cheng, Ze Li, et al. 2019. Robust Log-Based Anomaly Detection on Unstable Log Data. In Proceedings of the 2019 27th ACM joint meeting on European software engineering conference and symposium on the foundations of software engineering. 807-817.", + "[258] Huaqin Zhao, Zhengliang Liu, Zihao Wu, Yiwei Li, Tianze Yang, Peng Shu, Shaochen Xu, Haixing Dai, Lin Zhao, Gengchen Mai, et al. 2024. Revolutionizing Finance with LLMs: An Overview of Applications and Insights. arXiv preprint arXiv:2401.11641 (2024).", + "[259] Jianjin Zhao, Qi Li, Zewei Han, Junsong Fu, Guoshun Nan, Meng Shen, and Bharat K Bhargava. 2024. ReTrial: Robust Encrypted Malicious Traffic Detection via Discriminative Relation Incorporation and Misleading Relation Correction. IEEE Transactions on Information Forensics and Security (2024).", + "[260] Ruijie Zhao, Xianwen Deng, Zhicong Yan, Jun Ma, Zhi Xue, and Yijun Wang. 2022. 
MT-FlowFormer: A Semi-Supervised Flow Transformer for Encrypted Traffic Classification. In Proceedings of the 28th ACM SIGKDD conference on knowledge discovery and data mining. 2576-2584.", + "[261] Ying Zhao, FangFang Zhou, XiaoPing Fan, Xing Liang, and YongGang Liu. 2013. IDSRadar: A Real-Time Visualization Framework for IDS Alerts. Science China Information Sciences 56, 8 (2013), 1-12.", + "[262] Ziming Zhao, Zhaoxuan Li, Jialun Jiang, Fengyuan Yu, Fan Zhang, Congyuan Xu, Xinjie Zhao, Rui Zhang, and Shize Guo. 2022. ERNN: Error-Resilient RNN for Encrypted Traffic Detection Towards Network-Induced Phenomena. IEEE Transactions on Dependable and Secure Computing (2022).", + "[263] Ziming Zhao, Zhuotao Liu, Huan Chen, Fan Zhang, Zhuoxue Song, and Zhaoxuan Li. 2024. Effective DDoS Mitigation via ML-Driven In-Network Traffic Shaping. IEEE Transactions on Dependable and Secure Computing 21, 4 (2024), 4271-4289.", + "[264] Ying Zhong, Zhiliang Wang, Xingang Shi, Jiahai Yang, and Keqin Li. 2024. RFG-HELAD: A Robust Fine-Grained Network Traffic Anomaly Detection Model Based on Heterogeneous Ensemble Learning. IEEE Transactions on Information Forensics and Security (2024).", + "[265] Junwei Zhou, Shaowen Ying, Shulan Wang, Dongdong Zhao, Jianwen Xiang, Kaitai Liang, and Peng Liu. 2025. LogDLR: Unsupervised Cross-System Log Anomaly Detection Through Domain-Invariant Latent Representation. IEEE Transactions on Dependable and Secure Computing (2025).", + "[266] Jieming Zhu, Shilin He, Pinjia He, Jinyang Liu, and Michael R Lyu. 2023. Loghub: A Large Collection of System Log Datasets for AI-Driven Log Analytics. In Proceedings of the 2023 IEEE 34th International Symposium on Software Reliability Engineering. IEEE, 355-366.", + "[267] Tiantian Zhu, Jiayu Wang, Linqi Ruan, Chunlin Xiong, Jinkai Yu, Yaosheng Li, Yan Chen, Mingqi Lv, and Tieming Chen. 2021. General, Efficient, and Real-Time Data Compaction Strategy for APT Forensic Analysis. 
IEEE Transactions on Information Forensics and Security 16 (2021), 3312-3325.", + "[268] Tiantian Zhu, Jinkai Yu, Chunlin Xiong, Wenrui Cheng, Qixuan Yuan, Jie Ying, Tieming Chen, Jiabo Zhang, Mingqi Lv, Yan Chen, et al. 2023. APTSHIELD: A Stable, Efficient and Real-time APT Detection System for Linux Hosts. IEEE Transactions on Dependable and Secure Computing 20, 6 (2023), 5247-5264.", + "[269] Yao Zhu, Zhenyuan Li, Yangyang Wei, and Shouling Ji. 2025. The Case for Learned Provenance-based System Behavior Baseline. In Forty-second International Conference on Machine Learning.", + "[270] Michael Zipperle, Florian Gottwalt, Elizabeth Chang, and Tharam S. Dillon. 2022. Provenance-based Intrusion Detection Systems: A Survey. ACM Computing Surveys 55 (2022), 1-36. https://api.semanticscholar.org/CorpusID:249579087" + ], + "bbox": [ + 90, + 119, + 907, + 879 + ], + "page_idx": 37 + }, + { + "type": "page_number", + "text": "1:38", + "bbox": [ + 90, + 84, + 119, + 94 + ], + "page_idx": 37 + }, + { + "type": "header", + "text": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao", + "bbox": [ + 236, + 83, + 907, + 95 + ], + "page_idx": 37 + }, + { + "type": "footer", + "text": "J. ACM, Vol. 1, No. 1, Article 1.
Publication date: October 2025.", + "bbox": [ + 86, + 933, + 512, + 945 + ], + "page_idx": 37 + } +] \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07839/e60cb9ee-e216-46b4-a879-cab7695d37bd_model.json b/data/2025/2504_07xxx/2504.07839/e60cb9ee-e216-46b4-a879-cab7695d37bd_model.json new file mode 100644 index 0000000000000000000000000000000000000000..beeb97dc5912cf4ec0d629c6dab3f3990b2d1a9c --- /dev/null +++ b/data/2025/2504_07xxx/2504.07839/e60cb9ee-e216-46b4-a879-cab7695d37bd_model.json @@ -0,0 +1,7118 @@ +[ + [ + { + "type": "aside_text", + "bbox": [ + 0.031, + 0.256, + 0.074, + 0.741 + ], + "angle": 270, + "content": "arXiv:2504.07839v3 [cs.CR] 13 Oct 2025" + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.114, + 0.904, + 0.137 + ], + "angle": 0, + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.15, + 0.908, + 0.186 + ], + "angle": 0, + "content": "ZHIWEI XU, YUJUAN WU, SHIHENG WANG, JIABAO GAO, TIAN QIU, ZIQI WANG, HAI WAN, and XIBIN ZHAO*, KLISS, BNRist, School of Software, Tsinghua University, China" + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.194, + 0.911, + 0.346 + ], + "angle": 0, + "content": "Intrusion Detection Systems (IDS) have long been a hot topic in the cybersecurity community. In recent years, with the introduction of deep learning (DL) techniques, IDS have made great progress due to their increasing generalizability. The rationale behind this is that by learning the underlying patterns of known system behaviors, IDS detection can be generalized to intrusions that exploit zero-day vulnerabilities. In this survey, we refer to this type of IDS as DL-based IDS (DL-IDS). From the perspective of DL, this survey systematically reviews all the stages of DL-IDS, including data collection, log storage, log parsing, graph summarization, attack detection, and attack investigation. 
To accommodate current researchers, a section describing the publicly available benchmark datasets is included. This survey further discusses current challenges and potential future research directions, aiming to help researchers understand the basic ideas and visions of DL-IDS research, as well as to motivate their research interests." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.353, + 0.908, + 0.385 + ], + "angle": 0, + "content": "CCS Concepts: \\(\\cdot\\) Security and privacy \\(\\rightarrow\\) Intrusion detection systems; \\(\\cdot\\) Computing methodologies \\(\\rightarrow\\) Machine learning; \\(\\cdot\\) General and reference \\(\\rightarrow\\) Surveys and overviews." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.39, + 0.744, + 0.406 + ], + "angle": 0, + "content": "Additional Key Words and Phrases: Intrusion detection systems, deep learning, survey" + }, + { + "type": "title", + "bbox": [ + 0.089, + 0.411, + 0.297, + 0.424 + ], + "angle": 0, + "content": "ACM Reference Format:" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.425, + 0.91, + 0.457 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao. 2025. Deep Learning-based Intrusion Detection Systems: A Survey. J. ACM 1, 1, Article 1 (October 2025), 38 pages." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.469, + 0.287, + 0.484 + ], + "angle": 0, + "content": "1 INTRODUCTION" + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.49, + 0.91, + 0.59 + ], + "angle": 0, + "content": "The promising Internet of Everything connects people, processes, data, and things through the Internet [51], bringing convenience and efficiency to the world. Yet its inevitable security vulnerabilities could be exploited by deliberate attackers. With increasingly sophisticated attack methods such as Advanced Persistent Threat (APT), the attackers are in a threatening position to sabotage network systems or steal sensitive data. 
The detection of intrusions, particularly based on DL, has consequently been a prominent topic in the cybersecurity community." + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.59, + 0.911, + 0.723 + ], + "angle": 0, + "content": "The automated system for detecting intrusions is known as IDS. The limitations of IDS may result in terrible damage to enterprises. One example is the recent Colonial Pipeline Ransomware Attack [16]. In April 2021, the hacking group DarkSide launched a ransomware attack on Colonial Pipeline, the biggest oil pipeline company in the United States, using an unused VPN account. Due to this attack, 5,500 miles of transportation pipelines were forced to shut down, affecting nearly \\(45\\%\\) of the fuel supply on the Eastern Coast. The Colonial Pipeline paid $4.4 million ransom money, in addition to the theft of over 100 GB of data. If the malware intrusion can be detected in time, the influence of this attack can be greatly mitigated or even eliminated." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.736, + 0.562, + 0.752 + ], + "angle": 0, + "content": "1.1 Tough but Bright Intrusion Detection System" + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.756, + 0.908, + 0.791 + ], + "angle": 0, + "content": "IDS have been increasingly challenged to effectively deal with intrusions for decades. It is noted in Figure 1(a) that the number of \\(\\mathrm{CVE}^1\\) records has presented an accelerating uptrend, especially" + }, + { + "type": "page_footnote", + "bbox": [ + 0.088, + 0.798, + 0.369, + 0.812 + ], + "angle": 0, + "content": "*Xibin Zhao is the corresponding author." + }, + { + "type": "page_footnote", + "bbox": [ + 0.088, + 0.812, + 0.907, + 0.84 + ], + "angle": 0, + "content": "1Common Vulnerabilities and Exposures (CVE) is a security project for security information sharing and vulnerability management. CVE is a publicly accessible database where each vulnerability has a common name and a unique identifier." 
+ }, + { + "type": "list", + "bbox": [ + 0.088, + 0.798, + 0.907, + 0.84 + ], + "angle": 0, + "content": null + }, + { + "type": "page_footnote", + "bbox": [ + 0.088, + 0.849, + 0.908, + 0.877 + ], + "angle": 0, + "content": "Authors' address: Zhiwei Xu; Yujuan Wu; Shiheng Wang; Jiabao Gao; Tian Qiu; Ziqi Wang; Hai Wan; Xibin Zhao, KLISS, BNRist, School of Software, Tsinghua University, Beijing, China, zxb@tsinghua.edu.cn." + }, + { + "type": "footer", + "bbox": [ + 0.089, + 0.887, + 0.339, + 0.9 + ], + "angle": 0, + "content": "2025.ACM 0004-5411/2025/10-ART1" + }, + { + "type": "footer", + "bbox": [ + 0.089, + 0.901, + 0.352, + 0.915 + ], + "angle": 0, + "content": "https://doi.org/XXXXXXXXXXXXXXXXXX" + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.934, + 0.908, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.092, + 0.085, + 0.114, + 0.096 + ], + "angle": 0, + "content": "1:2" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.083, + 0.908, + 0.098 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + }, + { + "type": "image", + "bbox": [ + 0.097, + 0.141, + 0.486, + 0.298 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.163, + 0.31, + 0.446, + 0.326 + ], + "angle": 0, + "content": "(a) Trend of CVE records and IDS papers." + }, + { + "type": "image", + "bbox": [ + 0.535, + 0.131, + 0.891, + 0.289 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.575, + 0.31, + 0.846, + 0.325 + ], + "angle": 0, + "content": "(b) Category of CNNVD vulnerabilities." + }, + { + "type": "image_caption", + "bbox": [ + 0.378, + 0.347, + 0.618, + 0.362 + ], + "angle": 0, + "content": "Fig. 1. Recent situation of IDS." 
+ }, + { + "type": "text", + "bbox": [ + 0.087, + 0.387, + 0.909, + 0.469 + ], + "angle": 0, + "content": "in 2016, which saw a sharp rise. After 2016, the number of CVE records kept growing rapidly, reaching around 30,000 in 2024. Besides, according to the \\(\\mathrm{CNNVD}^2\\) report shown in Figure 1(b), we can observe that almost all (i.e., \\(97.2\\%\\)) vulnerabilities are medium risk or above, with high and critical risk accounting for \\(40\\%\\) of them. The growing number of vulnerabilities and the large percentage of high-risk vulnerabilities both reveal the tough situation faced by IDS." + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.47, + 0.911, + 0.619 + ], + "angle": 0, + "content": "Nevertheless, an interesting observation from Figure 1(a) is that, alongside the number of CVE records, DL-IDS papers also started to emerge in 2016, and their number grew year by year subsequently. We can notably find that the growth trend of DL-IDS papers is nearly the same as that of CVE records. A plausible reason is that DL is an effective way for IDS to cope with their tough situation. Borrowing the strong generalizability of DL techniques, DL-IDS detection can be extended to zero-day intrusions that are almost impossible to detect with traditional IDS. Some studies [219, 237, 250] support this speculation. In their experiments, DL-IDS are all reported to achieve over \\(90\\%\\) detection accuracy, while traditional IDS sometimes reach only around \\(50\\%\\) detection accuracy." + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.619, + 0.909, + 0.721 + ], + "angle": 0, + "content": "The IDS future is not only tough but also bright with the aid of DL - it is evident that the growth in the number of IDS papers primarily comes from those based on DL techniques. The proportion of DL-IDS papers rises from about \\(0\\%\\) in 2016 to a very high \\(65.7\\%\\) in 2024.
This phenomenon reflects the great interests and visions of the cybersecurity community in DL-IDS. To date, the DL-IDS development has almost reached a decade, and thus, it is time, and also essential, to revisit how DL and IDS interact, identify emerging trends, and guide future research directions." + }, + { + "type": "title", + "bbox": [ + 0.09, + 0.732, + 0.434, + 0.748 + ], + "angle": 0, + "content": "1.2 Related Surveys and Our Scope" + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.752, + 0.91, + 0.837 + ], + "angle": 0, + "content": "Unfortunately, none of the related surveys in the last decade have systematically investigated DL-IDS. On one hand, some related surveys may only focus on a few parts of DL-IDS, such as log parsers [138, 188, 255], datasets [201], attack modeling [10, 201], and specific DL technique type [17]. On the other hand, while several surveys [21, 83, 96, 105, 127, 128, 140, 150, 162, 163, 270] involve some DL-based approaches, they did not review DL-IDS from the perspective of DL particularly." + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.844, + 0.908, + 0.879 + ], + "angle": 0, + "content": "Partial Investigation for DL-IDS. The surveys [10, 138, 188, 201, 255] are the typical example papers describing only a few parts of DL-IDS. Among them, Adel et al. [10] mainly studied various" + }, + { + "type": "page_footnote", + "bbox": [ + 0.088, + 0.887, + 0.908, + 0.916 + ], + "angle": 0, + "content": "2Chinese National Vulnerability Database (CNNVD) is a Chinese national database that catalogs security vulnerabilities in software and hardware products. CNNVD also provides unique identifiers and descriptions similar to CVE." + }, + { + "type": "footer", + "bbox": [ + 0.088, + 0.934, + 0.515, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ], + [ + { + "type": "header", + "bbox": [ + 0.091, + 0.083, + 0.505, + 0.097 + ], + "angle": 0, + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + }, + { + "type": "page_number", + "bbox": [ + 0.885, + 0.085, + 0.908, + 0.096 + ], + "angle": 0, + "content": "1:3" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.118, + 0.911, + 0.253 + ], + "angle": 0, + "content": "techniques and solutions that were tailored to APT attacks, as well as discussed where to make the APT detection framework smart. Scott et al. [138] and Tejaswini et al. [188] both discussed online log parsers and their applications for anomaly detection. Branka et al. [201] reviewed APT datasets and their creation, along with feature engineering in attack modeling. Zhang et al. [255] created an exhaustive taxonomy of system log parsers and empirically analyzed the critical performance and operational features of 17 open-source log parsers. Tristan et al. [17] focused on the applications of graph neural networks (GNNs) to IDS. For DL-IDS, all the above surveys are insufficient to advance research understanding or provide theoretical suggestions." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.261, + 0.911, + 0.394 + ], + "angle": 0, + "content": "Different Perspectives from DL-IDS. Another group of existing surveys involved DL-IDS but studied them from other perspectives [4, 21, 83, 96, 105, 127, 128, 140, 150, 162, 163, 270]. Specifically, the surveys [105, 128] aimed to give an elaborate picture of IDS and comprehensively explained methods from signature checking to anomaly detection algorithms. Originating from log data, the survey [83] presented a detailed overview of automated log analysis for reliability engineering and introduced three tasks including anomaly detection, failure prediction, and failure diagnosis. In survey [162], Nasir et al. 
explored the efficacy of swarm intelligence on IDS and highlighted the corresponding challenges in multi-objective IDS problems." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.394, + 0.911, + 0.526 + ], + "angle": 0, + "content": "Additionally, data types significantly inspire and shape the related surveys, whose categories include host-based IDS (HIDS) [21, 127, 140, 150, 270] and network-based IDS (NIDS) [4, 163]. Bridges et al. [21] focused on IDS leveraging host data for the enterprise network. Martins et al. [150] brought the HIDS concept to the Internet of Things. As a representative form of data in HIDS, the provenance graph [127, 140, 270] and its reduction techniques [96] were also extensively studied in survey literature. In NIDS, Nassar et al. [163] studied the techniques of network intrusion detection, especially those with machine learning (ML). Ahmad et al. [4] further incorporated ML and DL into their NIDS survey and studied the downstream learning methods in detail." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.527, + 0.908, + 0.56 + ], + "angle": 0, + "content": "The above surveys, however, lack investigation and discussion about DL-IDS. DL techniques are only what they cover or involve, rather than the primary focus of their research." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.569, + 0.911, + 0.653 + ], + "angle": 0, + "content": "Scope of Our Survey. Our work distinguishes itself from the related surveys by providing a comprehensive literature review of DL-IDS. From the perspective of DL, our survey elaborates on a common workflow of DL-IDS and introduces the corresponding taxonomies of all modules within this workflow. Moreover, our survey discusses the possible challenges and research visions for DL-IDS, which include many DL-related issues that have not yet been studied by the existing surveys." 
+ }, + { + "type": "title", + "bbox": [ + 0.09, + 0.666, + 0.442, + 0.682 + ], + "angle": 0, + "content": "1.3 Contributions and Organization" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.687, + 0.596, + 0.703 + ], + "angle": 0, + "content": "In summary, this survey makes the following contributions:" + }, + { + "type": "text", + "bbox": [ + 0.12, + 0.707, + 0.905, + 0.754 + ], + "angle": 0, + "content": "- Realizing that IDS has made significant progress with the aid of DL over the last decade, we present a thorough survey for DL-IDS, formalizing its definition and clarifying its location among other types of IDS." + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.757, + 0.905, + 0.804 + ], + "angle": 0, + "content": "- We outline the common workflow for DL-IDS, consisting of the data management stage and intrusion detection stage. We further systematically illustrate the research advances in all modules of this workflow and innovatively taxonomize the papers based on DL techniques" + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.807, + 0.905, + 0.853 + ], + "angle": 0, + "content": "- From the perspective of DL, we discuss the potential challenges and future directions for DL-IDS, especially highlighting the ones unique to DL-IDS for accommodating current researchers." + }, + { + "type": "list", + "bbox": [ + 0.12, + 0.707, + 0.905, + 0.853 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.866, + 0.911, + 0.916 + ], + "angle": 0, + "content": "Survey Structure. Section 2 introduces the survey methodology of this work. Section 3 describes the background knowledge about DL-IDS. Section 4 and Section 5 elaborate the recent research trends on data management stage and intrusion detection stage, respectively. Section 6 illustrates" + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.934, + 0.908, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.092, + 0.085, + 0.115, + 0.096 + ], + "angle": 0, + "content": "1:4" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.083, + 0.908, + 0.098 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + }, + { + "type": "image", + "bbox": [ + 0.116, + 0.126, + 0.589, + 0.282 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.196, + 0.292, + 0.506, + 0.307 + ], + "angle": 0, + "content": "Fig. 2. Source distribution of references." + }, + { + "type": "image", + "bbox": [ + 0.624, + 0.131, + 0.886, + 0.272 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.673, + 0.292, + 0.833, + 0.308 + ], + "angle": 0, + "content": "Fig. 3. Types of IDS." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.34, + 0.908, + 0.374 + ], + "angle": 0, + "content": "the benchmark datasets and their feature dimensions. Section 7 discusses the visions and challenges for future research. Lastly, the conclusion is presented in Section 8." + }, + { + "type": "title", + "bbox": [ + 0.089, + 0.388, + 0.377, + 0.403 + ], + "angle": 0, + "content": "2 SURVEY METHODOLOGY" + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.409, + 0.911, + 0.591 + ], + "angle": 0, + "content": "To start our literature review, we selected several popular literature databases, including Web of Science [12], IEEE Xplore [95], and Scopus [50], as the search engines. For search keywords, we started from generalized terms associated with DL-IDS, such as intrusion detection system, attack investigation, anomaly detection, threat detection, Advanced Persistent Threats, data provenance analysis, forensic analysis, causality analysis, log collection, log compression, log parsing, log storage, and log summarization. 
Then, we employed Connected Papers [168], a visual tool that assists researchers in finding relevant academic papers, to ensure that we did not overlook typical related literature. Since the retrieved literature is numerous and rather generalized for the DL-IDS scope, we carefully checked the topics and prioritized only highly related academic papers. Finally, all these papers were filtered based on the impact factors of their published journals or academic conferences, leaving us a total of 131 papers." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.592, + 0.91, + 0.675 + ], + "angle": 0, + "content": "We identified a few venues that have published many significant papers in the field of DL-IDS, such as Usenix Security, S&P, CCS, NDSS, TIFS, TDSC, ICSE, ASE, ESEC/FSE, TSE, OSDI, NSDI, EuroSys, SOSP, ATC, ICML, KDD, WWW, TKDE, ICDE, and SCIS. We broadly divide them into five categories: security, software, system, data, and interdisciplinary. The distribution of these papers with their published years is reported in Figure 2." + }, + { + "type": "title", + "bbox": [ + 0.089, + 0.69, + 0.275, + 0.705 + ], + "angle": 0, + "content": "3 BACKGROUND" + }, + { + "type": "title", + "bbox": [ + 0.089, + 0.711, + 0.398, + 0.727 + ], + "angle": 0, + "content": "3.1 Intrusion Detection System" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.731, + 0.908, + 0.782 + ], + "angle": 0, + "content": "3.1.1 Definition of IDS. IDS have long been a central topic in the cybersecurity community, with research traceable back to the 1990s [181] or even earlier. According to the existing literature [64, 128, 162, 163, 181, 236], IDS can be defined progressively as follows:" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.792, + 0.908, + 0.825 + ], + "angle": 0, + "content": "Definition 3.1. (Intrusion Detection System). Intrusion detection system is a software or hardware system that automates the process of intrusion detection." 
+ }, + { + "type": "text", + "bbox": [ + 0.088, + 0.837, + 0.908, + 0.87 + ], + "angle": 0, + "content": "Definition 3.2. (Intrusion Detection). Intrusion detection is the process of monitoring and analyzing the events occurring in a computer or a network for signs of intrusions." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.881, + 0.908, + 0.915 + ], + "angle": 0, + "content": "Definition 3.3. (Intrusion). Intrusion is the attempt to undermine the confidentiality, integrity, and availability of a computer or a network, or to circumvent its security facilities." + }, + { + "type": "footer", + "bbox": [ + 0.088, + 0.934, + 0.515, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.091, + 0.084, + 0.505, + 0.097 + ], + "angle": 0, + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + }, + { + "type": "page_number", + "bbox": [ + 0.886, + 0.085, + 0.908, + 0.096 + ], + "angle": 0, + "content": "1:5" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.119, + 0.907, + 0.169 + ], + "angle": 0, + "content": "3.1.2 Types of IDS. Generally, IDS can be further categorized into various types based on their data sources [270]. Well-known types include NIDS, HIDS, and Provenance-based IDS (PIDS). Figure 3 depicts IDS types, their data sources, and the location of DL-IDS within those IDS types." + }, + { + "type": "text", + "bbox": [ + 0.109, + 0.178, + 0.875, + 0.194 + ], + "angle": 0, + "content": "Definition 3.4. (NIDS). NIDS are IDS whose data sources are network traffic between hosts." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.203, + 0.912, + 0.288 + ], + "angle": 0, + "content": "NIDS takes network traffic between hosts as its input. It is usually deployed at the edge or key node of the network, allowing it to secure the whole computer system with limited data. 
Benefiting from its global perception of the whole computer system, NIDS does well in detecting large-scale multi-host intrusions such as Distributed Denial-of-Service (DDoS) attacks. However, NIDS performs poorly on intra-host intrusions and has difficulty analyzing intrusions carried in encrypted network traffic." + }, + { + "type": "text", + "bbox": [ + 0.109, + 0.295, + 0.854, + 0.312 + ], + "angle": 0, + "content": "Definition 3.5. (HIDS). HIDS are IDS whose data sources are system events within hosts." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.32, + 0.909, + 0.419 + ], + "angle": 0, + "content": "HIDS, in contrast, uncovers intrusions through system events of individual hosts. Its data sources include file system changes, system calls, process activities, etc. HIDS can conduct comprehensive detection for a host, and is not affected by encrypted data since decryption is also performed in the host. Nevertheless, the deployment and maintenance of HIDS is relatively difficult. HIDS must be adapted to hosts of different operating systems and runtime environments, which simultaneously introduces computation overhead for the hosts." + }, + { + "type": "text", + "bbox": [ + 0.109, + 0.429, + 0.766, + 0.445 + ], + "angle": 0, + "content": "Definition 3.6. (PIDS). PIDS are HIDS whose data sources are data provenance." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.454, + 0.908, + 0.487 + ], + "angle": 0, + "content": "Definition 3.7. (Data Provenance). Data provenance refers to the origin and the processes that an event has undergone from its creation to its current state." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.496, + 0.909, + 0.579 + ], + "angle": 0, + "content": "PIDS is a subtype of HIDS, particularly referring to HIDS that utilizes data provenance as its data source. By analyzing the intact trail of events, PIDS is proven effective in coping with advanced attacks [270]. 
By performing causality analysis on data provenance, PIDS can significantly reduce false alarms. Yet, data provenance is very expensive to obtain, requiring complicated technical tools for monitoring operating systems, network protocols, and applications." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.588, + 0.91, + 0.621 + ], + "angle": 0, + "content": "Definition 3.8. (DL-IDS). DL-IDS are IDS that utilize DL techniques to detect intrusions, whose data sources can be network traffic between hosts, system events within hosts, or their combination." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.63, + 0.912, + 0.713 + ], + "angle": 0, + "content": "Unlike other types of IDS such as NIDS and HIDS, which are categorized by their data sources, DL-IDS is defined by the techniques used in intrusion detection. As shown in Figure 3, the data source of DL-IDS can be network traffic, system events, or both. Taking advantage of the generalizability of DL techniques, DL-IDS can handle zero-day attacks precisely and has thus recently attracted intense interest from the cybersecurity community." + }, + { + "type": "title", + "bbox": [ + 0.089, + 0.726, + 0.33, + 0.74 + ], + "angle": 0, + "content": "3.2 Common Workflow" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.747, + 0.91, + 0.78 + ], + "angle": 0, + "content": "Figure 4 depicts the common workflow of DL-IDS. It usually consists of 7 steps: raw data, collection, storage, parsing, summarization, detection, and investigation, which are explained as follows:" + }, + { + "type": "text", + "bbox": [ + 0.12, + 0.783, + 0.905, + 0.814 + ], + "angle": 0, + "content": "- Raw Data is unprocessed data for uncovering attack details or benign system behaviors. The raw data analyzed by cyber experts commonly include network traffic and audit logs." 
+ }, + { + "type": "text", + "bbox": [ + 0.12, + 0.817, + 0.909, + 0.847 + ], + "angle": 0, + "content": "- Collection indicates data collection tools for different systems, such as cloud and cross-platforms, which gather valuable raw data to describe important system behavior scenarios." + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.85, + 0.907, + 0.88 + ], + "angle": 0, + "content": "- Storage involves storage and search engines to manage large amounts of collected log data. Log data is labeled with indexes for efficient retrieval." + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.883, + 0.905, + 0.914 + ], + "angle": 0, + "content": "- Parsing is the act of analyzing the stored logs and other useful data. It extracts and organizes the underlying information within the data for subsequent processing." + }, + { + "type": "list", + "bbox": [ + 0.12, + 0.783, + 0.909, + 0.914 + ], + "angle": 0, + "content": null + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.934, + 0.908, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.093, + 0.085, + 0.114, + 0.095 + ], + "angle": 0, + "content": "1:6" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.083, + 0.908, + 0.097 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + }, + { + "type": "image", + "bbox": [ + 0.107, + 0.128, + 0.891, + 0.419 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.352, + 0.441, + 0.645, + 0.457 + ], + "angle": 0, + "content": "Fig. 4. Common workflow of DL-IDS." + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.481, + 0.907, + 0.514 + ], + "angle": 0, + "content": "- Summarization refers to the operation of summarizing large volumes of parsed data based on its semantics. This reduces storage costs while preserving critical events." 
+ }, + { + "type": "text", + "bbox": [ + 0.121, + 0.514, + 0.907, + 0.547 + ], + "angle": 0, + "content": "- Detection is the process of using detection tools such as models and algorithms to detect anomalies in analyzed data to determine whether the data contains intrusions." + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.547, + 0.907, + 0.581 + ], + "angle": 0, + "content": "- Investigation is the further process of Detection. It reconstructs the entire attack scenarios from the detected malicious data by analyzing the causal relationship between them." + }, + { + "type": "list", + "bbox": [ + 0.121, + 0.481, + 0.907, + 0.581 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.583, + 0.91, + 0.634 + ], + "angle": 0, + "content": "Note that DL-IDS can also be performed in other step orders by skipping some of the steps. For example, log data can be first parsed before storage [135]. Attack investigation can be directly conducted without detection of intrusions [9]. This survey is organized by the common workflow." + }, + { + "type": "title", + "bbox": [ + 0.089, + 0.646, + 0.336, + 0.661 + ], + "angle": 0, + "content": "4 DATA MANAGEMENT" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.666, + 0.91, + 0.701 + ], + "angle": 0, + "content": "This section elaborates on the data management stage of DL-IDS, including data collection (Section 4.1), log storage (Section 4.2), and log parsing (Section 4.3)." + }, + { + "type": "title", + "bbox": [ + 0.089, + 0.713, + 0.288, + 0.727 + ], + "angle": 0, + "content": "4.1 Data Collection" + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.732, + 0.909, + 0.832 + ], + "angle": 0, + "content": "The first step of DL-IDS is to collect useful data from raw data. 
Raw data indicates records that document events, activities, and operations that occur within a system, application, or network (a.k.a., logs), represented by audit logs or application logs within hosts, or network traffic between hosts. By collecting useful logs, DL-IDS is allowed to monitor the health condition and operational status of information systems [141, 255]. Common attributes of logs include timestamp, event type, subject, object, description, etc." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.833, + 0.91, + 0.916 + ], + "angle": 0, + "content": "On different platforms, logs possess different formats and organizational structures [21, 127, 255, 270]. To counter this, researchers have created various log collection tools specialized for various systems. For example, in Windows systems, Event Viewer is employed to manage system logs. Yet in Linux systems, log files are usually saved in the /var/log/ directory. The classification of data collection tools is shown in Table 1, including Windows, Linux, Cloud, and Cross platforms." + }, + { + "type": "footer", + "bbox": [ + 0.088, + 0.934, + 0.515, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.091, + 0.083, + 0.505, + 0.097 + ], + "angle": 0, + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + }, + { + "type": "page_number", + "bbox": [ + 0.885, + 0.084, + 0.908, + 0.095 + ], + "angle": 0, + "content": "1:7" + }, + { + "type": "table_caption", + "bbox": [ + 0.301, + 0.117, + 0.695, + 0.133 + ], + "angle": 0, + "content": "Table 1. Log collection tools on different platforms." + }, + { + "type": "table", + "bbox": [ + 0.139, + 0.147, + 0.86, + 0.389 + ], + "angle": 0, + "content": "
Platform TypeToolDescription
Windows platformETW [153]Providing developers comprehensive event tracing ability
Panorama [245]Hardware-level and OS-aware dynamic taint tracking
Linux platformauditd [68]Native tools supported by the Linux kernel
sysdig [106]Focusing on runtime monitoring and fault troubleshooting
CamFlow [170]Self-contained, easily maintainable implementation
Tracee [210]Exposing system information as events based on eBPF
DataTracker [200]Monitoring unmodified binaries without their source codes
Inspector [206]Parallel provenance library that is POSIX-compliant
AutoLog [94]Analyzing programs so no need to run them
eAudit [193]Fast, scalable, and easily deployable data collection tool
Cloud platformK8S tools [27, 87]Adapting to cloud scenarios to meet enterprise needs
saBPF [129]An extension tool of eBPF for containers in cloud computing
ISDC [158]Eliminating overheads on in-network resources
Cross platformDTrace [66]Real-time tracing framework that supports many platforms
SPADE [61]Novel provenance kernel for cross-platform logging
" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.415, + 0.906, + 0.513 + ], + "angle": 0, + "content": "4.1.1 Windows Platform Tools. Event Tracing for Windows (ETW) [153] is a powerful event tracing mechanism provided by Microsoft. It consists of three components: providers, controllers, and consumers. ETW instruments applications to provide kernel event logging and allows developers to start and stop event tracing sessions on the fly. Panorama [245] exploits hardware-level and OS-aware dynamic taint tracking to collect logs. Moreover, it develops a series of automated tests to detect malware based on several kinds of anomalous behaviors." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.524, + 0.909, + 0.856 + ], + "angle": 0, + "content": "4.1.2 Linux Platform Tools. auditd [68] is a native collection tool supported by the Linux kernel, which is responsible for writing audit logs to disk and monitoring a variety of auditable events such as system calls, file accesses, and modifications. sysdig [106] relies on a kernel module to achieve monitoring and data collection of the system. sysdig focuses on system runtime monitoring and fault troubleshooting, and is also widely used in containers and cloud-native environments. CamFlow [170] designs a self-contained, easily maintainable implementation of whole-system provenance based on Linux Security Module, NetFilter, and other kernel facilities. Furthermore, it provides a mechanism to adapt the captured data provenance to applications and can be integrated across distributed systems. Tracee [210] takes advantage of the extended Berkeley Packet Filter (eBPF) framework to observe systems efficiently. It uses eBPF to tap into systems and expose that information as events. DataTracker [200] is an open-source data provenance collection tool using dynamic instrumentation. It is able to identify data provenance relations of unmodified binaries without access to or knowledge of the source codes. 
Inspector [206] is a Portable Operating System Interface (POSIX)-compliant data provenance library for shared-memory multi-threaded applications. It is implemented as a parallel provenance algorithm on a concurrent provenance graph. AutoLog [94] generates runtime log sequences by analyzing source codes and does not need to execute any programs. It can efficiently produce log datasets (e.g., over 10,000 messages/min on Java projects) and has the flexibility to adapt to several scenarios. eAudit [193] is a scalable and easily deployable data collection tool. eAudit relies on the eBPF framework built into recent Linux versions, making it work out of the box on most Linux distributions." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.865, + 0.912, + 0.916 + ], + "angle": 0, + "content": "4.1.3 Cloud Platform Tools. Although some collection tools in Windows and Linux platforms such as auditd [68], sysdig [106], and Tracee [210] can be applied in cloud computing environments, cloud-native scenarios introduce different challenges compared with Windows or Linux platforms. First," + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.934, + 0.908, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.093, + 0.085, + 0.114, + 0.095 + ], + "angle": 0, + "content": "1:8" + }, + { + "type": "header", + "bbox": [ + 0.238, + 0.084, + 0.908, + 0.097 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.118, + 0.909, + 0.334 + ], + "angle": 0, + "content": "there are many different types of components such as containers, microservices, and Kubernetes (K8S) clusters in cloud platforms, each of which generates its own logs with varying formats and contents. 
Additionally, components are basically characterized by dynamic expansion and contraction, making it hard to capture complete log data. To address them, Chen et al. [27] design a cloud log collection architecture on the basis of K8S, which is a central platform based on cloud-native technology. Josef et al. [87] propose a log collection and analysis tool operated as Software as a Service (SaaS) in the cloud environment in K8S technology, aiming to provide comprehensive logs across all microservices. saBPF [129] is an extension tool of eBPF, aiming to deploy fully-configurable, high-fidelity, system-level audit mechanisms at the granularity of containers. saBPF is further developed with proof-of-concept IDS and access control mechanism to demonstrate its practicability. ISDC [158] is designed to eliminate the bottleneck between network infrastructure (where data is generated) and security application servers (where data is consumed), which prioritizes specific flows to effectively optimize resource consumption." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.343, + 0.909, + 0.476 + ], + "angle": 0, + "content": "4.1.4 Cross-platform Tools. To effectively detect intrusions, an intuitive idea is to incorporate log data from various platforms to obtain a global view of the running system. DTrace [66] is a real-time dynamic tracing framework for troubleshooting kernel and application problems on production systems. It supports many platforms, including Linux, Windows, Solaris, macOS, FreeBSD, NetBSD, etc. Support for Provenance Auditing in Distributed Environments (SPADE) [61] develops a novel provenance kernel that mediates between the producers and consumers of provenance information, and handles the persistent storage of records. It supports heterogeneous aggregating for system-level data provenance for data analysis across multiple platforms." 
+ }, + { + "type": "title", + "bbox": [ + 0.092, + 0.489, + 0.252, + 0.505 + ], + "angle": 0, + "content": "4.2 Log Storage" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.508, + 0.909, + 0.543 + ], + "angle": 0, + "content": "The subsequent step of log collection is to store these logs [11, 40]. We will introduce two essential components for data storage: log storage systems and compression algorithms for these systems." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.55, + 0.909, + 0.783 + ], + "angle": 0, + "content": "4.2.1 Log Storage Systems. The two most commonly used log storage systems are ELK [5] and Loki [15]. ELK is a powerful log management solution consisting of three open-source software components: Elasticsearch [48], Logstash [47], and Kibana [49]. Elasticsearch [48] is the leading distributed, RESTful search and analytics data engine designed with speed and scalability. Logstash [47] is a server-side data preprocessing pipeline to collect and integrate data from multiple sources. Kibana [49] is a data analytics and visualization platform at both speed and scale. ELK is powerful enough to be applied in enterprise scenarios, however, its performance comes at a price. ELK sacrifices ease of configuration and installation, and may simultaneously introduce severe runtime overhead for its hosts. In contrast, Loki [15] is a lightweight logging system with low resource overhead developed by Grafana Labs. It is designed with simple operations and efficient storage. Instead of indexing everything of data like ELK does, Loki mainly creates indices grounded in log labels. Moreover, Loki is well suited for open-source monitoring and visualization tools such as Prometheus [174] and Grafana [112]. Integrating these two tools enables Loki to construct a complete monitoring and log analysis platform for information systems." 
+ }, + { + "type": "text", + "bbox": [ + 0.092, + 0.791, + 0.909, + 0.858 + ], + "angle": 0, + "content": "4.2.2 Log Compression Algorithms. Logs are generated quickly and require significant memory usage. For example, it is measured that a browser can produce about 10 GB of log data each day [40]. Such oversize data should be compressed before storage. Log compression algorithms can be categorized into two types: general-purpose algorithms and those specifically adapted to log data." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.866, + 0.909, + 0.916 + ], + "angle": 0, + "content": "General Compression Algorithms. General compression algorithms refer to algorithms to reduce the size of data (e.g., log data) by handling token-level or byte-level duplicates in the data. General compression algorithms can be classified into three categories based on their principles [242]:" + }, + { + "type": "footer", + "bbox": [ + 0.092, + 0.934, + 0.514, + 0.947 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.091, + 0.083, + 0.505, + 0.097 + ], + "angle": 0, + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + }, + { + "type": "page_number", + "bbox": [ + 0.885, + 0.084, + 0.908, + 0.096 + ], + "angle": 0, + "content": "1:9" + }, + { + "type": "table_caption", + "bbox": [ + 0.215, + 0.117, + 0.782, + 0.133 + ], + "angle": 0, + "content": "Table 2. Well-acknowledged general compression algorithms for log data." + }, + { + "type": "table", + "bbox": [ + 0.179, + 0.147, + 0.821, + 0.222 + ], + "angle": 0, + "content": "
TypeWell-acknowledged compression algorithm
Dictionary-basedLZ77 in gzip [55], LZMA in 7zip_lzma [171], and LZSS in quickLZ [177]
Sorting-basedBWT in bzip2 [194] and ST in szip [190]
Statistical-basedPPMD in 7zip_ppmd and DMC in ocamyd [191]
" + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.245, + 0.907, + 0.278 + ], + "angle": 0, + "content": "- Dictionary-based Compression: It records repeated data as keys and replaces these data with their corresponding keys." + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.279, + 0.908, + 0.295 + ], + "angle": 0, + "content": "- Sorting-based Compression: It sorts data to enable strategies that require ordering features." + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.295, + 0.908, + 0.329 + ], + "angle": 0, + "content": "- Statistical-based Compression: It exploits statistical techniques to learn and predict the possible next token for existing tokens. The data is thus compressed as a statistical model." + }, + { + "type": "list", + "bbox": [ + 0.121, + 0.245, + 0.908, + 0.329 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.332, + 0.911, + 0.416 + ], + "angle": 0, + "content": "Table 2 presents representative algorithms of the above three types. Due to the indeterminacy of statistical techniques, statistical-based compression algorithms may introduce losses in compression. Yet the other two types of algorithms are generally lossless. By validating 9 log files and 2 natural language files, a study [242] shows that some general compression algorithms can achieve high compression ratios for log data and log data is even easier to compress than natural language data." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.424, + 0.911, + 0.74 + ], + "angle": 0, + "content": "Tailored Compression Algorithms. Different from natural language data, log data usually has specific structures and formal expressions that help further compression. Yao et al. [243] propose LogBlock, which obtains small log blocks before compression and then uses a generic compressor to compress logs. Liu et al. 
[135] propose Logzip, which employs clustering algorithms to iteratively extract templates from raw logs and then obtain coherent intermediate representations for compressing logs. Rodrigues et al. [186] propose the lossless compression tool CLP, aiming to quickly retrieve log data while meeting compression requirements. CLP proposes to combine domain-specific compression and search with a generic lightweight compression algorithm. Li et al. [123] conduct empirical research on log data and propose LogShrink to overcome their observed limitations by leveraging the commonality and variability of log data. LogBlock [243] is designed to help existing jobs perform better. It reduces duplicate logs by preprocessing log headers and rearranging log contents, thereby improving the compression ratio of log files. LogReduceer [247] is a framework that combines log hotspot identification and online dynamic log filtering. Its non-intrusive design significantly reduces log storage and runtime overhead. \\(\\mu\\)Slope [217] is a compression and search method for semi-structured log data. It achieves efficient storage and query performance through data segmentation, pattern extraction, and index-free design. Denum [249] significantly improves log compression rates by optimizing the compression of digital tokens in logs. It is an efficient log compression tool suitable for scenarios where you need to save storage space or transmission bandwidth." + }, + { + "type": "title", + "bbox": [ + 0.089, + 0.753, + 0.254, + 0.77 + ], + "angle": 0, + "content": "4.3 Log Parsing" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.773, + 0.911, + 0.875 + ], + "angle": 0, + "content": "Log data often originates from multiple different devices such as terminals, sensors, and network devices. To analyze it, log parsers are employed to format them into structured and unified ones. Log parsing is usually executed by data classification and template extraction. 
Data classification is to classify log data into several groups. Each group constitutes a template for extracting features from log data and constructing the structured logs. As shown in Figure 5, the existing log parsers can be taxonomized into 3 categories: clustering-based, pattern-based, and heuristic-based parsers." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.882, + 0.909, + 0.917 + ], + "angle": 0, + "content": "4.3.1 Clustering-based Parsing. Clustering-based parsers classify data using clustering algorithms for log parsing. Xiao et al. [226] propose LPV, which employs a hierarchical clustering algorithm" + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.934, + 0.908, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.092, + 0.084, + 0.121, + 0.096 + ], + "angle": 0, + "content": "1:10" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.083, + 0.908, + 0.097 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + }, + { + "type": "image", + "bbox": [ + 0.13, + 0.118, + 0.875, + 0.222 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.366, + 0.24, + 0.632, + 0.256 + ], + "angle": 0, + "content": "Fig. 5. Taxonomy of data parsing." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.281, + 0.91, + 0.4 + ], + "angle": 0, + "content": "to incrementally group logs based on Euclidean distance. Hamooni et al. [74] present a rapid log pattern recognition approach named LogMine. It is implemented in the map-reduce framework for distributed platforms to process millions of log messages in seconds. LogCluster [130] reduces the number of logs that need to be manually checked and improves the accuracy of problem identification through log clustering and the use of knowledge bases. 
METING [32] provides a robust and efficient log parsing method through frequent n-gram mining and flexible log grouping strategy, which can effectively process various types of log data." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.407, + 0.911, + 0.592 + ], + "angle": 0, + "content": "4.3.2 Frequency-based Parsing. Frequency-based parsers discover patterns that exceed the frequency threshold and employ the mined patterns to parse logs. Sedki et al. [192] propose the log parsing tool ULP, which combines string matching and local frequency analysis to efficiently parse large log files. Dai et al. [35] propose Logram, which utilizes an n-gram dictionary for log parsing. For n-grams with a frequency below the threshold, Logram recursively converts to (n-1)-grams until a list of uncommon 2-grams is obtained. To mitigate the parameter sensitivity issue in log parsers, Dai et al. [36] further proposed an entropy-based log parser PILAR, which balances parsing accuracy and efficiency. Xu et al. [229] propose a hybrid log parsing model called Hue, which performs parsing through user-adaptive methods. Prefix-Graph [30] is an efficient, adaptive, and universal log parsing method that can stably extract log templates without relying on domain knowledge and manual parameter tuning." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.599, + 0.91, + 0.834 + ], + "angle": 0, + "content": "4.3.3 Heuristic-based Parsing. Heuristic-based parsers rely on empirical knowledge to classify log data. He et al. [82] propose the online log parsing method Drain, which employs a depth-fixed parsing tree to group the original logs and encodes them using specially designed parsing rules. Le et al. [114] propose to use a hint-based few-sample learning algorithm, LogPPT, to capture log template patterns. Utilizing new prompt tuning methods and an adaptive random sampling algorithm, LogPPT performs well on multiple public datasets. Liu et al. 
[137] propose the UniParser parser to address the issue of difficult processing of heterogeneous logs, using the Token Encoder and Context Encoder modules to learn log context features. Spell [44] is an efficient streaming log parsing method that can dynamically extract log patterns in online processing and significantly improve processing efficiency through pre-filtering steps. Logan [3] achieves efficient and scalable log parsing through distributed processing, LCS matching, dynamic matching tolerance, and periodic merging. USTEP [214] is an online log parsing method based on an evolutionary tree structure that can discover and encode new parsing rules. It achieves constant parsing time and can efficiently parse raw log messages in a streaming manner." + }, + { + "type": "title", + "bbox": [ + 0.089, + 0.845, + 0.363, + 0.86 + ], + "angle": 0, + "content": "5 INTRUSION DETECTION" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.866, + 0.91, + 0.916 + ], + "angle": 0, + "content": "The intrusion detection stage uncovers intrusions relying on the semantic-level information. This section classifies and summarizes the mainstream graph summarization (Section 5.1), attack detection (Section 5.2), and attack investigation (Section 5.3)." + }, + { + "type": "footer", + "bbox": [ + 0.088, + 0.934, + 0.516, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.091, + 0.083, + 0.505, + 0.097 + ], + "angle": 0, + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + }, + { + "type": "page_number", + "bbox": [ + 0.878, + 0.085, + 0.906, + 0.095 + ], + "angle": 0, + "content": "1:11" + }, + { + "type": "table_caption", + "bbox": [ + 0.285, + 0.117, + 0.712, + 0.133 + ], + "angle": 0, + "content": "Table 3. Overview of graph summarization approaches." 
+ }, + { + "type": "table", + "bbox": [ + 0.103, + 0.147, + 0.896, + 0.375 + ], + "angle": 0, + "content": "
Mode | Approach | Release | Baseline | Requirement
Offline | ProvCompress [228] | 2011 | No Summarization | None
Offline | BEEP [115] | 2013 | No Summarization | Instrumentation
Offline | LogGC [116] | 2013 | BEEP + No Summarization | Instrumentation
Offline | CPR + PCAR [234] | 2016 | No Summarization | None
Offline | FD + SD [89] | 2018 | CPR + PCAR | None
Offline | LogApprox [152] | 2020 | GC + CPR + DPR | None
Offline | TeRed [122] | 2025 | LogGC + CPR + PCAR + F-DPR + NodeMerge | None
Online | ProTracer [143] | 2016 | BEEP + No Summarization | Instrumentation
Online | NodeMerge [205] | 2018 | No Summarization | None
Online | Winnower [77] | 2018 | No Summarization | None
Online | GS + SS [267] | 2021 | FD + SD | None
Online | SEAL [53] | 2021 | FD | None
Online | FAuST [97] | 2022 | CPR + DPR | None
Online | AudiTrim [202] | 2024 | CPR + GS + F-DPR | None
" + }, + { + "type": "title", + "bbox": [ + 0.089, + 0.392, + 0.35, + 0.408 + ], + "angle": 0, + "content": "5.1 Graph Summarization" + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.412, + 0.91, + 0.561 + ], + "angle": 0, + "content": "It is illustrated that stealthy malware will inevitably interact with the underlying OS and be captured by provenance monitoring systems [216], which is the reason why PIDS (a form of DL-IDS) has worked and flourished recently. Log data generated from provenance monitoring systems is referred to as data provenance as mentioned. Offering advantages in high precision, data provenance sacrifices memory performance to record all trails of events from their creations to their current states, even some of which are trivial. Unlike network traffic and application logs, data provenance is fine-grained, detailed, and rich in semantics. As a result, the token-level or byte-level log storage systems (Section 4.2.1) and log compression algorithms (Section 4.2.2) are insufficient to handle the memory efficiency of data provenance due to the absence of semantic-level information." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.562, + 0.91, + 0.629 + ], + "angle": 0, + "content": "To this end, graph summarization is investigated to further reduce the size of log data semantically. In graph summarization, data provenance is transformed into a provenance graph, of which the causal relations are utilized to build the semantic understanding of system activities. Referring to the definition of data provenance (Definition 3.7), provenance graph is defined as follows:" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.635, + 0.909, + 0.684 + ], + "angle": 0, + "content": "Definition 5.1. (Provenance Graph). Provenance graph is a representation of a collection of data provenance with causal relations. 
It is a directed acyclic graph \\( G = \\langle V, E \\rangle \\) where nodes \\( V \\) are system entities and edges \\( E \\) are system events." + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.692, + 0.911, + 0.844 + ], + "angle": 0, + "content": "Provenance graphs allow graph summarization approaches to reduce the size of log data by confidently removing irrelevant events, aggregating similar events, gathering similar execution entities, etc. This categorizes them as a type of lossy reduction, yet the aforementioned log storage and compression are usually lossless (except for statistical-based log compression). We note that some surveys (e.g., [96, 270]) may interchangeably use graph summarization and log compression to identify the approaches that reduce the size of log data. In this work, we explicitly distinguish them and refer to the lossless reduction as compression and the opposite one as summarization. Table 3 presents the overview of graph summarization approaches. We classify them into two categories: offline graph summarization and online graph summarization." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.849, + 0.91, + 0.918 + ], + "angle": 0, + "content": "5.1.1 Offline Graph Summarization. Offline graph summarization requires historical log data to provide global knowledge, which extracts log data from persistent storage, summarizes the data, and pushes back the summarized data to the persistent storage. In 2011, Xie et al. [228] take inspiration from web graphs to summarize provenance graphs. They argue that provenance" + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.934, + 0.908, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.093, + 0.085, + 0.121, + 0.095 + ], + "angle": 0, + "content": "1:12" + }, + { + "type": "header", + "bbox": [ + 0.238, + 0.084, + 0.906, + 0.097 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.118, + 0.907, + 0.367 + ], + "angle": 0, + "content": "graphs have similar organizational structure and characteristics to web graphs, such as locality, similarity, and consecutiveness. BEEP [115] is developed based on the fact that a long-running execution can be partitioned into individual units. BEEP reverse engineers application binaries and instructions to perform selective logging for unit boundaries and unit dependencies. LogGC [116] is a summarized audit log system that can be invoked at any time during the system execution. Xu et al. [234] propose an aggregation algorithm PCR that preserves event dependencies during log data reduction. They further propose an algorithm named PCAR that utilizes domain knowledge to conduct graph summarization. Hossain et al. [89] propose two dependency-preserving graph summarization approaches, FD and SD. FD is allowed to keep backward and forward forensic analysis results. SD preserves the results of common forensic analysis, which runs backward to find the entry points of intrusions and then runs forward from these points to unveil their impacts. LogApprox [152] aims to summarize the most space-intensive events found in logs, namely file I/O activity, which can account for up to \\(90\\%\\) of the log content. TeRed [122] employs unit tests to learn the system's normal behavior patterns for reducing provenance graphs, allowing it not to impact attack detection and investigation." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.393, + 0.907, + 0.725 + ], + "angle": 0, + "content": "5.1.2 Online Graph Summarization. 
Online graph summarization performs real-time summarization for continually coming provenance graphs, rather than dealing with a static provenance graph. ProTracer [143] alternates between system event logging and unit-level taint propagation. It has a lightweight kernel module and user space daemon for concurrent, out-of-order event processing. NodeMerge [205] is a template-based graph summarization system for online event storage. It can directly work on the system-dependent provenance streams and compress data provenance via read-only file access patterns. Winnower [77] is an extensible audit-based cluster monitoring system. For tasks replicated across nodes in distributed applications, it can define a model over audit logs to concisely summarize the behaviors of multiple nodes, thus eliminating the necessity of transmitting redundant audit records to the central monitoring node. The approach proposed by Zhu et al. [267] includes two real-time graph summarization strategies. The first strategy maintains global semantics, which identifies and removes redundant events that do not affect global dependencies. The second strategy is based on suspicious semantics. SEAL [53] is a novel graph summarization approach for causal analysis. Based on information-theoretic observations of system event data, it achieves lossless compression and supports real-time historical event retrieval. FAuST [97] is a logging daemon that performs transparent and modular graph summarization directly on system endpoints. FAuST consists of modular parsers that parse different audit log formats to create a unified in-memory provenance graph representation. AudiTrim [202] is an efficient graph summarization approach that reduces log sizes without impacting user experiences, which allows adaptable deployment on different operating systems." 
+ }, + { + "type": "title", + "bbox": [ + 0.092, + 0.754, + 0.296, + 0.769 + ], + "angle": 0, + "content": "5.2 Attack Detection" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.775, + 0.907, + 0.841 + ], + "angle": 0, + "content": "Attack detection is located at the central position of DL-IDS. The objective of attack detection is to accurately identify malicious system events in log data while minimizing false alarms of normal system behaviors. Based on the types of log data, we categorize the attack detection approaches into audit log-based, application log-based, network traffic-based, and hybrid log-based detectors." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.841, + 0.907, + 0.907 + ], + "angle": 0, + "content": "The overview and taxonomy of attack detection approaches are presented in Table 4. We note that recent years have also published many other academic papers for attack detection [25, 46, 78, 119, 156, 218, 224, 227, 248]. Yet these papers are slightly related to DL-IDS, which are thus excluded in our survey for conciseness." + }, + { + "type": "footer", + "bbox": [ + 0.092, + 0.935, + 0.512, + 0.947 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.091, + 0.083, + 0.504, + 0.097 + ], + "angle": 0, + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + }, + { + "type": "page_number", + "bbox": [ + 0.878, + 0.085, + 0.906, + 0.095 + ], + "angle": 0, + "content": "1:13" + }, + { + "type": "table_caption", + "bbox": [ + 0.248, + 0.117, + 0.748, + 0.133 + ], + "angle": 0, + "content": "Table 4. Overview and taxonomy of attack detection approaches." + }, + { + "type": "table", + "bbox": [ + 0.095, + 0.146, + 0.903, + 0.888 + ], + "angle": 0, + "content": "
Data Type | Taxonomy | Approach | Release Time | Base Model | Detection Style | Detection Granularity
Audit Log | Traditional Learning | StreamSpot [145] | 2018 | K-Medoids | Online | Subgraph
Audit Log | Traditional Learning | Unicorn [76] | 2020 | K-Medoids | Online | Node, Subgraph
Audit Log | Traditional Learning | DistDet [42] | 2023 | HST | Online | Subgraph
Audit Log | Traditional Learning | Velox [18] | 2025 | FCN | Online | Node
Audit Log | Graph Neural Network | ShadeWatcher [250] | 2022 | TransR | Offline | Node
Audit Log | Graph Neural Network | threaTrace [219] | 2022 | GraphSAGE | Online | Node
Audit Log | Graph Neural Network | ProGrapher [237] | 2023 | graph2vec | Online | Subgraph
Audit Log | Graph Neural Network | MAGIC [99] | 2024 | GAT | Online | Node, Subgraph
Audit Log | Graph Neural Network | Flash [182] | 2024 | GraphSAGE | Online | Node
Audit Log | Graph Neural Network | R-caid [65] | 2024 | GNN | Offline | Node
Audit Log | Graph Neural Network | Argus [230] | 2024 | MPNN, GRU | - | Node
Audit Log | Graph Neural Network | TAPAS [252] | 2025 | LSTM-GRU | Online | Task
Application Log | Traditional Learning | Wei et al. [231] | 2009 | PCA, TF-IDF | - | Log Entry
Application Log | Traditional Learning | Bodik et al. [19] | 2010 | Logistic Regression | Online | Log Entry
Application Log | Traditional Learning | AMOD [43] | 2018 | SVM HYBRID | Online | Log Entry
Application Log | Sequence Neural Network | DeepLog [45] | 2017 | LSTM | Online | Log Entry
Application Log | Sequence Neural Network | LogRobust [257] | 2019 | Attention LSTM | - | Log Entry
Application Log | Sequence Neural Network | LogAnomaly [151] | 2019 | template2vec, LSTM | Online | Log Entry
Application Log | Sequence Neural Network | LogC [246] | 2020 | LSTM | Online | Log Entry
Application Log | Sequence Neural Network | NeuralLog [113] | 2021 | BERT | - | Log Entry
Application Log | Sequence Neural Network | PLELog [238] | 2021 | Attention GRU | Online | Log Entry
Application Log | Sequence Neural Network | SpikeLog [175] | 2023 | DSNN | - | Log Entry
Application Log | Sequence Neural Network | LogCraft [254] | 2024 | Meta Learning | - | Log Entry
Application Log | Sequence Neural Network | Tweezers [33] | 2024 | GATv2, BERTweet | Online | Log Entry
Application Log | Sequence Neural Network | LogSer [23] | 2024 | BERT | Online | Log Entry
Application Log | Sequence Neural Network | LogDLR [265] | 2025 | Transformer, SBERT | Online | Log Entry
Traffic Log | Traditional Learning | NetPro [121] | 2017 | Merkle Hash Tree | Online | Route
Traffic Log | Traditional Learning | CATH [72] | 2019 | Cusp Model | Online | Flow
Traffic Log | Traditional Learning | Whisper [56] | 2021 | K-Means | - | Host
Traffic Log | Traditional Learning | SigML++ [211] | 2023 | ANN | - | Encrypted Log
Traffic Log | Traditional Learning | OADSD [253] | 2023 | Isolation Forest | Online | Packet
Traffic Log | Traditional Learning | LtRFT [204] | 2023 | LambdaMART | Offline | Packet
Traffic Log | Traditional Learning | AGC [225] | 2025 | Clustering | - | Packet
Traffic Log | Graph and Sequence Neural Network | Kitsune [159] | 2018 | AutoEncoder | Online | Packet
Traffic Log | Graph and Sequence Neural Network | MT-FlowFormer [260] | 2022 | Transformer | - | Flow
Traffic Log | Graph and Sequence Neural Network | I²RNN [199] | 2022 | I²RNN | - | Packet
Traffic Log | Graph and Sequence Neural Network | ERNN [262] | 2022 | ERNN | - | Flow
Traffic Log | Graph and Sequence Neural Network | Euler [108] | 2023 | GNN, RNN | - | Flow
Traffic Log | Graph and Sequence Neural Network | pVoxel [58] | 2023 | - | - | Packet, Flow
Traffic Log | Graph and Sequence Neural Network | NetVigil [91] | 2024 | E-GraphSage | - | Flow
Traffic Log | Graph and Sequence Neural Network | Exosphere [57] | 2024 | CNN | - | Packet
Traffic Log | Graph and Sequence Neural Network | DFNet [263] | 2024 | DFNet | - | Packet
Traffic Log | Graph and Sequence Neural Network | RFH-HELAD [264] | 2024 | RPGAN, Deep kNN | - | Packet
Traffic Log | Graph and Sequence Neural Network | ReTrial [259] | 2024 | Bayesian Inference | Online | Flow
Traffic Log | Graph and Sequence Neural Network | HEN [221] | 2024 | AE-LSTM | - | Packet, Flow
Traffic Log | Graph and Sequence Neural Network | TCG-IDS [222] | 2025 | TGN | Online | Flow
Traffic Log | Graph and Sequence Neural Network | A-NIDS [251] | 2025 | Stacked CTGAN | Online | Flow
Traffic Log | Graph and Sequence Neural Network | GTAE-IDS [62] | 2025 | Graph Transformer | Online | Packet, Flow
Hybrid | Hybrid | OWAD [75] | 2024 | Autoencoder | Online | Hybrid
Hybrid | Hybrid | FG-CIBGC [165] | 2025 | DisenGCN, ICL | - | Hybrid
" + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.934, + 0.906, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.092, + 0.085, + 0.121, + 0.095 + ], + "angle": 0, + "content": "1:14" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.083, + 0.908, + 0.097 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.119, + 0.911, + 0.186 + ], + "angle": 0, + "content": "5.2.1 Audit Log-based Detectors. Audit logs are collected from hosts and thus detectors based on them are basically referred to as HIDS. Audit logs provide fine-grained information through provenance graphs to depict system behaviors. Depending on the learning techniques, audit log-based detectors can be further classified as traditional learning and graph neural network." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.193, + 0.911, + 0.344 + ], + "angle": 0, + "content": "Traditional Learning. Traditional learning-based detectors refer to those that utilize naive machine learning techniques. StreamSpot [145] is a clustering-based anomaly detection that tackles challenges in heterogeneity and streaming nature. Unicorn [76] is a real-time intrusion detector that efficiently constructs a streaming histogram to represent the history of system executions. The counting results within the histogram are updated immediately if new edges (or events) occur. DistDet [42] is a distributed detection system that builds host models in the client side, filters false alarms based on their semantics, and derives global models to complement the host models. Velox [18] derives from Orthrus and replaces the complex TGN-based encoder with a simple fully-connected network (FCN), leading to a lightweight and efficient neural network." 
+ }, + { + "type": "text", + "bbox": [ + 0.088, + 0.351, + 0.91, + 0.768 + ], + "angle": 0, + "content": "Graph Neural Network. GNN is demonstrated to do well in processing provenance graphs [99, 182, 219, 237, 250]. ProGrapher [237] extracts temporal-ordered provenance graph snapshots from the ingested logs, and applies whole graph embedding and sequence-based learning to capture rich structural properties of them. The key GNN technique leveraged by ProGrapher is graph2vec. ShadeWatcher [250] is a recommendation-guided intrusion detector using provenance graphs. It borrows the recommendation concepts of user-item interactions into security concepts of system entity interactions and analyzes cyber threats in an automated and adaptive manner. threaTrace [219] emerges as an online approach dedicated to detecting host-based threats at the node level. Its GNN model is a tailored GraphSAGE [73] for learning rich contextual information in provenance graphs. MAGIC [99] leverages Graph Attention Network (GAT) [213] as its graph representation module. MAGIC employs masked graph representation learning to incorporate the capability of pretraining. It can adapt to concept drift with minimal computational overhead, making it applicable to real-world online APT detection. Flash [182] is a comprehensive and scalable approach on data provenance graphs to overcome the limitations in accuracy, practicality, and scalability. Flash incorporates a novel adaptation of a GNN-based contextual encoder to encode both local and global graph structures into node embeddings efficiently. R-caid [65] first incorporates root cause analysis into PIDS. Before training GNNs, R-caid links nodes to their root causes to build a new graph, intending to prevent it from mimicry and evasion attacks. Argus [230] finds the performance of the prior IDS is questionable on large scale. 
It thus devises a form of discrete temporal graph and uses encoder-decoder unsupervised learning to detect different types of attacks. TAPAS [252] leverages a stacked LSTM-GRU model and a task-guided segmentation algorithm to reduce the spatiotemporal dimensions of APT detection, achieving efficient, low-cost, and accurate detection. In addition to the aforementioned detectors, recent researchers have developed numerous useful tools for better understanding audit logs, such as data visualization analysis tool [133] and counterfactual-driven attack explanation generator [223]." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.775, + 0.91, + 0.826 + ], + "angle": 0, + "content": "5.2.2 Application Log-based Detectors. Application logs are generated from the installed binaries. Generally, application logs are in the form of natural language text, namely sequence data. It is thus common to introduce sequence-based DL techniques into application log-based DL-IDS." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.833, + 0.911, + 0.918 + ], + "angle": 0, + "content": "Traditional Learning. For traditional learning, Wei et al. [231] propose a general methodology to mine rich semantic information in console logs to detect large-scale system problems. Bodik et al. [19] leverage a logistic regression model on a new and efficient representation of a datacenter's state called fingerprint to detect previously seen performance crises in that datacenter. AMOD [43] uses the SVM HYBRID strategy to filter query annotations from web request logs and then" + }, + { + "type": "footer", + "bbox": [ + 0.088, + 0.934, + 0.516, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ], + [ + { + "type": "header", + "bbox": [ + 0.091, + 0.083, + 0.505, + 0.097 + ], + "angle": 0, + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + }, + { + "type": "page_number", + "bbox": [ + 0.878, + 0.084, + 0.908, + 0.096 + ], + "angle": 0, + "content": "1:15" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.119, + 0.907, + 0.152 + ], + "angle": 0, + "content": "update the stacked generalization detection model to efficiently detect web code injection attacks and obtain malicious queries to update the web application firewall (WAF) library." + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.162, + 0.911, + 0.612 + ], + "angle": 0, + "content": "Sequence Neural Network. Due to the similarity between application logs and natural language texts, sequence neural networks such as Recurrent Neural Network [86] and Transformer [39, 212] are widely employed. DeepLog [45] employs LSTM to model system logs as natural language sequences. It is able to automatically learn benign log patterns and detect anomalies when there is a deviation between log patterns and the trained model. LogRobust [257] finds previous methods do not work well under the close-world assumption and utilizes an attention-based LSTM model to handle unstable log events and sequences. LogAnomaly [151] identifies previous studies tend to cause false alarms by using indexes rather than semantics of log templates. Empowered by a novel, simple yet effective method termed template2vec, LogAnomaly is proven to successfully detect both sequential and quantitative log anomalies simultaneously. LogC [246] is a new log-based anomaly detection approach with component-aware analysis. It feeds both log template sequences and component sequences to train a combined LSTM model for detecting anomalous logs. 
NeuralLog [113] targets the performance caused by log parsing errors such as out-of-vocabulary words and semantic misunderstandings and employ BERT to perform neural representation. PLELog [238] is a semi-supervised anomaly detection approach that can get rid of time-consuming manual labeling and incorporate the knowledge on historical anomalies. SpikeLog [175] adopts a weakly supervised approach to train an anomaly score model, with the objective of handling a more reasonable premise scenario where a large number of logs are unlabeled. LogCraft [254] is an end-to-end unsupervised log anomaly detection framework based on automated machine learning, which mitigates the cost of understanding datasets and makes multiple attempts for building algorithms. Tweezers [33] uses a large language model to identify entities and build a relationship graph, and generates embeddings through graph attention network optimization to achieve security incident detection. LogSer [23] parses logs by preprocessing parameters, splitting logs, tree parsing, and template merging. It then inputs relevant embeddings into BERT training to detect anomalies, generate reports, and perform incremental updates. LogDLR [265] uses SBERT embeddings and a Transformer autoencoder with domain adversarial training to learn domain-invariant features, detecting anomalies via reconstruction error." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.622, + 0.91, + 0.687 + ], + "angle": 0, + "content": "5.2.3 Network Traffic-based Detectors. Network traffic comes from communications between hosts across a computer network. It is ruled by network protocols such as Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) and can be utilized for intrusion detection. Basically, network traffic-based detectors are termed NIDS." + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.699, + 0.912, + 0.916 + ], + "angle": 0, + "content": "Traditional Learning. 
Given the fact that network traffic is usually encrypted for secure communications, feature engineering-guided machine learning is widely applied in NIDS. NetPro [121] employs traceability reasoning with Merkle Hash Trees and digital signatures to detect direct and indirect MANET routing attacks while preserving node privacy, and outputs a traceability graph to identify malicious nodes and behaviors. CATH [72] is a catastrophe-theory-based approach for DoS detection in software-defined networks (SDNs), which leverages the selection, normalization, and fusion of statistical flow attributes to model network states. Whisper [56] pays attention to both high accuracy and high throughput by utilizing frequency domain features. SigML++ [211] is an extension of SigML for supervised anomaly detection approach. SigML++ employs Fully Homomorphic Encryption and Artificial Neural Network (ANN) for detection, resulting in execution without decrypting the logs. OADSD [253] achieves task independently and has the ability of adapting to the environment over SD-WAN by using On-demand Evolving Isolation Forest. LtRFT [204] innovatively introduces Learning-To-Rank scheme for mitigating the low-rate DDoS" + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.934, + 0.908, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.093, + 0.085, + 0.121, + 0.095 + ], + "angle": 0, + "content": "1:16" + }, + { + "type": "header", + "bbox": [ + 0.238, + 0.084, + 0.908, + 0.097 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.118, + 0.906, + 0.168 + ], + "angle": 0, + "content": "attacks targeted at flow tables. 
AGC [225] maps the original data into the embedding space through embedding learning to obtain more representative anchor points, thus achieving fine-grained classification of low-quality label data." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.174, + 0.906, + 0.675 + ], + "angle": 0, + "content": "Graph and Sequence Neural Network. In network traffic, packets consist of various contents and their flows can be represented as graphs. As a result, both graph neural network and sequence neural network are adopted in NIDS. Kitsune [159] is a plug and play NIDS that is allowed to detect attacks efficiently on the local network without supervision. It alleviates the problem that network gateways and router devices simply do not have the memory or processing power. MT-FlowFormer [260] is a semi-supervised framework to mitigate the lack of a mechanism for modeling correlations between flows and the requirement of a large volume of manually labeled data. \\(\\mathrm{I}^2\\mathrm{RNN}\\) [199] is an incremental and interpretable RNN for encrypted traffic classification, which can be efficiently adapted for incremental traffic types. ERNN [262] represents error-resilient RNN, which is a robust and end-to-end RNN model specially designed against network-induced phenomena. Euler [108] accelerates the most memory-intensive part, message-passing stage within GNN, with several concurrently-executed replicated GNNs. pVoxel [58] is an unsupervised method that proposes to leverage point cloud analysis to reduce false positives for the previous NIDS such as Whisper and Kitsune without requiring any prior knowledge on the alarms. NetVigil [91] is specially designed for east-west traffic within data center networks. It utilizes E-GraphSage and contrastive learning techniques to strengthen its resilience. Exosphere [57] detects flooding attacks by analyzing packet length patterns, without investigating any information in encrypted packets. 
DFNet [263] is a DDoS prevention paradigm denoted by preference-driven and in-network enforced shaping. RFH-HELAD [264] consists of a \\(K\\) classification model based on a deep neural network and a \\(K + 1\\) classification combining GAN and Deep kNN for detecting anomalies in network traffic. ReTrial [259] employs an improved graph attention network with Bayesian and EM algorithms to iteratively correct misleading links, enabling robust detection of encrypted malicious traffic. HEN [221] uses SMOTE to enhance data, trains LightGBM, generates explanations via SHAP, trains AE-LSTM to reconstruct SHAP values, sets a threshold from training errors, and marks test traffic with excess errors as attacks for intrusion detection. TCG-IDS [222] is the first self-supervised temporal contrastive GNN for network intrusion detection, capturing spatiotemporal traffic dependencies with high accuracy and low false alarms. A-NIDS [251] uses a shallow fully connected network for real-time detection and a Stacked CTGAN generator to address catastrophic forgetting and old data storage costs. GTAE-IDS [62] uses a graph autoencoder with a Transformer encoder and DNN decoder to learn benign traffic, enabling label-free, near-real-time intrusion detection and new attack identification." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.683, + 0.906, + 0.799 + ], + "angle": 0, + "content": "5.2.4 Hybrid Log-based Detectors. Based on the above discussions, a natural idea is to combine various types of log data for improving detection capability. OWAD [75] is a general framework to detect, explain, and adapt to normality shifts in practice. OWAD is validated to be effective in various detection granularity, covering provenance graphs, application logs, and network packets. 
FG-CIBGC [165] mines syncretic semantics in multi-source logs, including audit logs, application logs, and network traffic, using an LLM with in-context learning, and generates behavior graphs for comprehensive analysis." + }, + { + "type": "title", + "bbox": [ + 0.092, + 0.812, + 0.329, + 0.829 + ], + "angle": 0, + "content": "5.3 Attack Investigation" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.832, + 0.906, + 0.916 + ], + "angle": 0, + "content": "Beyond identifying individual intrusive nodes, IDS are expected to uncover the full story of intrusions (a.k.a. attack scenario graphs). This process is referred to as attack investigation, which can be done by directly detecting attack scenario graphs [216] or by progressively analyzing the causal relations between compromised nodes to construct attack scenario graphs [9, 41, 100, 232]. Attack scenario graphs are defined in terms of scenario graphs as follows:" + }, + { + "type": "footer", + "bbox": [ + 0.092, + 0.935, + 0.513, + 0.947 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.091, + 0.083, + 0.505, + 0.097 + ], + "angle": 0, + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + }, + { + "type": "page_number", + "bbox": [ + 0.878, + 0.084, + 0.908, + 0.095 + ], + "angle": 0, + "content": "1:17" + }, + { + "type": "table_caption", + "bbox": [ + 0.292, + 0.117, + 0.704, + 0.133 + ], + "angle": 0, + "content": "Table 5. Overview of attack investigation approaches." + }, + { + "type": "table", + "bbox": [ + 0.095, + 0.147, + 0.905, + 0.361 + ], + "angle": 0, + "content": "
TaxonomyApproachRelease TimeAudit LogApplication LogBase ModelStarting NodeInvestigation Granularity
Traditional LearningProvDetector [216]2020doc2vecPath
BehaviorBaseline [269]2025FastTextPath
Sequence Neural NetworkATLAS [9]2021LSTMGraph
LogTracer [166]2022DeepLogPath
ConLBS [118]2023TransformerGraph
AirTag [41]2023BERTGraph
Graph Neural NetworkLiu et al. [134]2022struc2vecGraph
Kairos [29]2023GNNGraph
TREC [139]2024GNNGraph
Orthrus [100]2025UniMPPath
Slot [176]2025GNNGraph
FeCoGraph [146]2025GCNGraph
" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.383, + 0.909, + 0.417 + ], + "angle": 0, + "content": "Definition 5.2. (Scenario Graph). Scenario graph is a subgraph of its given provenance graph, which is constructed by the nodes and edges causally dependent on nodes of interest." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.425, + 0.908, + 0.459 + ], + "angle": 0, + "content": "Definition 5.3. (Attack Scenario Graph). Attack scenario graph is a scenario graph where its nodes of interest are compromised nodes." + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.466, + 0.911, + 0.6 + ], + "angle": 0, + "content": "In the past, attack investigation is conducted by forward analysis and backward analysis [88]. Forward analysis discovers the influence that nodes of interest will cause and backward analysis traces back how nodes of interest are generated. Benefiting from DL techniques, both forward and backward analysis can be achieved by learning patterns of attack scenario graphs. Furthermore, visual analytics techniques have been widely used to assist security analysts in understanding the causal chain of intrusions [256, 261]. Table 5 summarizes the overview of attack investigation approaches. Similar to Section 5.2, we exclude papers [6, 52, 60, 80, 88, 98, 111, 120, 142, 157, 218, 239, 268] slightly relevant to DL for conciseness." + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.608, + 0.91, + 0.724 + ], + "angle": 0, + "content": "Traditional Learning. Unlike detecting intrusive nodes, attack scenario graphs are complicated and thus are hard to handle by traditional learning methods. ProvDetector [216] utilizes doc2vec to learn the embedding representation of paths in the provenance graph. Then a density-based detection is deployed to detect abnormal causal paths in the provenance graph. BehaviorBaseline [269] presents a novel learning-based anomaly detection method for large-scale provenance graphs. 
It incorporates dynamic graph processing with adaptive encoding and a tag-propagation framework for real-time detection." + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.732, + 0.911, + 0.917 + ], + "angle": 0, + "content": "Sequence Neural Network. Log data is in the form of natural language text or is allowed to be transformed into sequences of events, which facilitates the introduction of sequence neural networks. ATLAS [9] is a framework to construct end-to-end attack stories from readily available audit logs, which employs a novel combination of causal analysis and natural language processing. ATLAS exploits LSTM to automatically learn the pattern difference between attack and nonattack sequences. LogTracer [166] is an efficient anomaly tracing framework that combines data provenance and system log detection together. An outlier function with an abnormal decay rate is introduced to improve the accuracy. ConLBS [118] combines a contrastive learning framework and multilayer Transformer network for behavior sequence classification. AirTag [41] employs unsupervised learning to train BERT directly from log texts rather than relying on provenance graphs. AirTag constructs attack scenario graphs by integrating the detected victim nodes." + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.934, + 0.908, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.092, + 0.085, + 0.121, + 0.095 + ], + "angle": 0, + "content": "1:18" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.083, + 0.908, + 0.097 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.119, + 0.91, + 0.403 + ], + "angle": 0, + "content": "Graph Neural Network. To capture causal relations within graphs, GNN is commonly adopted. Liu et al. 
[134] propose an automated attack detection and investigation method via learning the context semantics of the provenance graph. The provenance graph analyzed by struc2vec captures temporal and causal dependencies of system events. Kairos [29] is a practical intrusion detection and investigation tool based on whole-system provenance. Kairos utilizes a GNN to analyze system execution history so as to detect and reconstruct complex APTs. It employs a GNN-based encoder-decoder architecture to learn the temporal evolution of provenance graph structure changes and quantify the abnormal degree of each system event. TREC [139] abstracts the APT attack investigation problem as a tactics/techniques recognition problem. TREC trains its model in a few-shot learning manner by adopting a Siamese neural network. Orthrus [100] identifies Quality of Attribution as the key factor determining whether or not the industry adopts an IDS. It first detects malicious hosts using a GNN encoder and then reconstructs the attack path through dependency analysis. Slot [176], based on provenance graphs and graph reinforcement learning, uncovers hidden relationships among system behaviors, dynamically adapts to new activities and attack strategies, resists adversarial attacks, and automatically constructs attack chains. FeCoGraph [146] directly processes traffic embeddings through line graphs to adapt to various GNNs, covering more attack scenarios while protecting data privacy." + }, + { + "type": "title", + "bbox": [ + 0.088, + 0.417, + 0.367, + 0.431 + ], + "angle": 0, + "content": "6 BENCHMARK DATASETS" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.438, + 0.908, + 0.471 + ], + "angle": 0, + "content": "DL-IDS relies on high-quality data to train an effective model. This section introduces the dimensions of datasets (Section 6.1) and some public datasets widely used in DL-IDS (Section 6.2)." 
+ }, + { + "type": "title", + "bbox": [ + 0.088, + 0.486, + 0.362, + 0.501 + ], + "angle": 0, + "content": "6.1 Dimensions of Datasets" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.507, + 0.84, + 0.524 + ], + "angle": 0, + "content": "The quality of DL-IDS datasets is generally characterized along the following dimensions:" + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.53, + 0.907, + 0.577 + ], + "angle": 0, + "content": "- Benign Scenarios: Benign data should cover benign behaviors and system activities to the greatest extent, enabling DL-IDS to learn patterns of benign behaviors and differentiate them from malicious behaviors." + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.58, + 0.907, + 0.628 + ], + "angle": 0, + "content": "- Malicious Scenarios: Malicious data ought to incorporate typical attack scenarios while taking into account the diversity of attacks, including short-term and long-term attacks, as well as simple attacks and multi-stage attacks." + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.629, + 0.907, + 0.661 + ], + "angle": 0, + "content": "- Ground-truth Labels: Data should be labeled as benign or malicious. For multi-stage attacks, it is useful to indicate the attack type or the attack stage each entry belongs to." + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.663, + 0.907, + 0.711 + ], + "angle": 0, + "content": "- Data Granularities: Datasets can come in different granularities. The most accepted form is raw log data. Due to copyright concerns, some replications [41, 99] merely provide post-processed log data without the corresponding processing source code." + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.712, + 0.907, + 0.761 + ], + "angle": 0, + "content": "- Operating Systems: The operating system determines the generalizability of the dataset. The more operating systems a dataset covers and the more common they are, the more comprehensively it can evaluate PIDS performance." 
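These dimensions can be checked mechanically for a candidate dataset. A minimal sketch, assuming a hypothetical entry format with `label` and `scenario` fields (not tied to any specific dataset):

```python
import json

def summarize_dataset(entries):
    """Summarize a labeled log dataset against the dimensions above:
    ground-truth label coverage and malicious-scenario diversity.
    The 'label'/'scenario' fields are hypothetical illustrations."""
    total = len(entries)
    labeled = [e for e in entries if e.get("label") is not None]
    malicious = sum(1 for e in labeled if e["label"] == "malicious")
    scenarios = sorted({e["scenario"] for e in labeled if e.get("scenario")})
    return {
        "events": total,
        "label_coverage": len(labeled) / total if total else 0.0,
        "malicious_ratio": malicious / len(labeled) if labeled else 0.0,
        "scenarios": scenarios,
    }

entries = [
    {"label": "benign", "scenario": "browsing"},
    {"label": "malicious", "scenario": "drive-by download"},
    {"label": None},  # unlabeled entry lowers label coverage
]
print(json.dumps(summarize_dataset(entries), indent=2))
```

Such a summary makes gaps along the dimensions (e.g., low label coverage or few malicious scenarios) visible before a dataset is adopted for training.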
+ }, + { + "type": "list", + "bbox": [ + 0.121, + 0.53, + 0.907, + 0.761 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.088, + 0.778, + 0.285, + 0.792 + ], + "angle": 0, + "content": "6.2 Public Datasets" + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.798, + 0.909, + 0.863 + ], + "angle": 0, + "content": "Publicly available datasets bring great convenience to research on DL-IDS. However, some researchers use self-made datasets that are not publicly available, making it difficult for other researchers to reuse their datasets [46]. To address this issue, we collect and organize some open-source datasets for further studies, which are listed in Table 6." + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.865, + 0.909, + 0.914 + ], + "angle": 0, + "content": "LANL Dataset [103] is collected from Los Alamos National Laboratory's corporate, internal computer network. The dataset consists of 58 consecutive days of de-identified data, covering about 165 million events from 12 thousand users. Its data sources include Windows-based" + }, + { + "type": "footer", + "bbox": [ + 0.088, + 0.934, + 0.516, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.093, + 0.085, + 0.503, + 0.097 + ], + "angle": 0, + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + }, + { + "type": "page_number", + "bbox": [ + 0.879, + 0.085, + 0.906, + 0.094 + ], + "angle": 0, + "content": "1:19" + }, + { + "type": "table_caption", + "bbox": [ + 0.092, + 0.118, + 0.908, + 0.147 + ], + "angle": 0, + "content": "Table 6. Overview of public datasets. W, L, F, A, M, and S represent the operating systems Windows, Linux, FreeBSD, Android, Mac, and supercomputer, respectively." + }, + { + "type": "table", + "bbox": [ + 0.158, + 0.162, + 0.84, + 0.388 + ], + "angle": 0, + "content": "
Dataset | Release | Size | Scenarios | Label | Format | System
LANL Dataset [103] | 2015 | 12 GB | - | Yes | .txt | W
StreamSpot [145] | 2016 | 2 GB | 1 | Yes | .tsv | L
AWSCTD [22] | 2018 | 39 GB | - | No | SQLite | W
DARPA TC E3 [38] | 2018 | 366 GB [67] | 6 | No | CDM | W, L, F, A
DARPA TC E5 [38] | 2019 | 2,699 GB [67] | 8 | No | CDM | W, L, F, A
DARPA OpTC [37] | 2020 | 1,100 GB [13] | - | No | eCAR | W
Unicorn SC [76] | 2020 | 147 GB | 2 | Yes | CDM | L
CERT Dataset [63, 131] | 2020 | 87 GB | - | Yes | .csv | W
LogChunks [20] | 2020 | 24.1 MB | - | Yes | .txt | -
Loghub [266] | 2020 | 77 GB | - | - | .txt | W, L, M, S
ATLAS [9] | 2021 | 0.5 GB | 10 | Yes | .txt | W
ATLASv2 [184] | 2023 | 12 | 10 | Yes | .txt | W
ProvSec [197] | 2023 | - | 11 | Yes | .json | L
AutoLabel [173] | 2025 | 136 GB | 29 | Yes | .json | L
" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.432, + 0.908, + 0.464 + ], + "angle": 0, + "content": "authentication events, process start and stop events, DNS lookups, network flows, and a set of well-defined red teaming events." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.465, + 0.908, + 0.548 + ], + "angle": 0, + "content": "StreamSpot dataset [145] is made up of 1 attack and 5 benign scenarios. The attack scenario exploits a Flash vulnerability and gains root access to the visiting host by visiting a malicious drive-by download URL. The benign scenarios are relevant to normal browsing activity, specifically watching YouTube, browsing news pages, checking Gmail, downloading files, and playing a video game. All the scenarios are simulated through 100 automated tasks with the Selenium RC [208]." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.548, + 0.908, + 0.647 + ], + "angle": 0, + "content": "DARPA TC datasets [38] are sourced from the DARPA Transparent Computing (TC) program, identified by the number of engagements from E1 to E5. Among them, DARPA TC E3 is the most widely used. The TC program aims to make current computing systems transparent by providing high-fidelity visibility during system operations across all layers of software abstraction. Unfortunately, DARPA TC datasets are released without labels, and DARPA makes no warranties as to the correctness, accuracy, or usefulness of the datasets." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.647, + 0.908, + 0.763 + ], + "angle": 0, + "content": "DARPA Operationally Transparent Cyber (OpTC) [37] is a technology transition pilot study funded under Boston Fusion Corporate. The OpTC system architecture is based on the one used in TC program evaluation. In OpTC, every Windows 10 endpoint is equipped with an endpoint sensor that monitors post events, packs them into JSON records, and sends them to Kafka. 
A translation server aggregates the data into eCAR format and pushes them back to Kafka. OpTC scales TC components from 2 to 1,000 hosts. The dataset consists of approximately 1 TB of compressed JSON data in a highly instrumented environment over two weeks." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.763, + 0.908, + 0.863 + ], + "angle": 0, + "content": "Unicorn SC [76] is a dataset specifically designed for APT detection, proposed by Han et al., authors of the Unicorn model. The dataset includes two supply chain scenarios, wget and shell shock, where each scenario lasts for 3 days to simulate the long-term feature of APT attacks, resulting in provenance data containing 125 benign behaviors and 25 malicious behaviors. The data is saved in the form of provenance graphs, describing the causal relationships during the system execution process." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.863, + 0.908, + 0.912 + ], + "angle": 0, + "content": "CERT Dataset [131] is a collection of synthetic insider threat test datasets that provide both background and malicious actor synthetic data. It is developed by the CERT Division, in collaboration with ExactData, LLC, and under sponsorship from DARPA I2O. CERT dataset learned" + }, + { + "type": "footer", + "bbox": [ + 0.484, + 0.936, + 0.905, + 0.947 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.093, + 0.085, + 0.121, + 0.095 + ], + "angle": 0, + "content": "1:20" + }, + { + "type": "header", + "bbox": [ + 0.238, + 0.084, + 0.908, + 0.097 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.118, + 0.905, + 0.151 + ], + "angle": 0, + "content": "important lessons about the benefits and limitations of synthetic data in the cybersecurity domain and carefully discussed models of realism for synthetic data." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.152, + 0.906, + 0.234 + ], + "angle": 0, + "content": "LogChunks [20] is an application log dataset for build log analysis, containing 797 annotated Travis CI build logs from 80 GitHub repositories and 29 programming languages. These logs are from mature and popular projects, collected through repository, build, and log sampling. Each log in the dataset has manually labeled text blocks of build failure reasons, search keywords, and structural categories, and cross-validated with the original developers with an accuracy of \\(94.4\\%\\)." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.235, + 0.906, + 0.334 + ], + "angle": 0, + "content": "Loghub dataset [266] is a large collection of system log datasets, providing 19 real-world log data from various software systems, including distributed systems, supercomputers, operating systems, mobile systems, server applications, and standalone software. The objective of Loghub is to fill the significant gap between intelligent automated log analysis techniques and successful deployments in the industry. For the usage scenarios of Loghub, about \\(35\\%\\) are anomaly detection, \\(13\\%\\) are log analysis, and \\(8\\%\\) are security." 
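Several of the datasets above ship ground-truth labels, which enable the standard detection metrics used across the surveyed work. A minimal sketch, assuming binary labels with 1 = malicious (an illustrative convention, not taken from any specific paper):

```python
def detection_metrics(y_true, y_pred):
    """Compute the metrics commonly reported for DL-IDS from binary
    labels (1 = malicious): precision, recall/TPR, F1, and FPR."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0  # recall is identical to TPR
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return {"precision": precision, "recall": recall, "f1": f1, "fpr": fpr}

# 1 = malicious, 0 = benign
y_true = [1, 1, 0, 0, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0]
print(detection_metrics(y_true, y_pred))
```

Reporting all four values together avoids the incomparability that arises when some studies report only precision/recall/F1 and others only TPR/FPR.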
+ }, + { + "type": "text", + "bbox": [ + 0.092, + 0.335, + 0.906, + 0.417 + ], + "angle": 0, + "content": "ATLAS dataset [9] implements 10 attacks based on detailed reports of real-world APT campaigns and generates audit logs in a controlled testbed environment. Among the ten attacks, four involve a single host and the remaining six involve multiple hosts. All attacks were developed and executed on Windows 7 32-bit virtual machines and took an hour to complete, accompanied by a 24-hour window of audit logs for benign system behaviors." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.418, + 0.906, + 0.517 + ], + "angle": 0, + "content": "ATLASv2 dataset [184] enriches the ATLAS dataset with higher quality background noise and additional logging vantage points. In this dataset, two researchers used the victim machines as their primary workstations throughout the engagement, instead of depending on automated scripts to generate activity. System logging, in contrast, covers a five-day period, where the first four days simulate normal work days and the fifth day begins with benign activity and then transitions into execution of the corresponding attack." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.518, + 0.906, + 0.584 + ], + "angle": 0, + "content": "ProvSec dataset [197] is created for system provenance forensic analysis. To fulfill data provenance requirements, ProvSec includes the full details of system calls, including their parameters. In ProvSec, 11 realistic attack scenarios with real software vulnerabilities and exploits are used, and an algorithm to improve data quality in system provenance forensic analysis is presented." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.584, + 0.906, + 0.65 + ], + "angle": 0, + "content": "AutoLabel dataset [173] automates fine-grained log labeling by reducing the labeling problem to obtaining an accurate attack subgraph in a provenance graph. 
Its experiments consist of 29 scenarios, including 25 real CVE vulnerabilities across 12 widely-used applications (spanning 5 programming languages) plus a Sandworm threat simulation by MITRE CTID." + }, + { + "type": "title", + "bbox": [ + 0.092, + 0.665, + 0.527, + 0.68 + ], + "angle": 0, + "content": "7 CHALLENGES AND FUTURE DIRECTIONS" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.686, + 0.906, + 0.752 + ], + "angle": 0, + "content": "After the detailed introduction to the data management stage and the intrusion detection stage, as well as the widely-used benchmark datasets, this section further discusses challenges encountered in existing DL-IDS and summarizes the corresponding visions. These include fundamental resources (Section 7.1), pre-trained large models (Section 7.2), and comprehensive applications (Section 7.3)." + }, + { + "type": "title", + "bbox": [ + 0.092, + 0.767, + 0.364, + 0.782 + ], + "angle": 0, + "content": "7.1 Fundamental Resources" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.788, + 0.906, + 0.82 + ], + "angle": 0, + "content": "Effective DL-IDS heavily depends on core fundamental resources such as datasets and computing facilities to develop [105]. Here, we will discuss their challenges one after the other." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.832, + 0.906, + 0.914 + ], + "angle": 0, + "content": "7.1.1 Poor Data Quality. Existing datasets for DL-IDS may contain errors, inaccuracies, or missing values. This leads to unreliable descriptions of system behaviors that may mislead DL-IDS. For example, in some cases of the DARPA TC dataset, the PROCESS object and its source fail to properly resolve conflicts, resulting in possible incorrect transformation. Besides, the acuity_level value of the FLOW object is 0, while the value range for this field in other objects is from 1 to 5. Another" + }, + { + "type": "footer", + "bbox": [ + 0.092, + 0.934, + 0.513, + 0.947 + ], + "angle": 0, + "content": "J. ACM, Vol. 
1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.091, + 0.083, + 0.505, + 0.097 + ], + "angle": 0, + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + }, + { + "type": "page_number", + "bbox": [ + 0.878, + 0.085, + 0.906, + 0.096 + ], + "angle": 0, + "content": "1:21" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.118, + 0.909, + 0.201 + ], + "angle": 0, + "content": "example could be the LogChunks [20] dataset. In this dataset, the content describing the failure reasons is possibly incomplete. This is because a chunk in LogChunks only contains a continuous substring of the log text, while a failure reason may be described across multiple sections of the log. Moreover, LogChunks neglects the classification of failure reasons like test, compilation, and code inspection errors, which hinders further research from analyzing failure reasons." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.202, + 0.911, + 0.417 + ], + "angle": 0, + "content": "Meanwhile, high-quality ground-truth labels are hard to acquire, impeded by the contradiction between fine-grained manual labeling and automated label generation. On one hand, for unknown intrusions such as zero-day attacks, it is very labor-intensive for security analysts to map each attack scenario to specific log entries, even when coarse-grained attack scenarios have been acquired. The DARPA TC dataset [38] is a typical example: it only provides a ground-truth report of attack scenarios, which does not correspond to any specific log entries. Although a few researchers [219] provide third-party ground-truth labels manually identified by themselves, we empirically find some ambiguities between their labels and the official attack scenario report. These ambiguities have an obviously negative effect on DL-IDS and, to some extent, may even cause errors to accumulate. 
On the other hand, the development of automated labeling tools is in an awkward position. Labeled log data is generated from given prior knowledge of intrusions [28], whereas the challenge of DL-IDS is to detect zero-day intrusions. This tends to make the development of such automated tools somewhat pointless." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.418, + 0.911, + 0.552 + ], + "angle": 0, + "content": "In addition, there are no unified and effective evaluation metrics for DL-IDS [29], which further weakens the potential of datasets. For example, precision, recall, and F1 score are used in most studies [9, 99, 182, 216], while some papers [41] propose to use True Positive Rate (TPR) and False Positive Rate (FPR) as evaluation metrics. This often makes comparison experiments unfair and makes it hard to tell whether the validation is convincing. We also note that in many cases where the percentage of negatives (or malicious log entries) is low, sacrificing FPR can always significantly increase TPR. For example, sacrificing 1,000 false positives for one true positive might only increase FPR by \\(0.05\\%\\), but would increase TPR by \\(5\\%\\)." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.564, + 0.909, + 0.613 + ], + "angle": 0, + "content": "7.1.2 Insufficient Amount of Data. Although log data is generated very quickly (e.g., eBay generated 1.2 PB of log data per day by 2018 [189]), DL-IDS still faces challenges from insufficient amounts of data. Setting aside the above data quality issues such as inaccuracies, the reasons are three-fold:" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.614, + 0.91, + 0.763 + ], + "angle": 0, + "content": "First, log data has an extremely large number of trivial events, which are proven ineffective and usually removed by graph summarization [237, 250]. 
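The trivial-event removal performed by graph summarization can be illustrated as a simple filter over provenance events; the operation names and filtering rule below are hypothetical illustrations, not any published algorithm:

```python
# Trivial memory-related operations that rarely relate to attacks;
# this set of operation names is a hypothetical example.
TRIVIAL_OPS = frozenset({"mmap", "mprotect"})

def drop_trivial(events, trivial_ops=TRIVIAL_OPS):
    """Filter out trivial provenance events, mimicking (in spirit)
    the reduction that graph summarization approaches achieve."""
    return [e for e in events if e["op"] not in trivial_ops]

events = [
    {"op": "execve", "src": "bash", "dst": "malware"},
    {"op": "mmap", "src": "malware", "dst": "mem:0x7f00"},
    {"op": "write", "src": "malware", "dst": "/etc/passwd"},
]
print(f"{len(events)} events -> {len(drop_trivial(events))} after summarization")
```

In real systems the reduction is far more aggressive, which is exactly why the usable fraction of each dataset ends up small.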
For example, data provenance provides fine-grained information about memory-related events, such as data-to-memory mapping and protection of certain memory addresses. These memory-related events basically do not involve attacks, and unfortunately, are always orthogonal to the existing DL-IDS. However, to ensure the completeness requirement of data provenance and to capture very infrequent but inevitable memory attacks, these memory-related events are still recorded in benchmark datasets. As a result, the usable part of each dataset is rather small for DL-IDS, which can be reflected by the high summarization ratio achieved by graph summarization approaches (e.g., \\(70\\%\\) [234])." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.764, + 0.911, + 0.913 + ], + "angle": 0, + "content": "The second reason for an insufficient amount of data is the limited dataset representativeness. As observed in Table 6, most of the datasets have no more than 10 attack scenarios, not to mention that each of these attack scenarios has been carefully chosen by its authors. This limited number of attack scenarios suggests that existing datasets can hardly represent the diversified attack methods, as the number of CVE records has already exceeded 280,000 [31]. Furthermore, existing datasets such as DARPA TC E3 [38] are collected in a specific experimental environment, may not cover other types of normal system behaviors, and have been shown to contain a significant amount of synthetic data [133]. DARPA TC E5 [38] is unusable for most experiments due to the sparse and error-filled documentation. Unicorn SC [76] is generated by an idealized simulation
+ } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.092, + 0.085, + 0.121, + 0.095 + ], + "angle": 0, + "content": "1:22" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.083, + 0.908, + 0.097 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.119, + 0.907, + 0.168 + ], + "angle": 0, + "content": "of supply chain scenarios, which means many real-world features are prone to be ignored in this dataset. Hence, training DL-IDS on these non-representative datasets could be a disaster for the computer systems that they protect." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.17, + 0.909, + 0.269 + ], + "angle": 0, + "content": "Finally, the accessibility of datasets further exacerbates the insufficient data problem. Due to privacy and copyright issues, some datasets may be proprietary or difficult to obtain [216, 218]. Moreover, ProvDetector [216] conducted a three-month system evaluation in an enterprise environment with 306 hosts and collected benign provenance data of 23 target programs. Yet this dataset has not been made public, rendering it unavailable to improve other DL-IDS and almost all the assessment settings related to ProvDetector are susceptible to inequity." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.277, + 0.911, + 0.461 + ], + "angle": 0, + "content": "7.1.3 Potential Heavy Computation Requirements. Similar to other DL techniques, DL-IDS also requires a potentially large amount of computing resources to improve their performance. According to [185], the generalizability of neural models is proportional to the investment of computing resources. Supposing that the challenge of insufficient data is mitigated and a large volume of log data is available, more computing resources are inevitably required. 
Besides, we will illustrate in Section 7.2 that there are plenty of powerful techniques that have not been introduced in DL-IDS, which will also bring in additional computation requirements. Unfortunately, acceleration methods like parallel computation and efficient retrieval have not been fully explored by the cybersecurity community. An example is that the computation time of Unicorn running on one core is proven to be linear in its workload [76]. It is clear that the efficiency of Unicorn, which is not implemented in parallel, will hit a bottleneck as soon as this core does." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.469, + 0.912, + 0.57 + ], + "angle": 0, + "content": "7.1.4 Future Directions. To conclude, the challenges for DL-IDS in fundamental resources consist of data quality, data volume, and computational overhead. Apart from unintentional errors and nontechnical issues in fundamental resources, the research questions that urgently need to be addressed include the contradiction between unaffordable manual labeling and non-generalizable auto-labeling techniques, non-unified benchmark datasets and evaluation metrics, as well as potential heavy computational overheads. Therefore, we summarize the future directions as follows:" + }, + { + "type": "list", + "bbox": [ + 0.088, + 0.277, + 0.912, + 0.57 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.159, + 0.581, + 0.327, + 0.595 + ], + "angle": 0, + "content": "Future Directions" + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.606, + 0.838, + 0.653 + ], + "angle": 0, + "content": "- Developing efficient man-machine interactive log labeling mechanisms and organizing open-source data-sharing platforms accordingly to provide large amounts of high-quality datasets." 
+ }, + { + "type": "text", + "bbox": [ + 0.16, + 0.655, + 0.838, + 0.687 + ], + "angle": 0, + "content": "- Maintaining effective and comprehensive benchmark datasets, accompanied by a unified performance metric framework for a fair comparison." + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.689, + 0.838, + 0.721 + ], + "angle": 0, + "content": "- Investigating parallel or simplified strategies for DL-IDS, and studying their integration with log storage systems to achieve end-to-end acceleration." + }, + { + "type": "list", + "bbox": [ + 0.16, + 0.606, + 0.838, + 0.721 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.089, + 0.746, + 0.494, + 0.763 + ], + "angle": 0, + "content": "7.2 Pre-training Theories and Techniques" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.766, + 0.909, + 0.832 + ], + "angle": 0, + "content": "In recent years, significant progress has been made by Large Language Models (LLMs) in the field of DL. Their capacity to understand and generate dialogue has been greatly enhanced as the model parameters of LLMs keep rising. T5 [179], BERT [39], GPT [178], GPT-4 [2], LaMDA [207], and LLaMA [209] are notable examples." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.833, + 0.909, + 0.916 + ], + "angle": 0, + "content": "With the development of pre-training techniques, LLMs have been adopted in many fields such as finance [258], education [164], medicine [172], and even other domains of cybersecurity [34, 69, 92]. In contrast, the adoption of LLMs in DL-IDS is stagnant, as shown in Figure 6. We can observe that LLMs developed at full speed beginning in 2019. Their prosperity, however, has not extended to DL-IDS. Until now, the only two DL-IDS that incorporate pre-training techniques, AirTag [41] and" + }, + { + "type": "footer", + "bbox": [ + 0.088, + 0.934, + 0.515, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ], + [ + { + "type": "header", + "bbox": [ + 0.091, + 0.084, + 0.504, + 0.097 + ], + "angle": 0, + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + }, + { + "type": "page_number", + "bbox": [ + 0.878, + 0.085, + 0.908, + 0.095 + ], + "angle": 0, + "content": "1:23" + }, + { + "type": "image", + "bbox": [ + 0.101, + 0.125, + 0.905, + 0.349 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.088, + 0.378, + 0.908, + 0.424 + ], + "angle": 0, + "content": "Fig. 6. Interactions between DL models and DL-IDS. While DL models proposed before 2019 have already been leveraged in DL-IDS, the adoption of the emerging LLMs (or pre-training theories and techniques) since 2020 remains underdeveloped in this domain." + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.449, + 0.909, + 0.613 + ], + "angle": 0, + "content": "MAGIC [99], still do not make full use of the potential of LLMs. AirTag pre-trains a BERT model on application logs and detects intrusions in terms of embeddings generated by BERT. MAGIC introduces GraphMAE [90], a model architecture derived from Graph Autoencoder [109] in 2016 but integrated with the famous masked self-supervised learning method [81] in 2022, to conduct self-supervised learning on provenance graphs. MAGIC further designs an adapter to apply the pre-trained model in different detection scenarios. Nevertheless, both AirTag and MAGIC can be regarded as preliminary explorations of pre-training techniques. According to the scaling law [102], the performance of LLMs will steadily improve as the parameters, data, and computation increase. Moreover, the reasoning ability of LLMs will suddenly emerge [220], allowing them to chat with humans smoothly. Such advantageous abilities obviously have not been incorporated into DL-IDS." 
+ }, + { + "type": "text", + "bbox": [ + 0.087, + 0.616, + 0.91, + 0.665 + ], + "angle": 0, + "content": "Nowadays, some researchers [7, 59, 125, 160] have started to explore the applications of LLMs in DL-IDS. Yet the theories and techniques of such a combination remain challenging. In the following, we will illustrate the identified issues and then point out the future directions." + }, + { + "type": "text", + "bbox": [ + 0.087, + 0.674, + 0.911, + 0.84 + ], + "angle": 0, + "content": "7.2.1 Trade-off between Reliability and Generalizability. The governing concern for the employment of LLMs in DL-IDS is reliability (or explainability). Although offering generalizability, LLMs have long been criticized for issues with hallucinations [149, 241], privacy [84, 240, 244], overreliance [107], and backdoor threats [136]. These unexplainable and uncontrollable features are an absolute disaster for DL-IDS. For example, when fed log data, LLMs are prone to hallucinate and provide wrong detection results. Attacks thus successfully bypass the detection facilities and can exfiltrate sensitive data in the victim computer systems. Another example is that sensitive information may leak from LLMs. Hui et al. [93] present a prompt leakage attack for LLMs, which is demonstrated to be effective in both offline settings and real-world LLM applications." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.849, + 0.912, + 0.916 + ], + "angle": 0, + "content": "7.2.2 Short of Statistical Log Modeling. LLMs are developed on the basis of statistical language modeling [101, 187], which has not been sufficiently studied for log data. 
The statistical modeling of natural language can be traced back to the early 1950s when Shannon pioneered the technique of predicting the next element of natural language text [195] and discussed the n-gram model for" + }, + { + "type": "list", + "bbox": [ + 0.087, + 0.674, + 0.912, + 0.916 + ], + "angle": 0, + "content": null + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.934, + 0.908, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.093, + 0.085, + 0.121, + 0.095 + ], + "angle": 0, + "content": "1:24" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.083, + 0.908, + 0.097 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + }, + { + "type": "table_caption", + "bbox": [ + 0.092, + 0.117, + 0.908, + 0.148 + ], + "angle": 0, + "content": "Table 7. Comparison of research advances in statistical modeling of various data. \"NL\", \"PL\" and \"FL\" represent Natural Language, Programming Language, and Formal Language, respectively. Note that PL is a type of FL." + }, + { + "type": "table", + "bbox": [ + 0.107, + 0.162, + 0.887, + 0.257 + ], + "angle": 0, + "content": "
<table><thead><tr><th>Data</th><th>Form</th><th>Content Generation Rules</th><th>Statistical Modeling Studies</th><th>Pre-training</th></tr></thead><tbody><tr><td>Text</td><td>NL</td><td>Grammar, pragmatics, semantics, etc.</td><td>[101, 148, 187, 196]</td><td>well-done</td></tr><tr><td>Speech</td><td>NL</td><td>Text rules (see above) and phonetics</td><td>[104, 167]</td><td>well-done</td></tr><tr><td>Source code</td><td>PL</td><td>Lexical and syntactic definitions</td><td>[8, 85, 180]</td><td>well-done</td></tr><tr><td>Log</td><td>NL + FL</td><td>Log templates defined by developers</td><td>future work</td><td>underdeveloped</td></tr></tbody></table>" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.313, + 0.908, + 0.445 + ], + "angle": 0, + "content": "English [196]. After that, as machine learning came into the view of the NLP research communities, language modeling flourished, and many models such as TreeBank [148], word2vec [154, 155], and LSTM [86] were proposed. Over the decades, researchers in NLP have gained solid knowledge of language modeling, and their interests have gradually shifted to efficiency. An epoch-making model, Transformer [212], was presented using the multi-head self-attention mechanism to fulfill parallel computing, which was widely exploited in popular pre-trained models such as BERT [39] and GPT [2] afterward. It is evident that the success of LLMs comes from the prolonged studies on statistical language modeling." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.447, + 0.908, + 0.595 + ], + "angle": 0, + "content": "Unfortunately, there are almost no research efforts on the statistical modeling of log data, resulting in the pre-training techniques of DL-IDS remaining underdeveloped. By contrast, as illustrated in Table 7, the statistical modeling studies of other types of data have already started. Hindle et al. [85] demonstrate that source code is very repetitive and predictable, and, in fact, even more so than natural language. Driven by such statistical modeling conclusions, DL-based source code applications [54, 70, 124, 126, 203, 233, 235] such as code generation and code clone detection flourish, many of which have already become common applications of LLMs. Similar cases can be found for speech data, with applications such as text-to-speech [71, 169, 183] and speech recognition [14]." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.596, + 0.908, + 0.794 + ], + "angle": 0, + "content": "We argue that log data is also created by humans, similar to text, speech, and source code. 
It is generated according to developer-defined log templates, in a form of both natural language (e.g., application logs) and formal language (e.g., data provenance in CDM format). Given the fact that natural language (e.g., text and speech) and formal language (e.g., source code) both exhibit positive performance in pre-training, log data urgently demands statistical modeling achievements to facilitate its pre-training research. Although several works [96, 152] have discussed the features of log data, they are orthogonal to the explainable combination of DL and IDS. Compared with the other data types, the challenges in statistical log modeling may lie, for instance, in the fact that logs are extremely long and detailed for reliability purposes. It is very common that the length of one single log entry equals that of one paragraph in natural language texts. These challenges happen to coincide with the shortcomings of LLMs: the inability to handle long texts and the lack of trustworthiness in generated content." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.811, + 0.908, + 0.911 + ], + "angle": 0, + "content": "7.2.3 Future Directions. According to the scaling laws [102] and the emergent abilities theory [220], as the model size continues to grow, the performance of DL-IDS will increase simultaneously. Thus, increasing the number of model parameters will be an inevitable trend for DL-IDS. The underlying research questions include the strategies for incorporating existing LLMs in intrusion detection, since it is infeasible to directly leverage unreliable LLMs to detect intrusions, and the theories and techniques for modeling long and detailed log data. We summarize the future directions as follows:" + }, + { + "type": "footer", + "bbox": [ + 0.092, + 0.936, + 0.513, + 0.947 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ], + [ + { + "type": "header", + "bbox": [ + 0.091, + 0.083, + 0.505, + 0.097 + ], + "angle": 0, + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + }, + { + "type": "page_number", + "bbox": [ + 0.878, + 0.085, + 0.908, + 0.096 + ], + "angle": 0, + "content": "1:25" + }, + { + "type": "title", + "bbox": [ + 0.159, + 0.12, + 0.327, + 0.133 + ], + "angle": 0, + "content": "Future Directions" + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.144, + 0.841, + 0.193 + ], + "angle": 0, + "content": "- Investigating how and where to introduce LLMs into DL-IDS, as in [165], with the objective of balancing the generalizability provided by LLMs and the reliability required by DL-IDS." + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.194, + 0.842, + 0.245 + ], + "angle": 0, + "content": "- Exploring fundamental statistical modeling theories for log data. On this basis, designing pre-training frameworks for log data and its downstream tasks such as steps within the workflow of DL-IDS (see Section 3.2)." + }, + { + "type": "list", + "bbox": [ + 0.16, + 0.144, + 0.842, + 0.245 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.089, + 0.266, + 0.548, + 0.282 + ], + "angle": 0, + "content": "7.3 Comprehensive Applications and Scenarios" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.287, + 0.911, + 0.337 + ], + "angle": 0, + "content": "DL-IDS possess abilities that traditional IDS lack or find difficult to realize, such as generalizability for zero-day attacks and modeling ability for complicated downstream tasks. We will elaborate on the possible new-style applications and discuss the challenges both within and introduced by them." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.345, + 0.906, + 0.412 + ], + "angle": 0, + "content": "7.3.1 Limited Forward and Backward Tracing Scope. Forward tracing and backward tracing are employed in attack investigation, as illustrated in Section 5.3. 
Under traditional settings, the forward tracing analyzes the influence a symptom node would have on the victim computer system, and the backward tracing discovers the starting node where the vulnerabilities exist [270]." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.412, + 0.91, + 0.51 + ], + "angle": 0, + "content": "We argue that the existing tracing scope is too limited to handle increasingly complicated intrusions and that DL-IDS can be defined more broadly. In addition to investigating scenario graphs of intrusions, DL-IDS are supposed to further investigate why these intrusions occur and how to hold them back. The broader definition introduces more downstream tasks that would be difficult to accomplish without the assistance of DL techniques. Based on Definition 3.3, we reformulate the definition of intrusion in a broad sense for DL-IDS as follows:" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.519, + 0.91, + 0.569 + ], + "angle": 0, + "content": "Definition 7.1. (Generalized Intrusion). A generalized intrusion comprises the malicious attempts against a computer, a network, or the corresponding security facilities, whose attributes encompass not only the intrusion itself but also its underlying root causes and the relevant control measures." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.575, + 0.909, + 0.691 + ], + "angle": 0, + "content": "In this way, the detection scope of DL-IDS has been extended to the broadly defined intrusions, including their attributes of both root causes and control measures. When executing backward tracing analysis, DL-IDS are not only required to detect the starting symptom nodes of intrusions, but also required to find the root causes of these symptom nodes (i.e., vulnerabilities in source code). In the forward tracing analysis, in addition to detecting the symptom nodes affected by intrusions, DL-IDS should perform an in-depth analysis to discover the potentially compromised nodes and provide control measures for handling intrusions." 
+ }, + { + "type": "text", + "bbox": [ + 0.088, + 0.692, + 0.911, + 0.842 + ], + "angle": 0, + "content": "Thankfully, several pioneering works have studied similar problems [25, 144]. In AiVl [25], algorithms to bridge log entries and program models are developed using dynamic-static program analysis. Root causes of the exploited vulnerabilities can thus be derived directly from intrusion detection. Pedro et al. [144] investigate detection and mitigation methods for DDoS attacks, aiming to contain them immediately. Additionally, semi-automated adaptive network defense (SAND) [26] leverages SDN to dynamically generate and deploy defense rules. We note that these research attempts are all based on heuristics, either using pre-defined rules to generate root causes or developing control measures for specific intrusions. Thus, there is a substantial need to introduce advanced DL techniques to this problem." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.849, + 0.912, + 0.916 + ], + "angle": 0, + "content": "7.3.2 Concerns about Data-driven Adversarial Attacks. To validate the detection performance, DL-IDS commonly idealize the experimental data in their threat model. Such idealization, however, leaves DL-IDS with weaknesses that could be exploited by invaders. For example, a common assumption is that no attacks are considered to compromise the security of the log collection" + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.934, + 0.908, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.092, + 0.085, + 0.121, + 0.095 + ], + "angle": 0, + "content": "1:26" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.083, + 0.908, + 0.097 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.118, + 0.907, + 0.184 + ], + "angle": 0, + "content": "systems [76, 79, 99, 182], i.e., the log data utilized in DL-IDS is assumed to be absolutely harmless. But as attacks become more stealthy and complicated, such an assumption apparently becomes impossible to satisfy. When DL-IDS encounter intentional data poisoning attacks, prediction backdoors could easily be planted as persistent vulnerabilities." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.186, + 0.909, + 0.318 + ], + "angle": 0, + "content": "The robustness of DL-IDS is also challenged by data-driven evasion attacks. To evade detection, malicious behaviors usually mimic benign ones (a.k.a., mimicry attacks), making them hard to detect. As early as 2002, David et al. [215] indicated the danger of mimicry attacks on HIDS. Recently, researchers have started to investigate mimicry attacks on DL-IDS [64, 132, 161], and their studies all present effective evasion of detection. As shown by a study [24], DL-IDS can even be plagued by a trivial perturbation in log data. Aware of this issue, R-caid [65] proposes to embed root causes into the detection model for countering adversarial attacks. However, as noted in recent work [64, 65, 161], data-driven attacks still remain a major challenge for DL-IDS." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.327, + 0.908, + 0.375 + ], + "angle": 0, + "content": "7.3.3 Underexplored Promising Scenarios. While DL-IDS have recently shown excellent performance in the protection of computer and network systems, there are still many promising scenarios for DL-IDS that have not been explored sufficiently." 
+ }, + { + "type": "text", + "bbox": [ + 0.088, + 0.378, + 0.91, + 0.543 + ], + "angle": 0, + "content": "Mobile edge computing (MEC) [1, 117, 147] is a typical scenario. In the MEC environment, mobile computing, network control, and storage are pushed to the network edges so as to enable computation-intensive tasks on resource-limited devices. At the network edges, devices such as Unmanned Aerial Vehicles (UAVs) and New Energy Vehicles (NEVs) usually lack computing power and security facilities, making it difficult to protect them from intrusions [198]. In the meantime, containerized deployment has become one of the dominant ways to deploy microservices. Detecting intrusions on containers is thus of great importance, for which ReplicaWatcher [46] is a representative work with a special design for microservices. Additionally, industrial networks are characterized by high fidelity, stability, and real-time responsiveness [110], leading to challenges in adapting DL-IDS to their infrastructures." + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.552, + 0.908, + 0.603 + ], + "angle": 0, + "content": "7.3.4 Future Directions. Although there has been plenty of research on DL-IDS, many applications and scenarios remain underdeveloped. DL-IDS ought to be more broadly defined and applied. Based on the above discussion, we briefly summarize the future directions as follows:" + }, + { + "type": "title", + "bbox": [ + 0.159, + 0.614, + 0.327, + 0.629 + ], + "angle": 0, + "content": "Future Directions" + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.639, + 0.838, + 0.685 + ], + "angle": 0, + "content": "- Extending the scope of forward tracing and backward tracing to intrusions in a broad sense, so as to generate root causes and control measures for the broadly defined intrusions." 
+ }, + { + "type": "text", + "bbox": [ + 0.16, + 0.689, + 0.838, + 0.72 + ], + "angle": 0, + "content": "- Understanding data-driven adversarial attacks such as data poisoning attacks and mimicry attacks for devising more robust DL-IDS." + }, + { + "type": "text", + "bbox": [ + 0.16, + 0.722, + 0.838, + 0.754 + ], + "angle": 0, + "content": "- Applying DL-IDS widely in more underexplored promising scenarios, and if possible, implementing unified frameworks for them." + }, + { + "type": "list", + "bbox": [ + 0.16, + 0.639, + 0.838, + 0.754 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.089, + 0.779, + 0.265, + 0.793 + ], + "angle": 0, + "content": "8 CONCLUSION" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.8, + 0.907, + 0.915 + ], + "angle": 0, + "content": "The DL techniques bring reform to IDS, whose generalizability enables them to detect intrusions that have never been encountered before. Recognizing that the IDS development over the past decade primarily comes from DL-IDS, this survey revisits the common workflow for DL-IDS, elaborates each module in the workflow, and taxonomizes the research papers innovatively based on their DL techniques. Publicly available datasets for stimulating future research are introduced subsequently. In addition, from the perspective of DL, this survey digs deep into the potential challenges, emerging trends, and future directions for DL-IDS. The discussions suggest to us that" + }, + { + "type": "footer", + "bbox": [ + 0.088, + 0.934, + 0.515, + 0.947 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ], + [ + { + "type": "header", + "bbox": [ + 0.09, + 0.084, + 0.505, + 0.097 + ], + "angle": 0, + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + }, + { + "type": "page_number", + "bbox": [ + 0.878, + 0.085, + 0.906, + 0.095 + ], + "angle": 0, + "content": "1:27" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.119, + 0.908, + 0.152 + ], + "angle": 0, + "content": "DL-IDS are, fascinatingly, in an underdeveloped state. We hope that this survey can somewhat inspire current researchers and facilitate future investigations on DL-IDS." + }, + { + "type": "title", + "bbox": [ + 0.089, + 0.166, + 0.317, + 0.18 + ], + "angle": 0, + "content": "ACKNOWLEDGMENTS" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.187, + 0.867, + 0.203 + ], + "angle": 0, + "content": "This research is sponsored in part by the NSFC program (No. 6212780016 and No. 62021002)." + }, + { + "type": "title", + "bbox": [ + 0.093, + 0.216, + 0.225, + 0.23 + ], + "angle": 0, + "content": "REFERENCES" + }, + { + "type": "ref_text", + "bbox": [ + 0.108, + 0.236, + 0.911, + 0.264 + ], + "angle": 0, + "content": "[1] Nasir Abbas, Yan Zhang, Amir Taherkordi, and Tor Skeie. 2017. Mobile Edge Computing: A Survey. IEEE Internet of Things Journal 5, 1 (2017), 450-465." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.265, + 0.908, + 0.305 + ], + "angle": 0, + "content": "[2] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 Technical Report. arXiv preprint arXiv:2303.08774 (2023)." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.306, + 0.907, + 0.333 + ], + "angle": 0, + "content": "[3] Amey Agrawal, Rohit Karlupia, and Rajat Gupta. 2019. Logan: A Distributed Online Log Parser. In Proceedings of the 2019 IEEE 35th International Conference on Data Engineering. IEEE, 1946-1951." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.334, + 0.907, + 0.375 + ], + "angle": 0, + "content": "[4] Zeeshan Ahmad, Adnan Shahid Khan, Cheah Wai Shiang, Johari Abdullah, and Farhan Ahmad. 2021. Network Intrusion Detection System: A Systematic Study of Machine Learning and Deep Learning Approaches. Transactions on Emerging Telecommunications Technologies 32, 1 (2021), e4150." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.376, + 0.907, + 0.416 + ], + "angle": 0, + "content": "[5] Farrukh Ahmed, Urooj Jahangir, Hamad Rahim, Kamran Ali, et al. 2020. Centralized Log Management Using Elasticsearch, Logstash and Kibana. In Proceedings of the 2020 International Conference on Information Science and Communication Technology. IEEE, 1-7." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.417, + 0.907, + 0.458 + ], + "angle": 0, + "content": "[6] Mohannad Alhanahnah, Shiqing Ma, Ashish Gehani, Gabriela F Ciocarlie, Vinod Yegneswaran, Somesh Jha, and Xiangyu Zhang. 2022. autoMPI: Automated Multiple Perspective Attack Investigation with Semantics Aware Execution Partitioning. IEEE Transactions on Software Engineering 49, 4 (2022), 2761-2775." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.459, + 0.907, + 0.485 + ], + "angle": 0, + "content": "[7] Tarek Ali. 2024. Next-Generation Intrusion Detection Systems with LLMs: Real-Time Anomaly Detection, Explainable AI, and Adaptive Data Generation. Master's thesis. T. Ali." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.486, + 0.907, + 0.513 + ], + "angle": 0, + "content": "[8] Miltiadis Allamanis, Earl T Barr, Premkumar Devanbu, and Charles Sutton. 2018. A Survey of Machine Learning for Big Code and Naturalness. ACM Computing Surveys 51, 4 (2018), 1-37." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.514, + 0.907, + 0.554 + ], + "angle": 0, + "content": "[9] Abdulellah Alsaheel, Yuhong Nan, Shiqing Ma, Le Yu, Gregory Walkup, Z Berkay Celik, Xiangyu Zhang, and Dongyan Xu. 2021. 
ATLAS: A Sequence-based Learning Approach for Attack Investigation. In Proceedings of the 30th USENIX Security Symposium. 3005-3022." + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.555, + 0.907, + 0.596 + ], + "angle": 0, + "content": "[10] Adel Alshamrani, Sowmya Myneni, Ankur Chowdhary, and Dijiang Huang. 2019. A Survey on Advanced Persistent Threats: Techniques, Solutions, Challenges, and Research Opportunities. IEEE Communications Surveys and Tutorials 21, 2 (2019), 1851-1877. https://doi.org/10.1109/COMST.2019.2891891" + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.597, + 0.907, + 0.638 + ], + "angle": 0, + "content": "[11] Enes Altinisik, Fatih Deniz, and Hürev Taha Sencar. 2023. ProvG-Searcher: A Graph Representation Learning Approach for Efficient Provenance Graph Search. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security. 2247-2261." + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.639, + 0.633, + 0.651 + ], + "angle": 0, + "content": "[12] Clarivate Analytics. 1997. Web of Science. https://www.webofscience.com" + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.652, + 0.907, + 0.693 + ], + "angle": 0, + "content": "[13] Md Monowar Anjum, Shahrear Iqbal, and Benoit Hamelin. 2021. Analyzing the Usefulness of the DARPA OpTC Dataset in Cyber Threat Detection Research. In Proceedings of the 26th ACM Symposium on Access Control Models and Technologies. 27-32." + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.694, + 0.907, + 0.72 + ], + "angle": 0, + "content": "[14] Alexei Baevski, Wei-Ning Hsu, Alexis Conneau, and Michael Auli. 2021. Unsupervised Speech Recognition. Advances in Neural Information Processing Systems 34 (2021), 27826-27839." + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.721, + 0.907, + 0.762 + ], + "angle": 0, + "content": "[15] Elizabeth Bautista, Nitin Sukhija, and Siqi Deng. 2022. 
Shasta Log Aggregation, Monitoring and Alerting in HPC Environments with Grafana Loki and ServiceNow. In Proceedings of the 2022 IEEE International Conference on Cluster Computing. IEEE, 602-610." + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.763, + 0.907, + 0.803 + ], + "angle": 0, + "content": "[16] Jack Beerman, David Berent, Zach Falter, and Suman Bhunia. 2023. A Review of Colonial Pipeline Ransomware Attack. In Proceedings of the 2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops. IEEE, 8-15." + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.804, + 0.907, + 0.831 + ], + "angle": 0, + "content": "[17] Tristan Bilot, Nour El Madhoun, Khaldoun Al Agha, and Anis Zouaoui. 2023. Graph Neural Networks for Intrusion Detection: A Survey. IEEE Access 11 (2023), 49114-49139." + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.832, + 0.907, + 0.873 + ], + "angle": 0, + "content": "[18] Tristan Bilot, Baoxiang Jiang, Zefeng Li, Nour El Madhoun, Khaldoun Al Agha, Anis Zouaoui, and Thomas Pasquier. 2025. Sometimes Simpler is Better: A Comprehensive Analysis of State-of-the-Art Provenance-Based Intrusion Detection Systems. In 34th USENIX Security Symposium (USENIX Security 25). 7193-7212." + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.874, + 0.907, + 0.915 + ], + "angle": 0, + "content": "[19] Peter Bodik, Moises Goldszmidt, Armando Fox, Dawn B Woodard, and Hans Andersen. 2010. Fingerprinting the Datacenter: Automated Classification of Performance Crises. In Proceedings of the 5th European Conference on Computer Systems. 111-124." + }, + { + "type": "list", + "bbox": [ + 0.1, + 0.236, + 0.911, + 0.915 + ], + "angle": 0, + "content": null + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.935, + 0.906, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.093, + 0.085, + 0.121, + 0.095 + ], + "angle": 0, + "content": "1:28" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.084, + 0.908, + 0.097 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.121, + 0.908, + 0.148 + ], + "angle": 0, + "content": "[20] Carolin E Brandt, Annibale Panichella, Andy Zaidman, and Moritz Beller. 2020. LogChunks: A Data Set for Build Log Analysis. In Proceedings of the 17th International Conference on Mining Software Repositories. 583-587." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.149, + 0.907, + 0.175 + ], + "angle": 0, + "content": "[21] Robert A Bridges, Tarrah R Glass-Vanderlan, Michael D Iannacone, Maria S Vincent, and Qian Chen. 2019. A Survey of Intrusion Detection Systems Leveraging Host Data. ACM Computing Surveys 52, 6 (2019), 1-35." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.177, + 0.908, + 0.217 + ], + "angle": 0, + "content": "[22] Dainius Čeponis and Nikolaj Goranin. 2018. Towards A Robust Method of Dataset Generation of Malicious Activity for Anomaly-Based HIDS Training and Presentation of AWSCTD Dataset. Baltic Journal of Modern Computing 6, 3 (2018), 217-234." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.218, + 0.907, + 0.246 + ], + "angle": 0, + "content": "[23] Xiaolin Chai, Hang Zhang, Jue Zhang, Yan Sun, and Sajal K Das. 2024. Log Sequence Anomaly Detection based on Template and Parameter Parsing via BERT. IEEE Transactions on Dependable and Secure Computing (2024)." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.246, + 0.908, + 0.273 + ], + "angle": 0, + "content": "[24] Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. 2018. Adversarial Attacks and Defences: A Survey. arXiv preprint arXiv:1810.00069 (2018)." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.273, + 0.908, + 0.314 + ], + "angle": 0, + "content": "[25] Changhua Chen, Tingzhen Yan, Chenxuan Shi, Hao Xi, Zhirui Fan, Hai Wan, and Xibin Zhao. 2024. The Last Mile of Attack Investigation: Audit Log Analysis towards Software Vulnerability Location. IEEE Transactions on Information Forensics and Security (2024)." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.315, + 0.908, + 0.342 + ], + "angle": 0, + "content": "[26] Haoyu Chen, Deqing Zou, Hai Jin, Shouhuai Xu, and Bin Yuan. 2022. SAND: Semi-Automated Adaptive Network Defense via Programmable Rule Generation and Deployment. Science China Information Sciences 65, 7 (2022), 172102." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.343, + 0.907, + 0.37 + ], + "angle": 0, + "content": "[27] Tao Chen, Haiyan Suo, and Wenqian Xu. 2023. Design of Log Collection Architecture Based on Cloud Native Technology. In Proceedings of the 2023 4th Information Communication Technologies Conference. IEEE, 311-315." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.371, + 0.908, + 0.411 + ], + "angle": 0, + "content": "[28] Wenrui Cheng, Qixuan Yuan, Tiantian Zhu, Tieming Chen, Jie Ying, Aohan Zheng, Mingjun Ma, Chunlin Xiong, Mingqi Lv, and Yan Chen. 2025. TAGAPT: Towards Automatic Generation of APT Samples with Provenance-level Granularity. IEEE Transactions on Information Forensics and Security (2025)." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.412, + 0.908, + 0.452 + ], + "angle": 0, + "content": "[29] Zijun Cheng, Qiujian Lv, Jinyuan Liang, Yan Wang, Degang Sun, Thomas Pasquier, and Xueyuan Han. 2024. Kairos: Practical Intrusion Detection and Investigation Using Whole-System Provenance. In Proceedings of the 2024 IEEE Symposium on Security and Privacy. IEEE, 3533–3551." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.453, + 0.908, + 0.493 + ], + "angle": 0, + "content": "[30] Guojun Chu, Jingyu Wang, Qi Qi, Haifeng Sun, Shimin Tao, and Jianxin Liao. 2021. Prefix-Graph: A Versatile Log Parsing Approach Merging Prefix Tree with Probabilistic Graph. In Proceedings of the 2021 IEEE 37th International Conference on Data Engineering. IEEE, 2411-2422." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.494, + 0.884, + 0.508 + ], + "angle": 0, + "content": "[31] The MITRE Corporation. 2025. CVE List. https://github.com/CVEProject/cvelistV5/archive/refs/heads/main.zip" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.509, + 0.907, + 0.536 + ], + "angle": 0, + "content": "[32] Oihana Coustie, Josiane Mothe, Olivier Teste, and Xavier Baril. 2020. METING: A Robust Log Parser Based on Frequent n-Gram Mining. In Proceedings of the 2020 IEEE International Conference on Web Services. IEEE, 84-88." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.537, + 0.908, + 0.577 + ], + "angle": 0, + "content": "[33] Jian Cui, Hanna Kim, Eugene Jang, Dayeon Yim, Kicheol Kim, Yongjae Lee, Jin-Woo Chung, Seungwon Shin, and Xiaojing Liao. 2024. Tweezers: A Framework for Security Event Detection via Event Attribution-centric Tweet Embedding. In Proceedings of the Network and Distributed System Security Symposium." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.578, + 0.908, + 0.618 + ], + "angle": 0, + "content": "[34] Chris Cummins, Volker Seeker, Dejan Grubisic, Baptiste Roziere, Jonas Gehring, Gabriel Synnaeve, and Hugh Leather. 2025. LLM Compiler: Foundation Language Models for Compiler Optimization. In Proceedings of the 34th ACM SIGPLAN International Conference on Compiler Construction. 141-153." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.619, + 0.908, + 0.646 + ], + "angle": 0, + "content": "[35] Hetong Dai, Heng Li, Che-Shao Chen, Weiyi Shang, and Tse-Hsun Chen. 2020. 
Logram: Efficient Log Parsing Using n-Gram Dictionaries. IEEE Transactions on Software Engineering 48, 3 (2020), 879-892." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.647, + 0.908, + 0.688 + ], + "angle": 0, + "content": "[36] Hetong Dai, Yiming Tang, Heng Li, and Weiyi Shang. 2023. PILAR: Studying and Mitigating the Influence of Configurations on Log Parsing. In Proceedings of the 2023 IEEE/ACM 45th International Conference on Software Engineering. IEEE, 818-829." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.689, + 0.825, + 0.702 + ], + "angle": 0, + "content": "[37] DARPA. 2019. Operationally Transparent Cyber Dataset. https://github.com/FiveDirections/OpTC-data" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.702, + 0.908, + 0.73 + ], + "angle": 0, + "content": "[38] DARPA. 2022. The DARPA Transparent Computing (TC) Program Data Release. https://github.com/darpa-i2o/Transparent-Computing" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.73, + 0.908, + 0.771 + ], + "angle": 0, + "content": "[39] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 4171–4186." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.772, + 0.908, + 0.799 + ], + "angle": 0, + "content": "[40] Hailun Ding, Juan Zhai, Dong Deng, and Shiqing Ma. 2023. The Case for Learned Provenance Graph Storage Systems. In Proceedings of the 32nd USENIX Security Symposium. 3277-3294." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.799, + 0.908, + 0.827 + ], + "angle": 0, + "content": "[41] Hailun Ding, Juan Zhai, Yuhong Nan, and Shiqing Ma. 2023. AirTag: Towards Automated Attack Investigation by Unsupervised Learning with Log Texts. In Proceedings of the 32nd USENIX Security Symposium. 373-390." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.828, + 0.908, + 0.867 + ], + "angle": 0, + "content": "[42] Feng Dong, Liu Wang, Xu Nie, Fei Shao, Haoyu Wang, Ding Li, Xiapu Luo, and Xusheng Xiao. 2023. DistDet: A Cost-Effective Distributed Cyber Threat Detection System. In Proceedings of the 32nd USENIX Security Symposium. 6575–6592." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.868, + 0.907, + 0.896 + ], + "angle": 0, + "content": "[43] Ying Dong, Yuqing Zhang, Hua Ma, Qianru Wu, Qixu Liu, Kai Wang, and Wenjie Wang. 2018. An Adaptive System for Detecting Malicious Queries in Web Attacks. Science China Information Sciences 61, 3 (2018), 032114." + }, + { + "type": "list", + "bbox": [ + 0.099, + 0.121, + 0.908, + 0.896 + ], + "angle": 0, + "content": null + }, + { + "type": "footer", + "bbox": [ + 0.089, + 0.934, + 0.514, + 0.947 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.091, + 0.084, + 0.504, + 0.097 + ], + "angle": 0, + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + }, + { + "type": "page_number", + "bbox": [ + 0.878, + 0.085, + 0.906, + 0.095 + ], + "angle": 0, + "content": "1:29" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.12, + 0.908, + 0.148 + ], + "angle": 0, + "content": "[44] Min Du and Feifei Li. 2016. Spell: Streaming Parsing of System Event Logs. In Proceedings of the 2016 IEEE 16th International Conference on Data Mining. IEEE, 859-864." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.149, + 0.908, + 0.189 + ], + "angle": 0, + "content": "[45] Min Du, Feifei Li, Guineng Zheng, and Vivek Srikumar. 2017. DeepLog: Anomaly Detection and Diagnosis from System Logs through Deep Learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 1285-1298." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.19, + 0.908, + 0.232 + ], + "angle": 0, + "content": "[46] Asbat El Khairi, Marco Caselli, Andreas Peter, and Andrea Continella. 2024. REPLICAWATCHER: Training-less Anomaly Detection in Containerized Microservices. In Proceedings of the Network and Distributed System Security Symposium." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.233, + 0.74, + 0.245 + ], + "angle": 0, + "content": "[47] Elastic. 2009. Logstash: Collect, parse, and transform logs. https://www.elastic.co/logstash/" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.246, + 0.897, + 0.259 + ], + "angle": 0, + "content": "[48] Elastic. 2010. Elasticsearch: The official distributed search & analytics engine. https://www.elastic.co/elasticsearch/" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.26, + 0.739, + 0.273 + ], + "angle": 0, + "content": "[49] Elastic. 2013. Kibana: Explore, visualize, and discover data. https://www.elastic.co/kibana/" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.274, + 0.719, + 0.287 + ], + "angle": 0, + "content": "[50] Elsevier. 2021. Scopus. https://www.scopus.com/search/form.uri?display=basic#basic" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.288, + 0.908, + 0.314 + ], + "angle": 0, + "content": "[51] Dave Evans. 2012. The Internet of Everything: How More Relevant and Valuable Connections will Change the World. Cisco IBSG 2012 (2012), 1-9." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.315, + 0.908, + 0.355 + ], + "angle": 0, + "content": "[52] Pengcheng Fang, Peng Gao, Changlin Liu, Erman Ayday, Kangkook Jee, Ting Wang, Yanfang Fanny Ye, Zhuotao Liu, and Xusheng Xiao. 2022. Back-Propagating System Dependency Impact for Attack Investigation. In Proceedings of the 31st USENIX Security Symposium. 2461–2478." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.357, + 0.907, + 0.384 + ], + "angle": 0, + "content": "[53] Peng Fei, Zhou Li, Zhiying Wang, Xiao Yu, Ding Li, and Kangkook Jee. 2021. SEAL: Storage-Efficient Causality Analysis on Enterprise Logs with Query-Friendly Compression. In Proceedings of the 30th USENIX Security Symposium." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.385, + 0.908, + 0.425 + ], + "angle": 0, + "content": "[54] Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. 2020. CodeBERT: A Pre-Trained Model for Programming and Natural Languages. In Findings of the Association for Computational Linguistics: EMNLP 2020. 1536-1547." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.426, + 0.834, + 0.44 + ], + "angle": 0, + "content": "[55] Free Software Foundation. 1992. gzip: GNU zip compression utility. https://www.gnu.org/software/gzip/" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.44, + 0.907, + 0.466 + ], + "angle": 0, + "content": "[56] Chuanpu Fu, Qi Li, Meng Shen, and Ke Xu. 2021. Realtime Robust Malicious Traffic Detection via Frequency Domain Analysis. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security. 3431-3446." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.468, + 0.908, + 0.508 + ], + "angle": 0, + "content": "[57] Chuanpu Fu, Qi Li, Meng Shen, and Ke Xu. 2024. Detecting Tunnelled Flooding Traffic via Deep Semantic Analysis of Packet Length Patterns. In Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security. 3659-3673." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.509, + 0.908, + 0.55 + ], + "angle": 0, + "content": "[58] Chuanpu Fu, Qi Li, Ke Xu, and Jianping Wu. 2023. Point Cloud Analysis for ML-based Malicious Traffic Detection: Reducing Majorities of False Positive Alarms. 
In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security. 1005-1019." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.551, + 0.908, + 0.592 + ], + "angle": 0, + "content": "[59] Oscar G. Lira, Alberto Marroquin, and Marco Antonio To. 2024. Harnessing the Advanced Capabilities of LLM for Adaptive Intrusion Detection Systems. In Proceedings of the International Conference on Advanced Information Networking and Applications. Springer, 453-464." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.592, + 0.908, + 0.632 + ], + "angle": 0, + "content": "[60] Peng Gao, Xusheng Xiao, Zhichun Li, Fengyuan Xu, Sanjeev R Kulkarni, and Prateek Mittal. 2018. AIQL: Enabling Efficient Attack Investigation from System Monitoring Data. In Proceedings of the 2018 USENIX Annual Technical Conference. 113-126." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.633, + 0.908, + 0.674 + ], + "angle": 0, + "content": "[61] Ashish Gehani and Dawood Tariq. 2012. SPADE: Support for Provenance Auditing in Distributed Environments. In Proceedings of the ACM/IFIP/USENIX International Conference on Distributed Systems Platforms and Open Distributed Processing. Springer, 101-120." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.675, + 0.908, + 0.716 + ], + "angle": 0, + "content": "[62] Jalal Ghadermazi, Soumyadeep Hore, Ankit Shah, and Nathaniel D Bastian. 2025. GTAE-IDS: Graph Transformer-Based Autoencoder Framework for Real-Time Network Intrusion Detection. IEEE Transactions on Information Forensics and Security (2025)." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.717, + 0.908, + 0.744 + ], + "angle": 0, + "content": "[63] Joshua Glasser and Brian Lindauer. 2013. Bridging the gap: A Pragmatic Approach to Generating Insider Threat Data. In Proceedings of the IEEE Symposium on Security and Privacy Workshops. IEEE, 98-104." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.745, + 0.908, + 0.785 + ], + "angle": 0, + "content": "[64] Akul Goyal, Xueyuan Han, Gang Wang, and Adam Bates. 2023. Sometimes, You Aren't What You Do: Mimicry Attacks Against Provenance Graph Host Intrusion Detection Systems. In Proceedings of the Network and Distributed System Security Symposium." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.786, + 0.907, + 0.813 + ], + "angle": 0, + "content": "[65] Akul Goyal, Gang Wang, and Adam Bates. 2024. R-CAID: Embedding Root Cause Analysis within Provenance-Based Intrusion Detection. In Proceedings of the 2024 IEEE Symposium on Security and Privacy. IEEE, 3515-3532." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.814, + 0.907, + 0.839 + ], + "angle": 0, + "content": "[66] Brendan Gregg and Jim Mauro. 2011. DTrace: Dynamic Tracing in Oracle Solaris, Mac OS X, and FreeBSD. Prentice Hall Professional." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.84, + 0.908, + 0.895 + ], + "angle": 0, + "content": "[67] John Griffith, Derrick Kong, Armando Caro, Brett Benyo, Joud Khoury, Timothy Upthegrove, Timothy Christovich, Stanislav Ponomorov, Ali Sydney, Arjun Saini, et al. 2020. Scalable Transparency Architecture for Research Collaboration (STARC)-DARPA Transparent Computing (TC) Program. Raytheon BBN Technologies Corporation Cambridge United States (2020)." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.896, + 0.624, + 0.911 + ], + "angle": 0, + "content": "[68] Steve Grubb. 2008. Linux audit. https://people.redhat.com/sgrubb/audit/" + }, + { + "type": "list", + "bbox": [ + 0.099, + 0.12, + 0.908, + 0.911 + ], + "angle": 0, + "content": null + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.934, + 0.906, + 0.947 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.093, + 0.085, + 0.121, + 0.095 + ], + "angle": 0, + "content": "1:30" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.084, + 0.908, + 0.097 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.12, + 0.91, + 0.16 + ], + "angle": 0, + "content": "[69] Qiuhan Gu. 2023. LLM-Based Code Generation Method for Golang Compiler Testing. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 2201-2203." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.161, + 0.908, + 0.204 + ], + "angle": 0, + "content": "[70] Xiaodong Gu, Meng Chen, Yalan Lin, Yuhan Hu, Hongyu Zhang, Chengcheng Wan, Zhao Wei, Yong Xu, and Juhong Wang. 2025. On the Effectiveness of Large Language Models in Domain-Specific Code Generation. ACM Transactions on Software Engineering and Methodology 34, 3 (2025), 1-22." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.205, + 0.908, + 0.245 + ], + "angle": 0, + "content": "[71] Yiwei Guo, Chenpeng Du, Ziyang Ma, Xie Chen, and Kai Yu. 2024. Voiceflow: Efficient Text-to-Speech with Rectified Flow Matching. In Proceedings of the ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 11121-11125." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.246, + 0.906, + 0.273 + ], + "angle": 0, + "content": "[72] Yi Guo, Fu Miao, Liancheng Zhang, and Yu Wang. 2019. CATH: An Effective Method for Detecting Denial-of-Service Attacks in Software Defined Networks. Science China Information Sciences 62, 3 (2019), 32106." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.273, + 0.906, + 0.3 + ], + "angle": 0, + "content": "[73] Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive Representation Learning on Large Graphs. 
Advances in Neural Information Processing Systems 30 (2017)." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.301, + 0.908, + 0.342 + ], + "angle": 0, + "content": "[74] Hossein Hamooni, Biplob Debnath, Jianwu Xu, Hui Zhang, Guofei Jiang, and Abdullah Mueen. 2016. LogMine: Fast Pattern Recognition for Log Analytics. In Proceedings of the ACM International on Conference on Information and Knowledge Management. 1573-1582." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.343, + 0.906, + 0.384 + ], + "angle": 0, + "content": "[75] Dongqi Han, Zhiliang Wang, Wenqi Chen, Kai Wang, Rui Yu, Su Wang, Han Zhang, Zhihua Wang, Minghui Jin, Jiahai Yang, et al. 2023. Anomaly Detection in the Open World: Normality Shift Detection, Explanation, and Adaptation. In Proceedings of the Network and Distributed Systems Security Symposium." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.384, + 0.908, + 0.425 + ], + "angle": 0, + "content": "[76] Xueyuan Han, Thomas Pasquier, Adam Bates, James Mickens, and Margo Seltzer. 2020. Unicorn: Runtime Provenance-Based Detector for Advanced Persistent Threats. In Proceedings of the Network and Distributed Systems Security Symposium." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.426, + 0.908, + 0.467 + ], + "angle": 0, + "content": "[77] Wajih Ul Hassan, Lemay Aguse, Nuraini Aguse, Adam Bates, and Thomas Moyer. 2018. Towards Scalable Cluster Auditing through Grammatical Inference over Provenance Graphs. In Proceedings of the Network and Distributed Systems Security Symposium." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.468, + 0.906, + 0.494 + ], + "angle": 0, + "content": "[78] Wajih Ul Hassan, Adam Bates, and Daniel Marino. 2020. Tactical Provenance Analysis for Endpoint Detection and Response Systems. In Proceedings of the 2020 IEEE Symposium on Security and Privacy. IEEE, 1172-1189." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.495, + 0.908, + 0.536 + ], + "angle": 0, + "content": "[79] Wajih Ul Hassan, Shengjian Guo, Ding Li, Zhengzhang Chen, Kangkook Jee, Zhichun Li, and Adam Bates. 2019. NoDoze: Combatting Threat Alert Fatigue with Automated Provenance Triage. In Proceedings of the Network and Distributed System Security Symposium." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.536, + 0.908, + 0.577 + ], + "angle": 0, + "content": "[80] Wajih Ul Hassan, Mohammad Ali Noureddine, Pubali Datta, and Adam Bates. 2020. OmegaLog: High-Fidelity Attack Investigation via Transparent Multi-Layer Log Analysis. In Proceedings of the Network and Distributed System Security Symposium." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.578, + 0.908, + 0.617 + ], + "angle": 0, + "content": "[81] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. 2022. Masked Autoencoders are Scalable Vision Learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 16000-16009." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.618, + 0.908, + 0.646 + ], + "angle": 0, + "content": "[82] Pinjia He, Jieming Zhu, Zibin Zheng, and Michael R Lyu. 2017. Drain: An Online Log Parsing Approach with Fixed Depth Tree. In Proceedings of the 2017 IEEE International Conference on Web Services. IEEE, 33-40." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.647, + 0.908, + 0.688 + ], + "angle": 0, + "content": "[83] Shilin He, Pinjia He, Zhuangbin Chen, Tianyi Yang, Yuxin Su, and Michael R. Lyu. 2020. A Survey on Automated Log Analysis for Reliability Engineering. ACM Computing Surveys 54 (2020), 1-37. 
https://api.semanticscholar.org/CorpusID:221703032" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.689, + 0.908, + 0.729 + ], + "angle": 0, + "content": "[84] Xinlei He, Guowen Xu, Xingshuo Han, Qian Wang, Lingchen Zhao, Chao Shen, Chenhao Lin, Zhengyu Zhao, Qian Li, Le Yang, et al. 2025. Artificial intelligence security and privacy: a survey. Science China Information Sciences 68, 8 (2025), 1-90." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.729, + 0.908, + 0.757 + ], + "angle": 0, + "content": "[85] Abram Hindle, Earl T Barr, Mark Gabel, Zhendong Su, and Premkumar Devanbu. 2016. On the Naturalness of Software. Commun. ACM 59, 5 (2016), 122-131." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.758, + 0.906, + 0.771 + ], + "angle": 0, + "content": "[86] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation 9, 8 (1997), 1735-1780." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.772, + 0.908, + 0.812 + ], + "angle": 0, + "content": "[87] Josef Horalek, Patrik Urbanik, Vladimir Sobeslav, and Tomas Svoboda. 2022. Proposed Solution for Log Collection and Analysis in Kubernetes Environment. In Proceedings of the International Conference on Nature of Computation and Communication. Springer, 9-22." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.813, + 0.909, + 0.854 + ], + "angle": 0, + "content": "[88] Md Nahid Hossain, Sadegh M Milajerdi, Junao Wang, Birhanu Eshete, Rigel Gjomemo, R Sekar, Scott Stoller, and VN Venkatakrishnan. 2017. Sleuth: Real-time Attack Scenario Reconstruction from COTS Audit Data. In Proceedings of the USENIX Security Symposium. 487-504." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.855, + 0.908, + 0.896 + ], + "angle": 0, + "content": "[89] Md Nahid Hossain, Junao Wang, Ofir Weisse, R Sekar, Daniel Genkin, Boyuan He, Scott D Stoller, Gan Fang, Frank Piessens, Evan Downing, et al. 2018. Dependence-Preserving Data Compaction for Scalable Forensic Analysis. 
In Proceedings of the 27th USENIX Security Symposium. 1723-1740." + }, + { + "type": "list", + "bbox": [ + 0.099, + 0.12, + 0.91, + 0.896 + ], + "angle": 0, + "content": null + }, + { + "type": "footer", + "bbox": [ + 0.089, + 0.934, + 0.514, + 0.947 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "header", + "bbox": [ + 0.091, + 0.084, + 0.504, + 0.097 + ], + "angle": 0, + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + }, + { + "type": "page_number", + "bbox": [ + 0.878, + 0.085, + 0.906, + 0.096 + ], + "angle": 0, + "content": "1:31" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.12, + 0.909, + 0.161 + ], + "angle": 0, + "content": "[90] Zhenyu Hou, Xiao Liu, Yukuo Cen, Yuxiao Dong, Hongxia Yang, Chunjie Wang, and Jie Tang. 2022. GraphMAE: Self-Supervised Masked Graph Autoencoders. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 594-604." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.162, + 0.91, + 0.215 + ], + "angle": 0, + "content": "[91] Kevin Hsieh, Mike Wong, Santiago Segarra, Sathiya Kumaran Mani, Trevor Eberl, Anatoliy Panasyuk, Ravi Netravali, Ranveer Chandra, and Srikanth Kandula. 2024. NetVigil: Robust and Low-Cost Anomaly Detection for East-West Data Center Security. In Proceedings of the 21st USENIX Symposium on Networked Systems Design and Implementation. 1771-1789." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.218, + 0.91, + 0.245 + ], + "angle": 0, + "content": "[92] Peiwei Hu, Ruigang Liang, and Kai Chen. 2024. DeGPT: Optimizing Decompile Output with LLM. In Proceedings of the Network and Distributed System Security Symposium." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.246, + 0.908, + 0.285 + ], + "angle": 0, + "content": "[93] Bo Hui, Haolin Yuan, Neil Gong, Philippe Burlina, and Yinzhi Cao. 2024. 
PLeak: Prompt Leaking Attacks Against Large Language Model Applications. In Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security. 3600-3614." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.287, + 0.909, + 0.327 + ], + "angle": 0, + "content": "[94] Yintong Huo, Yichen Li, Yuxin Su, Pinjia He, Zifan Xie, and Michael R Lyu. 2023. AutoLog: A Log Sequence Synthesis Framework for Anomaly Detection. In Proceedings of the 2023 38th IEEE/ACM International Conference on Automated Software Engineering. IEEE, 497-509." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.329, + 0.586, + 0.342 + ], + "angle": 0, + "content": "[95] IEEE. 2000. IEEE Xplore Digital Library. https://ieeexplore.ieee.org" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.343, + 0.91, + 0.395 + ], + "angle": 0, + "content": "[96] Muhammad Adil Inam, Yinfang Chen, Akul Goyal, Jason Liu, Jaron Mink, Noor Michael, Sneha Gaur, Adam Bates, and Wajih Ul Hassan. 2023. SoK: History is a Vast Early Warning System: Auditing the Provenance of System Intrusions. In Proceedings of the 2023 IEEE Symposium on Security and Privacy. 2620-2638. https://doi.org/10.1109/SP46215.2023.10179405" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.397, + 0.909, + 0.438 + ], + "angle": 0, + "content": "[97] Muhammad Adil Inam, Akul Goyal, Jason Liu, Jaron Mink, Noor Michael, Sneha Gaur, Adam Bates, and Wajih Ul Hassan. 2022. FAuST: Striking A Bargain between Forensic Auditing's Security and Throughput. In Proceedings of the 38th Annual Computer Security Applications Conference. 813-826." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.439, + 0.909, + 0.481 + ], + "angle": 0, + "content": "[98] Yang Ji, Sangho Lee, Evan Downing, Weiren Wang, Mattia Fazzini, Taesoo Kim, Alessandro Orso, and Wenke Lee. 2017. Rain: Refinable Attack Investigation with On-demand Inter-Process Information Flow Tracking. 
In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 377–390." + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.481, + 0.909, + 0.508 + ], + "angle": 0, + "content": "[99] Zian Jia, Yun Xiong, Yuhong Nan, Yao Zhang, Jinjing Zhao, and Mi Wen. 2024. MAGIC: Detecting Advanced Persistent Threats via Masked Graph Representation Learning. In Proceedings of the 33rd USENIX Security Symposium. 5197-5214." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.509, + 0.907, + 0.55 + ], + "angle": 0, + "content": "[100] Baoxiang Jiang, T Bilot, Nour El Madhoun, Khaldoun Al Agha, Anis Zouaoui, Shahrear Iqbal, Xueyuan Han, and Thomas Pasquier. 2025. Orthrus: Achieving High Quality of Attribution in Provenance-based Intrusion Detection Systems. In Proceedings of the USENIX Security Symposium." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.55, + 0.907, + 0.577 + ], + "angle": 0, + "content": "[101] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the Limits of Language Modeling. arXiv preprint arXiv:1602.02410 (2016)." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.578, + 0.907, + 0.618 + ], + "angle": 0, + "content": "[102] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling Laws for Neural Language Models. arXiv preprint arXiv:2001.08361 (2020)." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.619, + 0.907, + 0.646 + ], + "angle": 0, + "content": "[103] Alexander D. Kent. 2015. Comprehensive, Multi-Source Cyber-Security Events. Los Alamos National Laboratory. https://doi.org/10.17021/1179829" + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.647, + 0.907, + 0.674 + ], + "angle": 0, + "content": "[104] LG Kersta, PD Bricker, and EE David Jr. 1960. Human or Machine?—A Study of Voice Naturalness. 
The Journal of the Acoustical Society of America 32, 11_Supplement (1960), 1502-1502." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.675, + 0.907, + 0.702 + ], + "angle": 0, + "content": "[105] Ansam Khraisat, Iqbal Gondal, Peter Vamplew, and Joarder Kamruzzaman. 2019. Survey of Intrusion Detection Systems: Techniques, Datasets and Challenges. Cybersecurity 2, 1 (2019), 1-22." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.702, + 0.773, + 0.715 + ], + "angle": 0, + "content": "[106] Aaron Kili. [n.d.]. Sysdig-A Powerful System Monitoring and Troubleshooting Tool for Linux." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.716, + 0.907, + 0.757 + ], + "angle": 0, + "content": "[107] Sunnie SY Kim, Q Vera Liao, Mihaela Vorvoreanu, Stephanie Ballard, and Jennifer Wortman Vaughan. 2024. \"I'm Not Sure, But...\": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. 822-835." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.758, + 0.907, + 0.785 + ], + "angle": 0, + "content": "[108] Isaiah J King and H Howie Huang. 2023. Euler: Detecting Network Lateral Movement via Scalable Temporal Link Prediction. ACM Transactions on Privacy and Security 26, 3 (2023), 1-36." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.786, + 0.895, + 0.799 + ], + "angle": 0, + "content": "[109] Thomas N Kipf and Max Welling. 2016. Variational Graph Auto-Encoders. arXiv preprint arXiv:1611.07308 (2016)." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.8, + 0.907, + 0.826 + ], + "angle": 0, + "content": "[110] Eric D Knapp. 2024. Industrial Network Security: Securing Critical Infrastructure Networks for Smart Grid, SCADA, and other Industrial Control Systems. Elsevier." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.827, + 0.907, + 0.868 + ], + "angle": 0, + "content": "[111] Yonghwi Kwon, Fei Wang, Weihang Wang, Kyu Hyung Lee, Wen-Chuan Lee, Shiqing Ma, Xiangyu Zhang, Dongyan Xu, Somesh Jha, Gabriela Ciocarlie, et al. 2018. MCI: Modeling-based Causality Inference in Audit Logging for Attack Investigation. In Proceedings of the Network and Distributed Systems Security Symposium." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.869, + 0.702, + 0.882 + ], + "angle": 0, + "content": "[112] Grafana Labs. 2014. Grafana: The Open Observability Platform. https://grafana.com/" + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.883, + 0.907, + 0.91 + ], + "angle": 0, + "content": "[113] Van-Hoang Le and Hongyu Zhang. 2021. Log-Based Anomaly Detection without Log Parsing. In Proceedings of the 2021 36th IEEE/ACM International Conference on Automated Software Engineering. IEEE, 492-504." + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.12, + 0.91, + 0.91 + ], + "angle": 0, + "content": null + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.934, + 0.906, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.092, + 0.085, + 0.121, + 0.095 + ], + "angle": 0, + "content": "1:32" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.084, + 0.908, + 0.097 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.12, + 0.908, + 0.148 + ], + "angle": 0, + "content": "[114] Van-Hoang Le and Hongyu Zhang. 2023. Log Parsing with Prompt-Based Few-Shot Learning. In Proceedings of the 2023 IEEE/ACM 45th International Conference on Software Engineering. IEEE, 2438-2449." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.149, + 0.907, + 0.175 + ], + "angle": 0, + "content": "[115] Kyu Hyung Lee, Xiangyu Zhang, and Dongyan Xu. 2013. High Accuracy Attack Provenance via Binary-based Execution Partition. In Proceedings of the Network and Distributed System Security Symposium, Vol. 16." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.177, + 0.907, + 0.203 + ], + "angle": 0, + "content": "[116] Kyu Hyung Lee, Xiangyu Zhang, and Dongyan Xu. 2013. LogGC: Garbage Collecting Audit Log. In Proceedings of the 2013 ACM SIGSAC Conference on Computer and Communications Security. 1005-1016." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.204, + 0.908, + 0.245 + ], + "angle": 0, + "content": "[117] Huanruo Li, Yunfei Guo, Shumin Huo, Hongchao Hu, and Penghao Sun. 2022. Defensive Deception Framework Against Reconnaissance Attacks in the Cloud with Deep Reinforcement Learning. Science China Information Sciences 65, 7 (2022), 170305." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.246, + 0.907, + 0.273 + ], + "angle": 0, + "content": "[118] Jiawei Li, Ru Zhang, and Jianyi Liu. 2023. ConLBS: An Attack Investigation Approach Using Contrastive Learning with Behavior Sequence. Sensors 23, 24 (2023), 9881." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.274, + 0.907, + 0.3 + ], + "angle": 0, + "content": "[119] Jiawei Li, Ru Zhang, and Jianyi Liu. 2023. ProvGRP: A Context-Aware Provenance Graph Reduction and Partition Approach for Facilitating Attack Investigation. Electronics 13, 1 (2023), 100." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.301, + 0.908, + 0.342 + ], + "angle": 0, + "content": "[120] Shaofei Li, Feng Dong, Xusheng Xiao, Haoyu Wang, Fei Shao, Jiedong Chen, Yao Guo, Xiangqun Chen, and Ding Li. 2024. NodLink: An Online System for Fine-Grained APT Attack Detection and Investigation. In Proceedings of the Network and Distributed System Security Symposium." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.343, + 0.907, + 0.37 + ], + "angle": 0, + "content": "[121] Teng Li, Jianfeng Ma, and Cong Sun. 2017. NetPro: Detecting Attacks in MANET Routing with Provenance and Verification. Science China Information Sciences 60, 11 (2017), 118101." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.371, + 0.907, + 0.398 + ], + "angle": 0, + "content": "[122] Xiaoxiang Li, Xinyu Jiang, Hai Wan, and Xibin Zhao. 2025. TeRed: Normal Behavior-Based Efficient Provenance Graph Reduction for Large-Scale Attack Forensics. IEEE Transactions on Information Forensics and Security (2025)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.399, + 0.908, + 0.439 + ], + "angle": 0, + "content": "[123] Xiaoyun Li, Hongyu Zhang, Van-Hoang Le, and Pengfei Chen. 2024. LogShrink: Effective Log Compression by Leveraging Commonality and Variability of Log Data. In Proceedings of the 46th IEEE/ACM International Conference on Software Engineering. 1-12." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.44, + 0.908, + 0.48 + ], + "angle": 0, + "content": "[124] Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. 2022. Competition-Level Code Generation with AlphaCode. Science 378, 6624 (2022), 1092-1097." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.481, + 0.907, + 0.508 + ], + "angle": 0, + "content": "[125] Yanjie Li, Zhen Xiang, Nathaniel D Bastian, Dawn Song, and Bo Li. 2024. IDS-Agent: An LLM Agent for Explainable Intrusion Detection in IoT Networks. In Proceedings of the NeurIPS 2024 Workshop on Open-World Agents." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.509, + 0.908, + 0.55 + ], + "angle": 0, + "content": "[126] Yuanlin Li, Zhiwei Xu, Min Zhou, Hai Wan, and Xibin Zhao. 2024. Trident: Detecting SQL Injection Attacks via Abstract Syntax Tree-based Neural Network. 
In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering. 2225-2229." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.551, + 0.908, + 0.591 + ], + "angle": 0, + "content": "[127] Zhenyuan Li, Qi Alfred Chen, Runqing Yang, Yan Chen, and Wei Ruan. 2021. Threat Detection and Investigation with System-Level Provenance Graphs: A Survey. Computers & Security 106, C (Jul 2021), 16 pages. https://doi.org/10.1016/j.cose.2021.102282" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.592, + 0.908, + 0.619 + ], + "angle": 0, + "content": "[128] Hung-Jen Liao, Chun-Hung Richard Lin, Ying-Chih Lin, and Kuang-Yuan Tung. 2013. Intrusion Detection System: A Comprehensive Review. Journal of Network and Computer Applications 36, 1 (2013), 16-24." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.62, + 0.908, + 0.647 + ], + "angle": 0, + "content": "[129] Soo Yee Lim, Bogdan Stelea, Xueyuan Han, and Thomas Pasquier. 2021. Secure Namespaced Kernel Audit for Containers. In Proceedings of the ACM Symposium on Cloud Computing. 518-532." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.648, + 0.908, + 0.688 + ], + "angle": 0, + "content": "[130] Qingwei Lin, Hongyu Zhang, Jian-Guang Lou, Yu Zhang, and Xuewei Chen. 2016. Log Clustering Based Problem Identification for Online Service Systems. In Proceedings of the International Conference on Software Engineering Companion. 102-111." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.689, + 0.801, + 0.702 + ], + "angle": 0, + "content": "[131] Brian Lindauer. 2020. Insider Threat Test Dataset. (9 2020). https://doi.org/10.1184/R1/12841247.v1" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.703, + 0.908, + 0.743 + ], + "angle": 0, + "content": "[132] Guangrui Liu, Weizhe Zhang, Xinjie Li, Kaisheng Fan, and Shui Yu. 2022. VulnERGAN: A Backdoor Attack through Vulnerability Amplification against Machine Learning-Based Network Intrusion Detection Systems. 
Science China Information Sciences 65, 7 (2022), 170303." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.744, + 0.908, + 0.785 + ], + "angle": 0, + "content": "[133] Jason Liu, Muhammad Adil Inam, Akul Goyal, Andy Riddle, Kim Westfall, and Adam Bates. 2025. What We Talk About When We Talk About Logs: Understanding the Effects of Dataset Quality on Endpoint Threat Detection Research. In Proceedings of the 2025 IEEE Symposium on Security and Privacy. IEEE, 112-129." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.786, + 0.908, + 0.826 + ], + "angle": 0, + "content": "[134] Jian Liu, Junjie Yan, Zhengwei Jiang, Xuren Wang, and Jun Jiang. 2022. A Graph Learning Approach with Audit Records for Advanced Attack Investigation. In Proceedings of the IEEE Global Communications Conference. IEEE, 897-902." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.827, + 0.908, + 0.868 + ], + "angle": 0, + "content": "[135] Jinyang Liu, Jieming Zhu, Shilin He, Pinjia He, Zibin Zheng, and Michael R Lyu. 2019. Logzip: Extracting Hidden Structures via Iterative Clustering for Log Compression. In Proceedings of the 2019 34th IEEE/ACM International Conference on Automated Software Engineering. IEEE, 863-873." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.869, + 0.907, + 0.896 + ], + "angle": 0, + "content": "[136] Shuai Liu, Yiheng Pan, Kun Hong, Ruite Fei, Chenhao Lin, Qian Li, and Chao Shen. 2025. Backdoor Threats in Large Language Models—A Survey. Science China Information Sciences 68, 9 (2025), 1-34." + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.12, + 0.908, + 0.896 + ], + "angle": 0, + "content": null + }, + { + "type": "footer", + "bbox": [ + 0.088, + 0.934, + 0.514, + 0.947 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ], + [ + { + "type": "header", + "bbox": [ + 0.091, + 0.084, + 0.504, + 0.097 + ], + "angle": 0, + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + }, + { + "type": "page_number", + "bbox": [ + 0.878, + 0.085, + 0.906, + 0.095 + ], + "angle": 0, + "content": "1:33" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.12, + 0.908, + 0.161 + ], + "angle": 0, + "content": "[137] Yudong Liu, Xu Zhang, Shilin He, Hongyu Zhang, Liquin Li, Yu Kang, Yong Xu, Minghua Ma, Qingwei Lin, Yingnong Dang, et al. 2022. UniParser: A Unified Log Parser for Heterogeneous Log Data. In Proceedings of the ACM Web Conference. 1893-1901." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.162, + 0.908, + 0.204 + ], + "angle": 0, + "content": "[138] Scott Lupton, Hironori Washizaki, Nobukazu Yoshioka, and Yoshiaki Fukazawa. 2021. Literature Review on Log Anomaly Detection Approaches Utilizing Online Parsing Methodology. In Proceedings of the 2021 28th Asia-Pacific Software Engineering Conference. 559-563. https://doi.org/10.1109/APSEC53868.2021.00068" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.205, + 0.908, + 0.245 + ], + "angle": 0, + "content": "[139] Mingqi Lv, HongZhe Gao, Xuebo Qiu, Tieming Chen, Tiantian Zhu, Jinyin Chen, and Shouling Ji. 2024. TREC: APT Tactic/Technique Recognition via Few-Shot Provenance Subgraph Learning. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security. 139-152." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.246, + 0.908, + 0.286 + ], + "angle": 0, + "content": "[140] Yang Lv, Shaona Qin, Zifeng Zhu, Zhuocheng Yu, Shudong Li, and Weihong Han. 2022. A Review of Provenance Graph based APT Attack Detection: Applications and Developments. In Proceedings of the 2022 7th IEEE International Conference on Data Science in Cyberspace. 498-505. 
https://doi.org/10.1109/DSC55868.2022.00075" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.287, + 0.907, + 0.328 + ], + "angle": 0, + "content": "[141] Shiqing Ma, Juan Zhai, Yonghwi Kwon, Kyu Hyung Lee, Xiangyu Zhang, Gabriela Ciocarlie, Ashish Gehani, Vinod Yegneswaran, Dongyan Xu, and Somesh Jha. 2018. Kernel-Supported Cost-Effective Audit Logging for Causality Tracking. In Proceedings of the 2018 USENIX Annual Technical Conference. 241-254." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.329, + 0.908, + 0.37 + ], + "angle": 0, + "content": "[142] Shiqing Ma, Juan Zhai, Fei Wang, Kyu Hyung Lee, Xiangyu Zhang, and Dongyan Xu. 2017. MPI: Multiple Perspective Attack Investigation with Semantic Aware Execution Partitioning. In Proceedings of the 26th USENIX Security Symposium. 1111-1128." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.371, + 0.907, + 0.398 + ], + "angle": 0, + "content": "[143] Shiqing Ma, Xiangyu Zhang, and Dongyan Xu. 2016. ProTracer: Towards Practical Provenance Tracing by Alternating between Logging and Tainting. In Proceedings of the 23rd Annual Network and Distributed System Security Symposium." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.399, + 0.907, + 0.425 + ], + "angle": 0, + "content": "[144] Pedro Manso, José Moura, and Carlos Serrão. 2019. SDN-Based Intrusion Detection System for Early Detection and Mitigation of DDoS Attacks. Information 10, 3 (2019), 106." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.426, + 0.908, + 0.467 + ], + "angle": 0, + "content": "[145] Emaad Manzoor, Sadegh M Milajerdi, and Leman Akoglu. 2016. Fast Memory-Efficient Anomaly Detection in Streaming Heterogeneous Graphs. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1035-1044." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.468, + 0.907, + 0.509 + ], + "angle": 0, + "content": "[146] Qinghua Mao, Xi Lin, Wenchao Xu, Yuxin Qi, Xiu Su, Gaolei Li, and Jianhua Li. 2025. FeCoGraph: Label-Aware Federated Graph Contrastive Learning for Few-Shot Network Intrusion Detection. IEEE Transactions on Information Forensics and Security (2025)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.509, + 0.907, + 0.536 + ], + "angle": 0, + "content": "[147] Yuyi Mao, Changsheng You, Jun Zhang, Kaibin Huang, and Khaled B Letaief. 2017. A Survey on Mobile Edge Computing: The Communication Perspective. IEEE Communications Surveys and Tutorials 19, 4 (2017), 2322-2358." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.537, + 0.908, + 0.564 + ], + "angle": 0, + "content": "[148] Mitch Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building A Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics 19, 2 (1993), 313-330." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.565, + 0.907, + 0.592 + ], + "angle": 0, + "content": "[149] Ariana Martino, Michael Iannelli, and Coleen Truong. 2023. Knowledge Injection to Counter Large Language Model (LLM) Hallucination. In European Semantic Web Conference. Springer, 182-185." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.592, + 0.908, + 0.632 + ], + "angle": 0, + "content": "[150] Ines Martins, Joao S Resende, Patricia R Sousa, Simao Silva, Luis Antunes, and Joao Gama. 2022. Host-based IDS: A Review and Open Issues of An Anomaly Detection System in IoT. Future Generation Computer Systems 133 (2022), 95-113." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.633, + 0.908, + 0.675 + ], + "angle": 0, + "content": "[151] Weibin Meng, Ying Liu, Yichen Zhu, Shenglin Zhang, Dan Pei, Yuqing Liu, Yihao Chen, Ruizhi Zhang, Shimin Tao, Pei Sun, et al. 2019. LogAnomaly: Unsupervised Detection of Sequential and Quantitative Anomalies in Unstructured Logs. 
In Proceedings of the International Joint Conference on Artificial Intelligence, Vol. 19. 4739-4745." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.675, + 0.908, + 0.702 + ], + "angle": 0, + "content": "[152] Noor Michael, Jaron Mink, Jason Liu, Sneha Gaur, Wajih Ul Hassan, and Adam Bates. 2020. On the Forensic Validity of Approximated Audit Logs. In Proceedings of the 36th Annual Computer Security Applications Conference. 189-202." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.702, + 0.908, + 0.73 + ], + "angle": 0, + "content": "[153] Microsoft. [n.d]. Event Tracing - Win32 apps. https://learn.microsoft.com/en-us/windows/win32/etw/event-tracing-portal. 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.73, + 0.908, + 0.757 + ], + "angle": 0, + "content": "[154] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. arXiv preprint arXiv:1301.3781 (2013)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.758, + 0.908, + 0.785 + ], + "angle": 0, + "content": "[155] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed Representations of Words and Phrases and Their Compositionality. Advances in Neural Information Processing Systems 26 (2013)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.786, + 0.908, + 0.827 + ], + "angle": 0, + "content": "[156] Sadegh M Milajerdi, Birhanu Eshete, Rigel Gjomemo, and VN Venkatakrishnan. 2019. Poirot: Aligning Attack Behavior with Kernel Audit Records for Cyber Threat Hunting. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. 1795-1812." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.828, + 0.908, + 0.868 + ], + "angle": 0, + "content": "[157] Sadegh M Milajerdi, Rigel Gjomemo, Birhanu Eshete, Ramachandran Sekar, and VN Venkatakrishnan. 2019. Holmes: Real-time APT Detection through Correlation of Suspicious Information Flows. 
In Proceedings of the 2019 IEEE Symposium on Security and Privacy. IEEE, 1137-1152." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.869, + 0.908, + 0.911 + ], + "angle": 0, + "content": "[158] Seyed Mohammad Mehdi Mirnajafizadeh, Ashwin Raam Sethuram, David Mohaisen, DaeHun Nyang, and Rhongho Jang. 2024. Enhancing Network Attack Detection with Distributed and In-Network Data Collection System. In Proceedings of the 33rd USENIX Security Symposium. 5161-5178." + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.12, + 0.908, + 0.911 + ], + "angle": 0, + "content": null + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.934, + 0.906, + 0.947 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.093, + 0.085, + 0.121, + 0.095 + ], + "angle": 0, + "content": "1:34" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.084, + 0.908, + 0.097 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.12, + 0.908, + 0.148 + ], + "angle": 0, + "content": "[159] Yisroel Mirsky, Tomer Doitshman, Yuval Elovici, and Asaf Shabtai. 2018. Kitsune: An Ensemble of Autoencoders for Online Network Intrusion Detection. Proceedings of the Network and Distributed Systems Security Symposium (2018)." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.149, + 0.907, + 0.175 + ], + "angle": 0, + "content": "[160] Kunal Mukherjee and Murat Kantarcioglu. 2025. LLM-driven Provenance Forensics for Threat Investigation and Detection. arXiv preprint arXiv:2508.21323 (2025)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.177, + 0.907, + 0.217 + ], + "angle": 0, + "content": "[161] Kunal Mukherjee, Joshua Wiedemeier, Tianhao Wang, James Wei, Feng Chen, Muhyun Kim, Murat Kantarcioglu, and Kangkook Jee. 2023. 
Evading Provenance-Based ML Detectors with Adversarial System Actions. In Proceedings of the 32nd USENIX Security Symposium. 1199-1216." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.218, + 0.908, + 0.258 + ], + "angle": 0, + "content": "[162] Muhammad Hassan Nasir, Salman A Khan, Muhammad Mubashir Khan, and Mahawish Fatima. 2022. Swarm Intelligence Inspired Intrusion Detection Systems—A Systematic Literature Review. Computer Networks 205 (2022), 108708." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.26, + 0.908, + 0.3 + ], + "angle": 0, + "content": "[163] Mostafa Nassar, Nirmeen A El-Bahnasawy, HossamEl-Din H Ahmed, Adel A Saleeb, and Fathi E Abd El-Samie. 2019. Network Intrusion Detection, Literature Review and Some Techniques Comparison. In Proceedings of the 2019 15th International Computer Engineering Conference. IEEE, 62-71." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.302, + 0.907, + 0.328 + ], + "angle": 0, + "content": "[164] Alexander Tobias Neumann, Yue Yin, Sulayman Sowe, Stefan Decker, and Matthias Jarke. 2024. An LLM-Driven Chatbot in Higher Education for Databases and Information Systems. IEEE Transactions on Education (2024)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.329, + 0.907, + 0.356 + ], + "angle": 0, + "content": "[165] Zhibin Ni, Pan Fan, Shengzhuo Dai, Bo Zhang, Hai Wan, and Xibin Zhao. 2025. FG-CIBGC: A Unified Framework for Fine-Grained and Class-Incremental Behavior Graph Classification. In Proceedings of the Web Conference." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.357, + 0.907, + 0.397 + ], + "angle": 0, + "content": "[166] Weina Niu, Zhenqi Yu, Zimu Li, Beibei Li, Runzi Zhang, and Xiaosong Zhang. 2022. LogTracer: Efficient Anomaly Tracing Combining System Log Detection and Provenance Graph. In Proceedings of the IEEE Global Communications Conference. IEEE, 3356-3361." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.398, + 0.907, + 0.425 + ], + "angle": 0, + "content": "[167] Christine Nussbaum, Sascha Frühholz, and Stefan R Schweinberger. 2025. Understanding Voice Naturalness. Trends in Cognitive Sciences (2025)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.426, + 0.864, + 0.44 + ], + "angle": 0, + "content": "[168] Connected Papers. 2020. Connected Papers: A Visual Tool for Researchers. https://www.connectedpapers.com" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.44, + 0.908, + 0.481 + ], + "angle": 0, + "content": "[169] Nohil Park, Heeseung Kim, Che Hyun Lee, Jooyoung Choi, Jiheum Yeom, and Sungroh Yoon. 2025. NanoVoice: Efficient Speaker-Adaptive Text-to-Speech for Multiple Speakers. In Proceedings of the ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 1-5." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.482, + 0.908, + 0.508 + ], + "angle": 0, + "content": "[170] Thomas Pasquier, Xueyuan Han, Mark Goldstein, Thomas Moyer, David Eyers, Margo Seltzer, and Jean Bacon. 2017. Practical Whole-System Provenance Capture. In Proceedings of the 2017 Symposium on Cloud Computing. 405-418." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.509, + 0.687, + 0.523 + ], + "angle": 0, + "content": "[171] Igor Pavlov. 2001. LZMA SDK (Software Development Kit). https://www.7-zip.org/" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.523, + 0.908, + 0.563 + ], + "angle": 0, + "content": "[172] Cheng Peng, Xi Yang, Aokun Chen, Kaleb E Smith, Nima PourNejatian, Anthony B Costa, Cheryl Martin, Mona G Flores, Ying Zhang, Tanja Magoc, et al. 2023. A Study of Generative Large Language Model For Medical Research and Healthcare. NPJ Digital Medicine 6, 1 (2023), 210." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.565, + 0.908, + 0.605 + ], + "angle": 0, + "content": "[173] Yihao Peng, Tongxin Zhang, Jieshao Lai, Yuxuan Zhang, Yiming Wu, Hai Wan, and Xibin Zhao. 2025. AutoLabel: Automated Fine-Grained Log Labeling for Cyber Attack Dataset Generation. In 34th USENIX Security Symposium (USENIX Security 25). 547-566." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.606, + 0.798, + 0.619 + ], + "angle": 0, + "content": "[174] Prometheus. 2014. Prometheus - Monitoring System & Time Series Database. https://prometheus.io/" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.62, + 0.908, + 0.66 + ], + "angle": 0, + "content": "[175] Jiaxing Qi, Zhongzhi Luan, Shaohan Huang, Carol Fung, Hailong Yang, and Depei Qian. 2023. SpikeLog: Log-based Anomaly Detection via Potential-Assisted Spiking Neuron Network. IEEE Transactions on Knowledge and Data Engineering 36, 12 (2023), 9322-9335." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.661, + 0.907, + 0.702 + ], + "angle": 0, + "content": "[176] Wei Qiao, Yebo Feng, Teng Li, Zhuo Ma, Yulong Shen, JianFeng Ma, and Yang Liu. 2025. Slot: Provenance-Driven APT Detection through Graph Reinforcement Learning. In Proceedings of the 2025 on ACM SIGSAC Conference on Computer and Communications Security." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.703, + 0.673, + 0.716 + ], + "angle": 0, + "content": "[177] QuickLZ. 2006. QuickLZ: Fastest Compression Library. http://www.quicklz.com/" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.717, + 0.756, + 0.73 + ], + "angle": 0, + "content": "[178] Alec Radford. 2018. Improving Language Understanding by Generative Pre-Training. (2018)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.731, + 0.909, + 0.771 + ], + "angle": 0, + "content": "[179] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. 
Exploring the Limits of Transfer Learning with A Unified Text-to-Text Transformer. Journal of Machine Learning Research 21, 140 (2020), 1-67." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.772, + 0.908, + 0.811 + ], + "angle": 0, + "content": "[180] Baishakhi Ray, Vincent Hellendoorn, Saheel Godhane, Zhaopeng Tu, Alberto Bacchelli, and Premkumar Devanbu. 2016. On the \"Naturalness\" of Buggy Code. In Proceedings of the 38th International Conference on Software Engineering. 428-439." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.812, + 0.908, + 0.84 + ], + "angle": 0, + "content": "[181] Bace Rebecca and Peter Mell. 2001. Intrusion Detection Systems. National Institute of Standards and Technology, Special Publication (2001)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.841, + 0.908, + 0.881 + ], + "angle": 0, + "content": "[182] Mati Ur Rehman, Hadi Ahmadi, and Wajih Ul Hassan. 2024. FLASH: A Comprehensive Approach to Intrusion Detection via Provenance Graph Representation Learning. In Proceedings of the 2024 IEEE Symposium on Security and Privacy. IEEE Computer Society, 139-139." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.882, + 0.907, + 0.91 + ], + "angle": 0, + "content": "[183] Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019. FastSpeech: Fast, Robust and Controllable Text to Speech. Advances in Neural Information Processing Systems 32 (2019)." + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.12, + 0.909, + 0.91 + ], + "angle": 0, + "content": null + }, + { + "type": "footer", + "bbox": [ + 0.088, + 0.934, + 0.514, + 0.947 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ], + [ + { + "type": "header", + "bbox": [ + 0.091, + 0.084, + 0.504, + 0.097 + ], + "angle": 0, + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + }, + { + "type": "page_number", + "bbox": [ + 0.878, + 0.085, + 0.906, + 0.095 + ], + "angle": 0, + "content": "1:35" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.12, + 0.908, + 0.148 + ], + "angle": 0, + "content": "[184] Andy Riddle, Kim Westfall, and Adam Bates. 2023. Atlasv2: Atlas Attack Engagements, Version 2. arXiv preprint arXiv:2401.01341 (2023)." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.148, + 0.907, + 0.175 + ], + "angle": 0, + "content": "[185] Malajah Roberts, Jonathan Anderson, William Delgado, Richard Johnson, and Lawrence Spencer. 2024. Extending Contextual Length and World Knowledge Generalization in Large Language Models. (2024)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.177, + 0.907, + 0.204 + ], + "angle": 0, + "content": "[186] Kirk Rodrigues, Yu Luo, and Ding Yuan. 2021. CLP: Efficient and Scalable Search on Compressed Text Logs. In Proceedings of the 15th USENIX Symposium on Operating Systems Design and Implementation. 183-198." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.205, + 0.909, + 0.231 + ], + "angle": 0, + "content": "[187] Ronald Rosenfeld. 2000. Two Decades of Statistical Language Modeling: Where Do We Go from Here? Proceedings of the IEEE 88, 8 (2000), 1270-1278." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.232, + 0.908, + 0.259 + ], + "angle": 0, + "content": "[188] Tejaswini S and Azra Nasreen. 2021. Survey on Online Log Parsing. Regular issue (2021). https://api.semanticscholar.org/CorpusID:236861650" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.26, + 0.752, + 0.274 + ], + "angle": 0, + "content": "[189] Vijay Samuel. 2018. Monitoring Anything and Everything with Beats at eBay. (2018)." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.274, + 0.698, + 0.287 + ], + "angle": 0, + "content": "[190] Michael Schindler. 1999. SZIP Compression. http://www.compressconsult.com/szip/" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.288, + 0.908, + 0.314 + ], + "angle": 0, + "content": "[191] Frank Schwellinger. 2008. Ocamyd: A File (De-)Compressor Based on the DMC Algorithm. https://www.geocities.ws/ocamyd/" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.315, + 0.908, + 0.355 + ], + "angle": 0, + "content": "[192] Issam Sedki, Abdelwahab Hamou-Lhadj, Otmane Ait-Mohamed, and Mohammed A Shehab. 2022. An Effective Approach for Parsing Large Log Files. In Proceedings of the 2022 IEEE International Conference on Software Maintenance and Evolution. IEEE, 1-12." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.357, + 0.908, + 0.384 + ], + "angle": 0, + "content": "[193] R Sekar, Hanke Kimm, and Rohit Aich. 2024. eAudit: A Fast, Scalable and Deployable Audit Data Collection System. In Proceedings of the IEEE Symposium on Security and Privacy. IEEE, 3571-3589." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.385, + 0.693, + 0.398 + ], + "angle": 0, + "content": "[194] Julian Seward. 1996. bzip2: A High-Quality Data Compressor. http://www.bzip.org/" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.398, + 0.908, + 0.424 + ], + "angle": 0, + "content": "[195] Claude E Shannon. 1948. A Mathematical Theory of Communication. The Bell System Technical Journal 27, 3 (1948), 379-423." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.425, + 0.908, + 0.452 + ], + "angle": 0, + "content": "[196] Claude E Shannon. 1951. The Redundancy of English. In Cybernetics; Transactions of the 7th Conference, New York: Josiah Macy, Jr. Foundation. 248-272." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.453, + 0.908, + 0.493 + ], + "angle": 0, + "content": "[197] Madhukar Shrestha, Yonghyun Kim, Jeehyun Oh, Junghwan Rhee, Yung Ryn Choe, Fei Zuo, Myungah Park, and Gang Qian. 2023. ProvSec: Open Cybersecurity System Provenance Analysis Benchmark Dataset with Labels. International Journal of Networked and Distributed Computing 11, 2 (2023), 112-123." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.495, + 0.908, + 0.522 + ], + "angle": 0, + "content": "[198] Rakesh Shrestha, Atefeh Omidkar, Sajjad Ahmadi Roudi, Robert Abbas, and Shiho Kim. 2021. Machine-Learning-Enabled Intrusion Detection System for Cellular Connected UAV Networks. Electronics 10, 13 (2021), 1549." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.523, + 0.908, + 0.564 + ], + "angle": 0, + "content": "[199] Zhuoxue Song, Ziming Zhao, Fan Zhang, Gang Xiong, Guang Cheng, Xinjie Zhao, Shize Guo, and Binbin Chen. 2022. I²RNN: An Incremental and Interpretable Recurrent Neural Network for Encrypted Traffic Classification. IEEE Transactions on Dependable and Secure Computing (2022)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.565, + 0.908, + 0.617 + ], + "angle": 0, + "content": "[200] Manolis Stamatogiannakis, Paul Groth, and Herbert Bos. 2015. Looking Inside the Black-Box: Capturing Data Provenance Using Dynamic Instrumentation. In Provenance and Annotation of Data and Processes: 5th International Provenance and Annotation Workshop, IPAW 2014, Cologne, Germany, June 9-13, 2014. Revised Selected Papers 5. Springer, 155-167." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.619, + 0.908, + 0.658 + ], + "angle": 0, + "content": "[201] Branka Stojanovic, Katharina Hofer-Schmitz, and Ulrike Kleb. 2020. APT Datasets and Attack Modeling for Automated Detection Methods: A Review. Computer Security 92 (2020), 101734. 
https://api.semanticscholar.org/CorpusID:213320542" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.661, + 0.908, + 0.702 + ], + "angle": 0, + "content": "[202] Hongbin Sun, Su Wang, Zhiliang Wang, Zheyu Jiang, Dongqi Han, and Jiahai Yang. 2024. AudiTrim: A Real-time, General, Efficient, and Low-overhead Data Compaction System for Intrusion Detection. In Proceedings of the 27th International Symposium on Research in Attacks, Intrusions and Defenses. 263-277." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.703, + 0.908, + 0.743 + ], + "angle": 0, + "content": "[203] Alexey Svyatkovskiy, Shao Kun Deng, Shengyu Fu, and Neel Sundaresan. 2020. IntellicodeCompose: Code Generation Using Transformer. In Proceedings of the 28th ACM joint meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 1433-1443." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.744, + 0.908, + 0.785 + ], + "angle": 0, + "content": "[204] Dan Tang, Yudong Yan, Chenjun Gao, Wei Liang, and Wenqiang Jin. 2023. LtRFT: Mitigate the Low-Rate Data Plane DDoS Attack with Learning-to-Rank Enabled Flow Tables. IEEE Transactions on Information Forensics and Security 18 (2023), 3143-3157." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.786, + 0.908, + 0.827 + ], + "angle": 0, + "content": "[205] Yutao Tang, Ding Li, Zhichun Li, Mu Zhang, Kangkook Jee, Xusheng Xiao, Zhenyu Wu, Junghwan Rhee, Fengyuan Xu, and Qun Li. 2018. NodeMerge: Template Based Efficient Data Reduction for Big-Data Causality Analysis. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. 1324–1337." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.828, + 0.907, + 0.855 + ], + "angle": 0, + "content": "[206] Joerg Thalheim, Pramod Bhatotia, and Christof Fetzer. 2016. Inspector: Data Provenance Using Intel Processor Trace (PT). 
In Proceedings of the 2016 IEEE 36th International Conference on Distributed Computing Systems. IEEE, 25-34." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.856, + 0.908, + 0.895 + ], + "angle": 0, + "content": "[207] Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. Lamda: Language Models for Dialog Applications. arXiv preprint arXiv:2201.08239 (2022)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.896, + 0.74, + 0.911 + ], + "angle": 0, + "content": "[208] ThoughtWorks. 2004. Selenium RC. http://www.seleniumhq.org/projects/remote-control/" + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.12, + 0.909, + 0.911 + ], + "angle": 0, + "content": null + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.934, + 0.906, + 0.947 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.092, + 0.085, + 0.121, + 0.095 + ], + "angle": 0, + "content": "1:36" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.084, + 0.908, + 0.097 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.12, + 0.908, + 0.161 + ], + "angle": 0, + "content": "[209] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and Efficient Foundation Language Models. arXiv preprint arXiv:2302.13971 (2023)." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.163, + 0.538, + 0.176 + ], + "angle": 0, + "content": "[210] Aqua Tracee. 2022. Runtime eBPF Threat Detection Engine." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.177, + 0.907, + 0.204 + ], + "angle": 0, + "content": "[211] Devharsh Trivedi, Aymen Boudguiga, Nesrine Kaaniche, and Nikos Triandopoulos. 2023. SigML++: Supervised Log Anomaly with Probabilistic Polynomial Approximation. Cryptography 7, 4 (2023), 52." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.205, + 0.907, + 0.232 + ], + "angle": 0, + "content": "[212] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. Advances in Neural Information Processing Systems 30 (2017)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.233, + 0.907, + 0.259 + ], + "angle": 0, + "content": "[213] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, Yoshua Bengio, et al. 2017. Graph Attention Networks. stat 1050, 20 (2017), 10-48550." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.26, + 0.907, + 0.286 + ], + "angle": 0, + "content": "[214] Arthur Vervaet, Raja Chiky, and Mar Callau-Zori. 2021. USTEP: Unfixed Search Tree for Efficient Log Parsing. In Proceedings of the 2021 IEEE International Conference on Data Mining. IEEE, 659-668." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.288, + 0.909, + 0.314 + ], + "angle": 0, + "content": "[215] David Wagner and Paolo Soto. 2002. Mimicry Attacks on Host-Based Intrusion Detection Systems. In Proceedings of the 9th ACM Conference on Computer and Communications Security. 255-264." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.315, + 0.908, + 0.357 + ], + "angle": 0, + "content": "[216] Qi Wang, Wajih Ul Hassan, Ding Li, Kangkook Jee, Xiao Yu, Kexuan Zou, Junghwan Rhee, Zhengzhang Chen, Wei Cheng, Carl A Gunter, et al. 2020. You Are What You Do: Hunting Stealthy Malware via Data Provenance Analysis. In Proceedings of the Network and Distributed System Security Symposium." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.357, + 0.908, + 0.398 + ], + "angle": 0, + "content": "[217] Rui Wang, Devin Gibson, Kirk Rodrigues, Yu Luo, Yun Zhang, Kaibo Wang, Yupeng Fu, Ting Chen, and Ding Yuan. 2024. \\(\\mu\\)Slope: High Compression and Fast Search on Semi-Structured Logs. In Proceedings of the 18th USENIX Symposium on Operating Systems Design and Implementation. 529-544." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.399, + 0.908, + 0.438 + ], + "angle": 0, + "content": "[218] Ruihua Wang, Yihao Peng, Yilun Sun, Xuancheng Zhang, Hai Wan, and Xibin Zhao. 2023. TeSec: Accurate Server-Side Attack Investigation for Web Applications. In Proceedings of the 2023 IEEE Symposium on Security and Privacy. IEEE, 2799-2816." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.439, + 0.908, + 0.481 + ], + "angle": 0, + "content": "[219] Su Wang, Zhiliang Wang, Tao Zhou, Hongbin Sun, Xia Yin, Dongqi Han, Han Zhang, Xingang Shi, and Jiahai Yang. 2022. threaTrace: Detecting and Tracing Host-Based Threats in Node Level Through Provenance Graph Learning. IEEE Transactions on Information Forensics and Security 17 (2022), 3972-3987." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.482, + 0.908, + 0.522 + ], + "angle": 0, + "content": "[220] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022. Emergent Abilities of Large Language Models. arXiv preprint arXiv:2206.07682 (2022)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.523, + 0.908, + 0.563 + ], + "angle": 0, + "content": "[221] Wei Wei, Sijin Chen, Cen Chen, Heshi Wang, Jing Liu, Zhongyao Cheng, and Xiaofeng Zou. 2024. HEN: A Novel Hybrid Explainable Neural Network Based Framework for Robust Network Intrusion Detection. Science China Information Sciences 67, 7 (2024), 170304." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.565, + 0.908, + 0.605 + ], + "angle": 0, + "content": "[222] Cong Wu, Jianfei Sun, Jing Chen, Mamoun Alazab, Yang Liu, and Yang Xiang. 2025. TCG-IDS: Robust Network Intrusion Detection via Temporal Contrastive Graph Learning. IEEE Transactions on Information Forensics and Security (2025)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.606, + 0.908, + 0.633 + ], + "angle": 0, + "content": "[223] Weiheng Wu, Wei Qiao, Teng Li, Yebo Feng, Zhuo Ma, Jianfeng Ma, and Yang Liu. 2025. ProvX: Generating Counterfactual-Driven Attack Explanations for Provenance-Based Detection. arXiv preprint arXiv:2508.06073 (2025)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.634, + 0.908, + 0.674 + ], + "angle": 0, + "content": "[224] Yafeng Wu, Yulai Xie, Xuelong Liao, Pan Zhou, Dan Feng, Lin Wu, Xuan Li, Avani Wildani, and Darrell Long. 2022. Paradise: Real-Time, Generalized, and Distributed Provenance-Based Intrusion Detection. IEEE Transactions on Dependable and Secure Computing 20, 2 (2022), 1624-1640." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.675, + 0.908, + 0.715 + ], + "angle": 0, + "content": "[225] Yixuan Wu, Long Zhang, Lin Yang, Feng Yang, Linru Ma, Zhoumin Lu, and Wen Jiang. 2025. Intrusion Detection for Internet of Things: An Anchor Graph Clustering Approach. IEEE Transactions on Information Forensics and Security (2025)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.716, + 0.908, + 0.757 + ], + "angle": 0, + "content": "[226] Tong Xiao, Zhe Quan, Zhi-Jie Wang, Kaiqi Zhao, Xiangke Liao, Huang Huang, Yunfei Du, and Kenli Li. 2023. LPV: A Log Parsing Framework Based on Vectorization. IEEE Transactions on Network and Service Management 20, 3 (2023), 2711-2725." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.758, + 0.908, + 0.799 + ], + "angle": 0, + "content": "[227] Yulai Xie, Dan Feng, Yuchong Hu, Yan Li, Staunton Sample, and Darrell Long. 2018. 
Pagoda: A Hybrid Approach to Enable Efficient Real-Time Provenance Based Intrusion Detection in Big Data Environments. IEEE Transactions on Dependable and Secure Computing 17, 6 (2018), 1283-1296." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.8, + 0.908, + 0.827 + ], + "angle": 0, + "content": "[228] Yulai Xie, Kiran-Kumar Muniswamy-Reddy, Darrell DE Long, Ahmed Amer, Dan Feng, and Zhipeng Tan. 2011. Compressing Provenance Graphs. In Proceedings of the 3rd USENIX Workshop on the Theory and Practice of Provenance." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.828, + 0.908, + 0.868 + ], + "angle": 0, + "content": "[229] Junjielong Xu, Qiuai Fu, Zhourui xing Zhu, Yutong Cheng, Zhijing Li, Yuchi Ma, and Pinjia He. 2023. Hue: A User-Adaptive Parser for Hybrid Logs. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 413-424." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.869, + 0.908, + 0.909 + ], + "angle": 0, + "content": "[230] Jiacen Xu, Xiaokui Shu, and Zhou Li. 2024. Understanding and Bridging the Gap between Unsupervised Network Representation Learning and Security Analytics. In Proceedings of the 2024 IEEE Symposium on Security and Privacy. IEEE, 3590-3608." + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.12, + 0.909, + 0.909 + ], + "angle": 0, + "content": null + }, + { + "type": "footer", + "bbox": [ + 0.088, + 0.934, + 0.514, + 0.947 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ], + [ + { + "type": "header", + "bbox": [ + 0.091, + 0.084, + 0.504, + 0.097 + ], + "angle": 0, + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + }, + { + "type": "page_number", + "bbox": [ + 0.878, + 0.085, + 0.906, + 0.095 + ], + "angle": 0, + "content": "1:37" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.12, + 0.908, + 0.16 + ], + "angle": 0, + "content": "[231] Wei Xu, Ling Huang, Armando Fox, David Patterson, and Michael I Jordan. 2009. Detecting Large-scale System Problems by Mining Console Logs. In Proceedings of the ACM SIGOPS 22nd Symposium on Operating Systems Principles. 117-132." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.161, + 0.908, + 0.203 + ], + "angle": 0, + "content": "[232] Zhiqiang Xu, Pengcheng Fang, Changlin Liu, Xusheng Xiao, Yu Wen, and Dan Meng. 2022. DepComm: Graph Summarization on System Audit Logs for Attack Investigation. In Proceedings of the 2022 IEEE Symposium on Security and Privacy. IEEE, 540-557." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.204, + 0.908, + 0.245 + ], + "angle": 0, + "content": "[233] Zhiwei Xu, Shaohua Qiang, Dinghong Song, Min Zhou, Hai Wan, Xibin Zhao, Ping Luo, and Hongyu Zhang. 2024. DSFM: Enhancing Functional Code Clone Detection with Deep Subtree Interactions. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering. 1-12." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.246, + 0.907, + 0.286 + ], + "angle": 0, + "content": "[234] Zhang Xu, Zhenyu Wu, Zhichun Li, Kangkook Jee, Junghwan Rhee, Xusheng Xiao, Fengyuan Xu, Haining Wang, and Guofei Jiang. 2016. High Fidelity Data Reduction for Big Data Security Dexterity Analyses. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. 504-516." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.287, + 0.907, + 0.328 + ], + "angle": 0, + "content": "[235] Zhiwei Xu, Min Zhou, Xibin Zhao, Yang Chen, Xi Cheng, and Hongyu Zhang. 2023. xASTNN: Improved Code Representations for Industrial Practice. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 1727-1738." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.329, + 0.907, + 0.37 + ], + "angle": 0, + "content": "[236] Yu Xue, Bernard-marie Onzo, and Ferrante Neri. 2021. Intrusion Detection System Based on an Updated ANN Model. In Advances in Swarm Intelligence: 12th International Conference, ICSI 2021, Qingdao, China, July 17-21, 2021, Proceedings, Part II 12. Springer, 472-479." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.371, + 0.906, + 0.398 + ], + "angle": 0, + "content": "[237] Fan Yang, Jiacen Xu, Chunlin Xiong, Zhou Li, and Kehuan Zhang. 2023. ProGrapher: An Anomaly Detection System based on Provenance Graph Embedding. In Proceedings of the 32nd USENIX Security Symposium. 4355-4372." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.399, + 0.908, + 0.439 + ], + "angle": 0, + "content": "[238] Lin Yang, Junjie Chen, Zan Wang, Weijing Wang, Jiajun Jiang, Xuyuan Dong, and Wenbin Zhang. 2021. Semi-Supervised Log-Based Anomaly Detection via Probabilistic Label Estimation. In Proceedings of the 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). IEEE, 1448-1460." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.44, + 0.908, + 0.481 + ], + "angle": 0, + "content": "[239] Runqing Yang, Shiqing Ma, Haitao Xu, Xiangyu Zhang, and Yan Chen. 2020. UIScope: Accurate, Instrumentation-free, and Visible Attack Investigation for GUI Applications. In Proceedings of the Network and Distributed Systems Security Symposium." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.481, + 0.907, + 0.508 + ], + "angle": 0, + "content": "[240] Zhaohui Yang, Wei Xu, Le Liang, Yuanhao Cui, Zhijin Qin, and Mérouane Debbah. 2025. On Privacy, Security, and Trustworthiness in Distributed Wireless Large AI Models. Science China Information Sciences 68, 7 (2025), 1-15." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.509, + 0.907, + 0.536 + ], + "angle": 0, + "content": "[241] Jia-Yu Yao, Kun-Peng Ning, Zhen-Hui Liu, Mu-Nan Ning, Yu-Yang Liu, and Li Yuan. 2023. LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples. arXiv preprint arXiv:2310.01469 (2023)." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.537, + 0.907, + 0.564 + ], + "angle": 0, + "content": "[242] Kundi Yao, Heng Li, Weiyi Shang, and Ahmed E Hassan. 2020. A Study of the Performance of General Compressors on Log Files. Empirical Software Engineering 25 (2020), 3043-3085." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.565, + 0.907, + 0.592 + ], + "angle": 0, + "content": "[243] Kundi Yao, Mohammed Sayagh, Weiyi Shang, and Ahmed E Hassan. 2021. Improving State-of-the-Art Compression Techniques for Log Management Tools. IEEE Transactions on Software Engineering 48, 8 (2021), 2748-2760." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.592, + 0.907, + 0.62 + ], + "angle": 0, + "content": "[244] Yifan Yao, Jinhao Duan, Kaidi Xu, Yuanfang Cai, Zhibo Sun, and Yue Zhang. 2024. A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and the Ugly. *High-Confidence Computing* (2024), 100211." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.62, + 0.908, + 0.66 + ], + "angle": 0, + "content": "[245] Heng Yin, Dawn Song, Manuel Egele, Christopher Kruegel, and Engin Kirda. 2007. Panorama: Capturing System-Wide Information Flow for Malware Detection and Analysis. In Proceedings of the 14th ACM Conference on Computer and Communications Security. 116-127." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.66, + 0.908, + 0.702 + ], + "angle": 0, + "content": "[246] Kun Yin, Meng Yan, Ling Xu, Zhou Xu, Zhao Li, Dan Yang, and Xiaohong Zhang. 2020. Improving Log-Based Anomaly Detection with Component-Aware Analysis. In Proceedings of the 2020 IEEE International Conference on Software Maintenance and Evolution. IEEE, 667-671." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.702, + 0.908, + 0.744 + ], + "angle": 0, + "content": "[247] Guangba Yu, Pengfei Chen, Pairui Li, Tianjun Weng, Haibing Zheng, Yuetang Deng, and Zibin Zheng. 2023. LogReducer: Identify and Reduce Log Hotspots in Kernel on the Fly. In Proceedings of the 2023 IEEE/ACM 45th International Conference on Software Engineering. IEEE, 1763-1775." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.744, + 0.908, + 0.786 + ], + "angle": 0, + "content": "[248] Le Yu, Shiqing Ma, Zhuo Zhang, Guanhong Tao, Xiangyu Zhang, Dongyan Xu, Vincent E Urias, Han Wei Lin, Gabriela F Ciocarlie, Vinod Yegneswaran, et al. 2021. ALchemist: Fusing Application and Audit Logs for Precise Attack Provenance without Instrumentation. In Proceedings of the Network and Distributed System Security Symposium." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.786, + 0.908, + 0.826 + ], + "angle": 0, + "content": "[249] Siyu Yu, Yifan Wu, Ying Li, and Pinjia He. 2024. Unlocking the Power of Numbers: Log Compression via Numeric Token Parsing. In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering. 919-930." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.827, + 0.908, + 0.868 + ], + "angle": 0, + "content": "[250] Jun Zengy, Xiang Wang, Jiahao Liu, Yinfang Chen, Zhenkai Liang, Tat-Seng Chua, and Zheng Leong Chua. 2022. ShadeWatcher: Recommendation-Guided Cyber Threat Analysis Using System Audit Records. In Proceedings of the 2022 IEEE Symposium on Security and Privacy. IEEE, 489-506." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.869, + 0.908, + 0.911 + ], + "angle": 0, + "content": "[251] Chao Zha, Zhiyu Wang, Yifei Fan, Bing Bai, Yinjie Zhang, Sainan Shi, and Ruyun Zhang. 2025. A-NIDS: Adaptive Network Intrusion Detection System based on Clustering and Stacked CTGAN. IEEE Transactions on Information Forensics and Security (2025)." + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.12, + 0.908, + 0.911 + ], + "angle": 0, + "content": null + }, + { + "type": "footer", + "bbox": [ + 0.481, + 0.934, + 0.906, + 0.948 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.092, + 0.085, + 0.121, + 0.095 + ], + "angle": 0, + "content": "1:38" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.084, + 0.908, + 0.097 + ], + "angle": 0, + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.12, + 0.909, + 0.161 + ], + "angle": 0, + "content": "[252] Bo Zhang, Yansong Gao, Changlong Yu, Boyu Kuang, Zhi Zhang, Hyoungshick Kim, and Anmin Fu. 2025. TAPAS: An Efficient Online APT Detection with Task-guided Process Provenance Graph Segmentation and Analysis. In Proceedings of the USENIX Security Symposium. 607-624." + }, + { + "type": "ref_text", + "bbox": [ + 0.091, + 0.162, + 0.908, + 0.203 + ], + "angle": 0, + "content": "[253] Pei Zhang, Fangzhou He, Han Zhang, Jiankun Hu, Xiaohong Huang, Jilong Wang, Xia Yin, Huahong Zhu, and Yahui Li. 2023. Real-Time Malicious Traffic Detection with Online Isolation Forest over SD-WAN. IEEE Transactions on Information Forensics and Security 18 (2023), 2076-2090." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.204, + 0.909, + 0.245 + ], + "angle": 0, + "content": "[254] Shenglin Zhang, Yuhe Ji, Jiaqi Luan, Xiaohui Nie, Ziang Chen, Minghua Ma, Yongqian Sun, and Dan Pei. 2024. 
End-to-End Automl for Unsupervised Log Anomaly Detection. In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering. 1680–1692." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.246, + 0.908, + 0.286 + ], + "angle": 0, + "content": "[255] Tianzhu Zhang, Han Qiu, Gabriele Castellano, Myriana Rifai, Chung Shue Chen, and Fabio Pianese. 2023. System Log Parsing: A Survey. IEEE Transactions on Knowledge and Data Engineering 35, 8 (2023), 8596-8614. https://doi.org/10.1109/TKDE.2022.3222417" + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.287, + 0.907, + 0.314 + ], + "angle": 0, + "content": "[256] Tianye Zhang, Xumeng Wang, Zongzhuang Li, Fangzhou Guo, Yuxin Ma, and Wei Chen. 2017. A Survey of Network Anomaly Visualization. Science China Information Sciences 60, 12 (2017), 121101." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.315, + 0.909, + 0.37 + ], + "angle": 0, + "content": "[257] Xu Zhang, Yong Xu, Qingwei Lin, Bo Qiao, Hongyu Zhang, Yingnong Dang, Chunyu Xie, Xinsheng Yang, Qian Cheng, Ze Li, et al. 2019. Robust Log-Based Anomaly Detection on Unstable Log Data. In Proceedings of the 2019 27th ACM joint meeting on European software engineering conference and symposium on the foundations of software engineering. 807-817." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.371, + 0.908, + 0.411 + ], + "angle": 0, + "content": "[258] Huaqin Zhao, Zhengliang Liu, Zihao Wu, Yiwei Li, Tianze Yang, Peng Shu, Shaochen Xu, Haixing Dai, Lin Zhao, Gengchen Mai, et al. 2024. Revolutionizing Finance with LLMs: An Overview of Applications and Insights. arXiv preprint arXiv:2401.11641 (2024)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.412, + 0.908, + 0.453 + ], + "angle": 0, + "content": "[259] Jianjin Zhao, Qi Li, Zewei Han, Junsong Fu, Guoshun Nan, Meng Shen, and Bharat K Bhargava. 2024. 
ReTrial: Robust Encrypted Malicious Traffic Detection via Discriminative Relation Incorporation and Misleading Relation Correction. IEEE Transactions on Information Forensics and Security (2024)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.453, + 0.908, + 0.494 + ], + "angle": 0, + "content": "[260] Ruijie Zhao, Xianwen Deng, Zhicong Yan, Jun Ma, Zhi Xue, and Yijun Wang. 2022. MT-FlowFormer: A Semi-Supervised Flow Transformer for Encrypted Traffic Classification. In Proceedings of the 28th ACM SIGKDD conference on knowledge discovery and data mining. 2576-2584." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.495, + 0.907, + 0.522 + ], + "angle": 0, + "content": "[261] Ying Zhao, FangFang Zhou, XiaoPing Fan, Xing Liang, and YongGang Liu. 2013. IDSRadar: A Real-Time Visualization Framework for IDS Alerts. Science China Information Sciences 56, 8 (2013), 1-12." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.522, + 0.908, + 0.564 + ], + "angle": 0, + "content": "[262] Ziming Zhao, Zhaoxuan Li, Jialun Jiang, Fengyuan Yu, Fan Zhang, Congyuan Xu, Xinjie Zhao, Rui Zhang, and Shize Guo. 2022. ERNN: Error-Resilient RNN for Encrypted Traffic Detection Towards Network-Induced Phenomena. IEEE Transactions on Dependable and Secure Computing (2022)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.565, + 0.908, + 0.604 + ], + "angle": 0, + "content": "[263] Ziming Zhao, Zhuotao Liu, Huan Chen, Fan Zhang, Zhuoxue Song, and Zhaoxuan Li. 2024. Effective DDoS Mitigation via ML-Driven In-Network Traffic Shaping. IEEE Transactions on Dependable and Secure Computing 21, 4 (2024), 4271-4289." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.605, + 0.908, + 0.647 + ], + "angle": 0, + "content": "[264] Ying Zhong, Zhiliang Wang, Xingang Shi, Jiahai Yang, and Keqin Li. 2024. RFG-HELAD: A Robust Fine-Grained Network Traffic Anomaly Detection Model Based on Heterogeneous Ensemble Learning. 
IEEE Transactions on Information Forensics and Security (2024)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.647, + 0.908, + 0.688 + ], + "angle": 0, + "content": "[265] Junwei Zhou, Shaowen Ying, Shulan Wang, Dongdong Zhao, Jianwen Xiang, Kaitai Liang, and Peng Liu. 2025. LogDLR: Unsupervised Cross-System Log Anomaly Detection Through Domain-Invariant Latent Representation. IEEE Transactions on Dependable and Secure Computing (2025)." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.689, + 0.908, + 0.73 + ], + "angle": 0, + "content": "[266] Jieming Zhu, Shilin He, Pinjia He, Jinyang Liu, and Michael R Lyu. 2023. Loghub: A Large Collection of System Log Datasets for AI-Driven Log Analytics. In Proceedings of the 2023 IEEE 34th International Symposium on Software Reliability Engineering. IEEE, 355-366." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.73, + 0.908, + 0.771 + ], + "angle": 0, + "content": "[267] Tiantian Zhu, Jiayu Wang, Linqi Ruan, Chunlin Xiong, Jinkai Yu, Yaosheng Li, Yan Chen, Mingqi Lv, and Tieming Chen. 2021. General, Efficient, and Real-Time Data Compaction Strategy for APT Forensic Analysis. IEEE Transactions on Information Forensics and Security 16 (2021), 3312-3325." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.772, + 0.908, + 0.812 + ], + "angle": 0, + "content": "[268] Tiantian Zhu, Jinkai Yu, Chunlin Xiong, Wenrui Cheng, Qixuan Yuan, Jie Ying, Tieming Chen, Jiabo Zhang, Mingqi Lv, Yan Chen, et al. 2023. APTSHIELD: A Stable, Efficient and Real-time APT Detection System for Linux Hosts. IEEE Transactions on Dependable and Secure Computing 20, 6 (2023), 5247-5264." + }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.813, + 0.908, + 0.841 + ], + "angle": 0, + "content": "[269] Yao Zhu, LI Zhenyuan, Yangyang Wei, and Shouling Ji. 2025. The Case for Learned Provenance-based System Behavior Baseline. In Forty-second International Conference on Machine Learning." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.092, + 0.841, + 0.908, + 0.88 + ], + "angle": 0, + "content": "[270] Michael Zipperle, Florian Gottwalt, Elizabeth Chang, and Tharam S. Dillon. 2022. Provenance-based Intrusion Detection Systems: A Survey. ACM Computing Surveys 55 (2022), 1 - 36. https://api-semanticscholar.org/CorpusID:249579087" + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.12, + 0.909, + 0.88 + ], + "angle": 0, + "content": null + }, + { + "type": "footer", + "bbox": [ + 0.088, + 0.934, + 0.514, + 0.947 + ], + "angle": 0, + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] +] \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07839/e60cb9ee-e216-46b4-a879-cab7695d37bd_origin.pdf b/data/2025/2504_07xxx/2504.07839/e60cb9ee-e216-46b4-a879-cab7695d37bd_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0bfd502abef54026fc3de00d10e4f7608d8e010c --- /dev/null +++ b/data/2025/2504_07xxx/2504.07839/e60cb9ee-e216-46b4-a879-cab7695d37bd_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eda784656bed0ee3573a6a62545ee4bafaaf7739a81a0574bff662d7846a40b9 +size 3232094 diff --git a/data/2025/2504_07xxx/2504.07839/full.md b/data/2025/2504_07xxx/2504.07839/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d8b212fa03a04b09c19f9a7d0e5587afba593dfb --- /dev/null +++ b/data/2025/2504_07xxx/2504.07839/full.md @@ -0,0 +1,709 @@ +# Deep Learning-based Intrusion Detection Systems: A Survey + +ZHIWEI XU, YUJUAN WU, SHIHENG WANG, JIABAO GAO, TIAN QIU, ZIQI WANG, HAI WAN, and XIBIN ZHAO*, KLISS, BNRist, School of Software, Tsinghua University, China + +Intrusion Detection Systems (IDS) have long been a hot topic in the cybersecurity community. In recent years, with the introduction of deep learning (DL) techniques, IDS have made great progress due to their increasing generalizability. 
The rationale behind this is that by learning the underlying patterns of known system behaviors, IDS detection can be generalized to intrusions that exploit zero-day vulnerabilities. In this survey, we refer to this type of IDS as DL-based IDS (DL-IDS). From the perspective of DL, this survey systematically reviews all the stages of DL-IDS, including data collection, log storage, log parsing, graph summarization, attack detection, and attack investigation. To accommodate current researchers, a section describing the publicly available benchmark datasets is included. This survey further discusses current challenges and potential future research directions, aiming to help researchers understand the basic ideas and visions of DL-IDS research, as well as to motivate their research interests. + +CCS Concepts: $\cdot$ Security and privacy $\rightarrow$ Intrusion detection systems; $\cdot$ Computing methodologies $\rightarrow$ Machine learning; $\cdot$ General and reference $\rightarrow$ Surveys and overviews. + +Additional Key Words and Phrases: Intrusion detection systems, deep learning, survey + +# ACM Reference Format: + +Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao. 2025. Deep Learning-based Intrusion Detection Systems: A Survey. J. ACM 1, 1, Article 1 (October 2025), 38 pages. + +# 1 INTRODUCTION + +The promising Internet of Everything connects people, processes, data, and things through the Internet [51], bringing convenience and efficiency to the world. Yet its inevitable security vulnerabilities could be exploited by deliberate attackers. With increasingly sophisticated attack methods such as Advanced Persistent Threat (APT), the attackers are in a threatening position to sabotage network systems or steal sensitive data. The detection of intrusions, particularly based on DL, has consequently been a prominent topic in the cybersecurity community. + +The automated system for detecting intrusions is known as IDS. 
The limitations of IDS may result in severe damage to enterprises. One example is the recent Colonial Pipeline Ransomware Attack [16]. In April 2021, the hacking group DarkSide launched a ransomware attack on Colonial Pipeline, the biggest oil pipeline company in the United States, using an unused VPN account. Due to this attack, 5,500 miles of transportation pipelines were forced to shut down, affecting nearly $45\%$ of the fuel supply on the East Coast. Colonial Pipeline paid a $4.4 million ransom, in addition to suffering the theft of over 100 GB of data. Had the intrusion been detected in time, the impact of this attack could have been greatly mitigated or even eliminated. + +# 1.1 Tough but Bright Intrusion Detection System + +IDS have been increasingly challenged to deal effectively with intrusions for decades. It is noted in Figure 1(a) that the number of $\mathrm{CVE}^1$ records has presented an accelerating uptrend, especially + +![](images/0a3a671b99c38b8ebc4032a6fcaa55adab684f519233bcac83c7d147cbdd5f40.jpg) +(a) Trend of CVE records and IDS papers. +Fig. 1. Recent situation of IDS. + +![](images/38a51c03212e981f824eb90d45503951c547858345408e45db6fc22f829de565.jpg) +(b) Category of CNNVD vulnerabilities. + +in 2016, which saw a sharp rise. After 2016, the number of CVE records has kept growing rapidly, reaching around 30,000 in 2024. Besides, according to the $\mathrm{CNNVD}^2$ report shown in Figure 1(b), we can observe that almost all (i.e., $97.2\%$ ) vulnerabilities are of medium risk or above, with high and critical risk accounting for $40\%$ of them. The growing number of vulnerabilities and the large percentage of high-risk ones both reveal the tough situation faced by IDS. + +Nevertheless, an interesting observation from Figure 1(a) is that, alongside the number of CVE records, DL-IDS papers also started to emerge in 2016, and their number has grown year by year since.
Notably, the growth trend of DL-IDS papers is nearly the same as that of CVE records. A plausible explanation is that DL offers IDS an effective way to cope with this tough situation. Borrowing the strong generalizability of DL techniques, DL-IDS detection can be extended to zero-day intrusions that are almost impossible to detect with traditional IDS. Some studies [219, 237, 250] support this speculation. In their experiments, DL-IDS all achieve over $90\%$ detection accuracy, while traditional IDS sometimes reach only around $50\%$ . + +The IDS future is not only tough but also bright with the aid of DL: it is evident that the growth in the number of IDS papers primarily comes from those based on DL techniques. The proportion of DL-IDS papers rose from about $0\%$ in 2016 to a very high $65.7\%$ in 2024. This phenomenon reflects the strong interest and high expectations of the cybersecurity community in DL-IDS. To date, DL-IDS development has spanned nearly a decade, and thus it is time, and also essential, to revisit how DL and IDS interact, identify emerging trends, and guide future research directions. + +# 1.2 Related Surveys and Our Scope + +Unfortunately, none of the related surveys in the last decade have systematically investigated DL-IDS. On one hand, some related surveys focus only on a few parts of DL-IDS, such as log parsers [138, 188, 255], datasets [201], attack modeling [10, 201], and specific DL technique types [17]. On the other hand, while several surveys [21, 83, 96, 105, 127, 128, 140, 150, 162, 163, 270] involve some DL-based approaches, they did not review DL-IDS specifically from the perspective of DL. + +Partial Investigation for DL-IDS. The surveys [10, 138, 188, 201, 255] are typical examples of papers covering only a few parts of DL-IDS. Among them, Adel et al.
[10] mainly studied various techniques and solutions tailored to APT attacks, and discussed where APT detection frameworks could be made intelligent. Scott et al. [138] and Tejaswini et al. [188] both discussed online log parsers and their applications to anomaly detection. Branka et al. [201] reviewed APT datasets and their creation, along with feature engineering in attack modeling. Zhang et al. [255] created an exhaustive taxonomy of system log parsers and empirically analyzed the critical performance and operational features of 17 open-source log parsers. Tristan et al. [17] focused on the applications of graph neural networks (GNNs) to IDS. For DL-IDS, all the above surveys are clearly insufficient to advance research understanding or to provide theoretical guidance. + +Different Perspectives from DL-IDS. Another type of existing surveys involved DL-IDS but studied them from other perspectives [4, 21, 83, 96, 105, 127, 128, 140, 150, 162, 163, 270]. Specifically, the surveys [105, 128] aimed to give a detailed picture of IDS and comprehensively explained methods ranging from signature checking to anomaly detection algorithms. Starting from log data, the survey [83] presented a detailed overview of automated log analysis for reliability engineering and introduced three tasks: anomaly detection, failure prediction, and failure diagnosis. In survey [162], Nasir et al. explored the efficacy of swarm intelligence for IDS and highlighted the corresponding challenges in multi-objective IDS problems. + +Additionally, data types significantly shape the related surveys, whose categories include host-based IDS (HIDS) [21, 127, 140, 150, 270] and network-based IDS (NIDS) [4, 163]. Bridges et al. [21] focused on IDS leveraging host data for enterprise networks. Martins et al. [150] brought the HIDS concept to the Internet of Things.
As a representative form of data in HIDS, the provenance graph [127, 140, 270] and its reduction techniques [96] were also extensively studied in the survey literature. In NIDS, Nassar et al. [163] studied the techniques of network intrusion detection, especially those with machine learning (ML). Ahmad et al. [4] further incorporated ML and DL into their NIDS survey and studied the downstream learning methods in detail.

The above surveys, however, lack investigation and discussion of DL-IDS. DL techniques are only something they cover or involve, rather than the primary focus of their research.

Scope of Our Survey. Our work is distinguished from the related surveys by providing a comprehensive literature review of DL-IDS. From the perspective of DL, our survey elaborates on a common workflow of DL-IDS and introduces the corresponding taxonomies of all modules within this workflow. Moreover, our survey discusses the possible challenges and research visions for DL-IDS, which include many DL-related issues that have not yet been studied by the existing surveys.

# 1.3 Contributions and Organization

In summary, this survey makes the following contributions:

- Realizing that IDS has made significant progress with the aid of DL over the last decade, we present a thorough survey of DL-IDS, formalizing its definition and clarifying its location among other types of IDS.
- We outline the common workflow for DL-IDS, consisting of the data management stage and the intrusion detection stage. We further systematically illustrate the research advances in all modules of this workflow and innovatively taxonomize the papers based on DL techniques.
- From the perspective of DL, we discuss the potential challenges and future directions for DL-IDS, especially highlighting the ones unique to DL-IDS to orient current researchers.

Survey Structure. Section 2 introduces the survey methodology of this work. Section 3 describes the background knowledge about DL-IDS.
Section 4 and Section 5 elaborate on the recent research trends in the data management stage and the intrusion detection stage, respectively. Section 6 illustrates the benchmark datasets and their feature dimensions. Section 7 discusses the visions and challenges for future research. Lastly, the conclusion is presented in Section 8.

![](images/51f508f9743f58eee7775f97202b0c04cec2698458e605ca57003fe41af027ad.jpg)
Fig. 2. Source distribution of references.

![](images/c488b92b5c3650228849285903411373eee7c627918235cebb15b24e5f35b476.jpg)
Fig. 3. Types of IDS.

# 2 SURVEY METHODOLOGY

To start our literature review, we selected several popular literature databases, including Web of Science [12], IEEE Xplore [95], and Scopus [50], as search engines. For search keywords, we started from generalized terms associated with DL-IDS, such as intrusion detection system, attack investigation, anomaly detection, threat detection, Advanced Persistent Threats, data provenance analysis, forensic analysis, causality analysis, log collection, log compression, log parsing, log storage, and log summarization. Then, we employed Connected Papers [168], a visual tool that assists researchers in finding relevant academic papers, to ensure that we did not overlook typical related literature. Since the retrieved literature is numerous and rather broad for the DL-IDS scope, we carefully checked the topics and prioritized only highly related academic papers. Finally, all these papers were filtered based on the impact factors of their published journals or academic conferences, leaving us a total of 131 papers.

We identified a few venues that have published many significant papers in the field of DL-IDS, such as Usenix Security, S&P, CCS, NDSS, TIFS, TDSC, ICSE, ASE, ESEC/FSE, TSE, OSDI, NSDI, EuroSys, SOSP, ATC, ICML, KDD, WWW, TKDE, ICDE, and SCIS. We broadly divide them into five categories: security, software, system, data, and interdisciplinary.
The distribution of these papers over their publication years is reported in Figure 2.

# 3 BACKGROUND

# 3.1 Intrusion Detection System

3.1.1 Definition of IDS. IDS have long been a central topic in the cybersecurity community; research on them can be traced back to the 1990s [181] or even earlier. According to the existing literature [64, 128, 162, 163, 181, 236], IDS can be defined progressively as follows:

Definition 3.1. (Intrusion Detection System). An intrusion detection system is a software or hardware system that automates the process of intrusion detection.

Definition 3.2. (Intrusion Detection). Intrusion detection is the process of monitoring and analyzing the events occurring in a computer or a network for signs of intrusions.

Definition 3.3. (Intrusion). An intrusion is the attempt to undermine the confidentiality, integrity, and availability of a computer or a network, or to circumvent its security facilities.

3.1.2 Types of IDS. Generally, IDS can be further categorized into various types based on their data sources [270]. Well-known types include NIDS, HIDS, and Provenance-based IDS (PIDS). Figure 3 depicts IDS types, their data sources, and the location of DL-IDS within those IDS types.

Definition 3.4. (NIDS). NIDS are IDS whose data sources are network traffic between hosts.

NIDS takes network traffic between hosts as its input. It is usually deployed at the edge or key nodes of the network, allowing it to secure the whole computer system with limited data. Benefiting from its global perception of the whole computer system, NIDS does well in detecting large-scale multi-host intrusions such as Distributed Denial-of-Service (DDoS) attacks. However, NIDS performs poorly on intra-host intrusions and has difficulty analyzing intrusions carried in encrypted network traffic.

Definition 3.5. (HIDS). HIDS are IDS whose data sources are system events within hosts.

HIDS, in contrast, uncovers intrusions through system events of individual hosts.
Its data sources include file system changes, system calls, process activities, etc. HIDS can conduct comprehensive detection for a host and is not affected by encrypted data, since decryption is also performed on the host. Nevertheless, the deployment and maintenance of HIDS are relatively difficult. HIDS must be adapted to hosts with different operating systems and runtime environments, and it simultaneously introduces computation overhead on the hosts.

Definition 3.6. (PIDS). PIDS are HIDS whose data sources are data provenance.

Definition 3.7. (Data Provenance). Data provenance refers to the origin of an event and the processes it has undergone from its creation to its current state.

PIDS is a subtype of HIDS, particularly referring to HIDS that utilizes data provenance as its data source. Because it analyzes the intact trail of events, PIDS is proven effective in coping with advanced attacks [270]. By performing causality analysis on data provenance, PIDS can significantly reduce false alarms. Yet, data provenance is very expensive to obtain, requiring complicated technical tools for monitoring operating systems, network protocols, and applications.

Definition 3.8. (DL-IDS). DL-IDS are IDS that utilize DL techniques to detect intrusions, whose data sources can be network traffic between hosts, system events within hosts, or their combination.

Unlike other types of IDS such as NIDS and HIDS, which are categorized by their data sources, DL-IDS is defined by the techniques used in intrusion detection. As shown in Figure 3, the data source of DL-IDS can be network traffic, system events, or both. Taking advantage of the generalizability of DL techniques, DL-IDS can handle zero-day attacks precisely and has thus attracted intense interest from the cybersecurity community in recent years.

# 3.2 Common Workflow

Figure 4 depicts the common workflow of DL-IDS.
It usually consists of 7 steps: raw data, collection, storage, parsing, summarization, detection, and investigation, which are explained as follows:

![](images/7baacdfa9d3f131212e2cfa60a6a47974c5e8cc2cb426db45d3e0e1e40f66bc0.jpg)
Fig. 4. Common workflow of DL-IDS.

- Raw Data is unprocessed data for uncovering attack details or benign system behaviors. The raw data analyzed by cyber experts commonly includes network traffic and audit logs.
- Collection denotes the use of data collection tools on different systems, such as cloud and cross-platform environments, to gather valuable raw data describing important system behaviors.
- Storage involves storage and search engines that manage large amounts of collected log data. Log data is labeled with indexes for efficient retrieval.
- Parsing is the act of analyzing the stored logs and other useful data. It extracts and organizes the underlying information within the data for subsequent processing.
- Summarization refers to the operation of summarizing large volumes of parsed data based on its semantics. This reduces storage costs while preserving critical events.
- Detection is the process of using detection tools, such as models and algorithms, to identify anomalies in the analyzed data and determine whether it contains intrusions.
- Investigation is a further step after Detection. It reconstructs entire attack scenarios from the detected malicious data by analyzing the causal relationships among them.

Note that DL-IDS can also follow other step orders or skip some of the steps. For example, log data can be parsed before storage [135]. Attack investigation can be conducted directly without detection of intrusions [9]. This survey is organized according to the common workflow.

# 4 DATA MANAGEMENT

This section elaborates on the data management stage of DL-IDS, including data collection (Section 4.1), log storage (Section 4.2), and log parsing (Section 4.3).
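As a minimal illustration of how these data management stages fit together (all function names, the log format, and the masking rules below are hypothetical sketches, not drawn from any surveyed system), the collect-store-parse chain can be written as:

```python
import re
import zlib

# Hypothetical three-stage pipeline mirroring this section:
# collection -> storage (lossless compression) -> parsing (template extraction).

def collect(raw_lines):
    """Collection: keep only non-empty, stripped log records."""
    return [ln.strip() for ln in raw_lines if ln.strip()]

def store(lines):
    """Storage: dictionary-based lossless compression (DEFLATE builds on LZ77)."""
    return zlib.compress("\n".join(lines).encode())

def parse(line):
    """Parsing: mask variable tokens (decimal or hex numbers) into a template."""
    var = r"\b(0x[0-9a-f]+|\d+)\b"
    return re.sub(var, "<*>", line), re.findall(var, line)

logs = collect([
    "open pid 4021 file /etc/passwd",
    "open pid 97 file /etc/passwd",
])
compressed = store(logs)
assert zlib.decompress(compressed).decode().splitlines() == logs  # lossless
print(parse(logs[0]))  # ('open pid <*> file /etc/passwd', ['4021'])
```

Both example records parse to the same template, which is exactly the property that later template-based summarization and detection steps rely on.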
+ +# 4.1 Data Collection + +The first step of DL-IDS is to collect useful data from raw data. Raw data indicates records that document events, activities, and operations that occur within a system, application, or network (a.k.a., logs), represented by audit logs or application logs within hosts, or network traffic between hosts. By collecting useful logs, DL-IDS is allowed to monitor the health condition and operational status of information systems [141, 255]. Common attributes of logs include timestamp, event type, subject, object, description, etc. + +On different platforms, logs possess different formats and organizational structures [21, 127, 255, 270]. To counter this, researchers have created various log collection tools specialized for various systems. For example, in Windows systems, Event Viewer is employed to manage system logs. Yet in Linux systems, log files are usually saved in the /var/log/ directory. The classification of data collection tools is shown in Table 1, including Windows, Linux, Cloud, and Cross platforms. + +Table 1. Log collection tools on different platforms. + +
| Platform Type | Tool | Description |
| --- | --- | --- |
| Windows platform | ETW [153] | Providing developers comprehensive event tracing ability |
| Windows platform | Panorama [245] | Hardware-level and OS-aware dynamic taint tracking |
| Linux platform | auditd [68] | Native tool supported by the Linux kernel |
| Linux platform | sysdig [106] | Focusing on runtime monitoring and fault troubleshooting |
| Linux platform | CamFlow [170] | Self-contained, easily maintainable implementation |
| Linux platform | Tracee [210] | Exposing system information as events based on eBPF |
| Linux platform | DataTracker [200] | Monitoring unmodified binaries without their source codes |
| Linux platform | Inspector [206] | Parallel provenance library that is POSIX-compliant |
| Linux platform | AutoLog [94] | Analyzing programs so no need to run them |
| Linux platform | eAudit [193] | Fast, scalable and easily deployable data collection tool |
| Cloud platform | K8S tools [27, 87] | Adapting to cloud scenarios to meet enterprise needs |
| Cloud platform | saBPF [129] | An extension tool of eBPF for containers in cloud computing |
| Cloud platform | ISDC [158] | Eliminating overheads on in-network resources |
| Cross platform | DTrace [66] | Real-time tracing framework that supports many platforms |
| Cross platform | SPADE [61] | Novel provenance kernel for cross-platform logging |
4.1.1 Windows Platform Tools. Event Tracing for Windows (ETW) [153] is a powerful event tracing mechanism provided by Microsoft. It consists of three components: providers, controllers, and consumers. ETW instruments applications to provide kernel event logging and allows developers to start and stop event tracing sessions on demand. Panorama [245] exploits hardware-level and OS-aware dynamic taint tracking to collect logs. Moreover, it develops a series of automated tests to detect malware based on several kinds of anomalous behaviors.

4.1.2 Linux Platform Tools. auditd [68] is a native collection tool supported by the Linux kernel, which is responsible for writing audit logs to disk and monitoring a variety of auditable events such as system calls, file accesses, and modifications. sysdig [106] relies on a kernel module to achieve system monitoring and data collection. sysdig focuses on system runtime monitoring and fault troubleshooting, and is also widely used in containers and cloud-native environments. CamFlow [170] designs a self-contained, easily maintainable implementation of whole-system provenance based on the Linux Security Module, NetFilter, and other kernel facilities. Furthermore, it provides a mechanism to adapt the captured data provenance to applications and can be integrated across distributed systems. Tracee [210] takes advantage of the extended Berkeley Packet Filter (eBPF) framework to observe systems efficiently. It uses eBPF to tap into systems and expose that information as events. DataTracker [200] is an open-source data provenance collection tool using dynamic instrumentation. It is able to identify data provenance relations of unmodified binaries without access to or knowledge of their source codes. Inspector [206] is a Portable Operating System Interface (POSIX)-compliant data provenance library for shared-memory multi-threaded applications.
It is implemented as a parallel provenance algorithm on a concurrent provenance graph. AutoLog [94] generates runtime log sequences by analyzing source codes and does not need to execute any programs. It can efficiently produce log datasets (e.g., over 10,000 messages/min on Java projects) and has the flexibility to adapt to several scenarios. eAudit [193] is a scalable and easily deployable data collection tool. eAudit relies on the eBPF framework built into recent Linux versions, making it work out of the box on most Linux distributions.

4.1.3 Cloud Platform Tools. Although some collection tools for Windows and Linux platforms such as auditd [68], sysdig [106], and Tracee [210] can be applied in cloud computing environments, cloud-native scenarios introduce different challenges compared with Windows or Linux platforms. First, there are many different types of components in cloud platforms, such as containers, microservices, and Kubernetes (K8S) clusters, each of which generates its own logs with varying formats and contents. Additionally, components are characterized by dynamic expansion and contraction, making it hard to capture complete log data. To address these challenges, Chen et al. [27] design a cloud log collection architecture on the basis of K8S, which is a central platform based on cloud-native technology. Josef et al. [87] propose a log collection and analysis tool operated as Software as a Service (SaaS) in K8S-based cloud environments, aiming to provide comprehensive logs across all microservices. saBPF [129] is an extension tool of eBPF, aiming to deploy fully-configurable, high-fidelity, system-level audit mechanisms at the granularity of containers. saBPF is further developed with a proof-of-concept IDS and access control mechanism to demonstrate its practicability.
ISDC [158] is designed to eliminate the bottleneck between network infrastructure (where data is generated) and security application servers (where data is consumed). It prioritizes specific flows to effectively optimize resource consumption.

4.1.4 Cross-platform Tools. To effectively detect intrusions, an intuitive idea is to incorporate log data from various platforms to obtain a global view of the running system. DTrace [66] is a real-time dynamic tracing framework for troubleshooting kernel and application problems on production systems. It supports many platforms, including Linux, Windows, Solaris, macOS, FreeBSD, NetBSD, etc. Support for Provenance Auditing in Distributed Environments (SPADE) [61] develops a novel provenance kernel that mediates between the producers and consumers of provenance information and handles the persistent storage of records. It supports heterogeneous aggregation of system-level data provenance for data analysis across multiple platforms.

# 4.2 Log Storage

The step subsequent to log collection is storing the collected logs [11, 40]. We will introduce two essential components for data storage: log storage systems and compression algorithms for these systems.

4.2.1 Log Storage Systems. The two most commonly used log storage systems are ELK [5] and Loki [15]. ELK is a powerful log management solution consisting of three open-source software components: Elasticsearch [48], Logstash [47], and Kibana [49]. Elasticsearch [48] is a leading distributed, RESTful search and analytics engine designed for speed and scalability. Logstash [47] is a server-side data preprocessing pipeline that collects and integrates data from multiple sources. Kibana [49] is a data analytics and visualization platform designed for both speed and scale. ELK is powerful enough to be applied in enterprise scenarios; however, its performance comes at a price.
ELK sacrifices ease of configuration and installation, and may simultaneously introduce severe runtime overhead on its hosts. In contrast, Loki [15] is a lightweight logging system with low resource overhead developed by Grafana Labs. It is designed for simple operation and efficient storage. Instead of indexing the full content of log data as ELK does, Loki mainly creates indices based on log labels. Moreover, Loki is well suited for open-source monitoring and visualization tools such as Prometheus [174] and Grafana [112]. Integrating these two tools enables Loki to construct a complete monitoring and log analysis platform for information systems.

4.2.2 Log Compression Algorithms. Logs are generated rapidly and consume substantial storage. For example, it is measured that a browser can produce about 10 GB of log data each day [40]. Such oversized data should be compressed before storage. Log compression algorithms can be categorized into two types: general-purpose algorithms and those specifically adapted to log data.

General Compression Algorithms. General compression algorithms refer to algorithms that reduce the size of data (e.g., log data) by handling token-level or byte-level duplicates in the data. General compression algorithms can be classified into three categories based on their principles [242]:

Table 2. Well-acknowledged general compression algorithms for log data.
| Type | Well-acknowledged compression algorithm |
| --- | --- |
| Dictionary-based | LZ77 in gzip [55], LZMA in 7zip_lzma [171], and LZSS in quickLZ [177] |
| Sorting-based | BWT in bzip2 [194] and ST in szip [190] |
| Statistical-based | PPMD in 7zip_ppmd and DMC in ocamyd [191] |
+ +- Dictionary-based Compression: It records repeated data as keys and replaces these data with their corresponding keys. +- Sorting-based Compression: It sorts data to enable strategies that require ordering features. +- Statistical-based Compression: It exploits statistical techniques to learn and predict the possible next token for existing tokens. The data is thus compressed as a statistical model. + +Table 2 presents representative algorithms of the above three types. Due to the indeterminacy of statistical techniques, statistical-based compression algorithms may introduce losses in compression. Yet the other two types of algorithms are generally lossless. By validating 9 log files and 2 natural language files, a study [242] shows that some general compression algorithms can achieve high compression ratios for log data and log data is even easier to compress than natural language data. + +Tailored Compression Algorithms. Different from natural language data, log data usually has specific structures and formal expressions that help further compression. Yao et al. [243] propose LogBlock, which obtains small log blocks before compression and then uses a generic compressor to compress logs. Liu et al. [135] propose Logzip, which employs clustering algorithms to iteratively extract templates from raw logs and then obtain coherent intermediate representations for compressing logs. Rodrigues et al. [186] propose the lossless compression tool CLP, aiming to quickly retrieve log data while meeting compression requirements. CLP proposes to combine domain-specific compression and search with a generic lightweight compression algorithm. Li et al. [123] conduct empirical research on log data and propose LogShrink to overcome their observed limitations by leveraging the commonality and variability of log data. LogBlock [243] is designed to help existing jobs perform better. 
It reduces duplicate logs by preprocessing log headers and rearranging log contents, thereby improving the compression ratio of log files. LogReducer [247] is a framework that combines log hotspot identification and online dynamic log filtering. Its non-intrusive design significantly reduces log storage and runtime overhead. $\mu$Slope [217] is a compression and search method for semi-structured log data. It achieves efficient storage and query performance through data segmentation, pattern extraction, and an index-free design. Denum [249] significantly improves log compression rates by optimizing the compression of numeric tokens in logs. It is an efficient log compression tool suitable for scenarios where storage space or transmission bandwidth must be saved.

# 4.3 Log Parsing

Log data often originates from multiple different devices such as terminals, sensors, and network devices. To analyze such heterogeneous data, log parsers are employed to convert it into a structured, unified format. Log parsing is usually executed through data classification and template extraction. Data classification divides log data into several groups. Each group constitutes a template for extracting features from log data and constructing structured logs. As shown in Figure 5, the existing log parsers can be taxonomized into 3 categories: clustering-based, frequency-based, and heuristic-based parsers.

![](images/e247a0d348b36b7d21437e7121af02634601f140eb5eb301754a9955423acc68.jpg)
Fig. 5. Taxonomy of data parsing.

4.3.1 Clustering-based Parsing. Clustering-based parsers classify data using clustering algorithms for log parsing. Xiao et al. [226] propose LPV, which employs a hierarchical clustering algorithm to incrementally group logs based on Euclidean distance. Hamooni et al. [74] present a rapid log pattern recognition approach named LogMine. It is implemented in the map-reduce framework for distributed platforms to process millions of log messages in seconds.
LogCluster [130] reduces the number of logs that need to be manually checked and improves the accuracy of problem identification through log clustering and the use of knowledge bases. METING [32] provides a robust and efficient log parsing method through frequent n-gram mining and a flexible log grouping strategy, which can effectively process various types of log data.

4.3.2 Frequency-based Parsing. Frequency-based parsers discover patterns that exceed a frequency threshold and employ the mined patterns to parse logs. Sedki et al. [192] propose the log parsing tool ULP, which combines string matching and local frequency analysis to efficiently parse large log files. Dai et al. [35] propose Logram, which utilizes an n-gram dictionary for log parsing. For n-grams with a frequency below the threshold, Logram recursively falls back to (n-1)-grams until a list of uncommon 2-grams is obtained. To mitigate the parameter sensitivity issue in log parsers, Dai et al. [36] further propose an entropy-based log parser, PILAR, which balances parsing accuracy and efficiency. Xu et al. [229] propose a hybrid log parsing model called Hue, which performs parsing through user-adaptive methods. Prefix-Graph [30] is an efficient, adaptive, and universal log parsing method that can stably extract log templates without relying on domain knowledge and manual parameter tuning.

4.3.3 Heuristic-based Parsing. Heuristic-based parsers rely on empirical knowledge to classify log data. He et al. [82] propose the online log parsing method Drain, which employs a fixed-depth parsing tree to group the original logs and encodes them using specially designed parsing rules. Le et al. [114] propose a prompt-based few-shot learning algorithm, LogPPT, to capture log template patterns. Utilizing new prompt tuning methods and an adaptive random sampling algorithm, LogPPT performs well on multiple public datasets. Liu et al.
[137] propose the UniParser parser to address the difficulty of processing heterogeneous logs, using Token Encoder and Context Encoder modules to learn log context features. Spell [44] is an efficient streaming log parsing method that can dynamically extract log patterns in online processing and significantly improve processing efficiency through pre-filtering steps. Logan [3] achieves efficient and scalable log parsing through distributed processing, LCS matching, dynamic matching tolerance, and periodic merging. USTEP [214] is an online log parsing method based on an evolutionary tree structure that can discover and encode new parsing rules. It achieves constant parsing time and can efficiently parse raw log messages in a streaming manner.

# 5 INTRUSION DETECTION

The intrusion detection stage uncovers intrusions by relying on semantic-level information. This section classifies and summarizes the mainstream approaches to graph summarization (Section 5.1), attack detection (Section 5.2), and attack investigation (Section 5.3).

Table 3. Overview of graph summarization approaches.
| Mode | Approach | Release | Baseline | Requirement |
| --- | --- | --- | --- | --- |
| Offline | ProvCompress [228] | 2011 | No Summarization | None |
| Offline | BEEP [115] | 2013 | No Summarization | Instrumentation |
| Offline | LogGC [116] | 2013 | BEEP + No Summarization | Instrumentation |
| Offline | CPR + PCAR [234] | 2016 | No Summarization | None |
| Offline | FD + SD [89] | 2018 | CPR + PCAR | None |
| Offline | LogApprox [152] | 2020 | GC + CPR + DPR | None |
| Offline | TeRed [122] | 2025 | LogGC + CPR + PCAR + F-DPR + NodeMerge | None |
| Online | ProTracer [143] | 2016 | BEEP + No Summarization | Instrumentation |
| Online | NodeMerge [205] | 2018 | No Summarization | None |
| Online | Winnower [77] | 2018 | No Summarization | None |
| Online | GS + SS [267] | 2021 | FD + SD | None |
| Online | SEAL [53] | 2021 | FD | None |
| Online | FAuST [97] | 2022 | CPR + DPR | None |
| Online | AudiTrim [202] | 2024 | CPR + GS + F-DPR | None |

# 5.1 Graph Summarization

It is illustrated that stealthy malware will inevitably interact with the underlying OS and be captured by provenance monitoring systems [216], which is why PIDS (a form of DL-IDS) has worked and flourished recently. As mentioned, log data generated by provenance monitoring systems is referred to as data provenance. While offering high precision, data provenance sacrifices memory performance to record all trails of events from their creation to their current states, some of which are trivial. Unlike network traffic and application logs, data provenance is fine-grained, detailed, and rich in semantics. As a result, the token-level or byte-level log storage systems (Section 4.2.1) and log compression algorithms (Section 4.2.2), which lack semantic-level information, are insufficient to handle the memory efficiency of data provenance.

To this end, graph summarization is investigated to further reduce the size of log data semantically. In graph summarization, data provenance is transformed into a provenance graph, whose causal relations are utilized to build a semantic understanding of system activities. Referring to the definition of data provenance (Definition 3.7), the provenance graph is defined as follows:

Definition 5.1. (Provenance Graph). A provenance graph is a representation of a collection of data provenance with causal relations. It is a directed acyclic graph $G = \langle V, E \rangle$ where nodes $V$ are system entities and edges $E$ are system events.

Provenance graphs allow graph summarization approaches to reduce the size of log data by confidently removing irrelevant events, aggregating similar events, gathering similar execution entities, etc. This categorizes them as a type of lossy reduction, whereas the aforementioned log storage and compression are usually lossless (except for statistical-based log compression).
We note that some surveys (e.g., [96, 270]) may use graph summarization and log compression interchangeably to denote approaches that reduce the size of log data. In this work, we explicitly distinguish them, referring to lossless reduction as compression and lossy reduction as summarization. Table 3 presents an overview of graph summarization approaches. We classify them into two categories: offline graph summarization and online graph summarization.

5.1.1 Offline Graph Summarization. Offline graph summarization requires historical log data to provide global knowledge: it extracts log data from persistent storage, summarizes it, and pushes the summarized data back to persistent storage. In 2011, Xie et al. [228] took inspiration from web graphs to summarize provenance graphs. They argue that provenance graphs have organizational structures and characteristics similar to web graphs, such as locality, similarity, and consecutiveness. BEEP [115] is developed based on the fact that a long-running execution can be partitioned into individual units. BEEP reverse engineers application binaries and instruments them to perform selective logging of unit boundaries and unit dependencies. LogGC [116] is an audit log garbage collection system that can be invoked at any time during system execution. Xu et al. [234] propose an aggregation algorithm, CPR, that preserves event dependencies during log data reduction. They further propose an algorithm named PCAR that utilizes domain knowledge to conduct graph summarization. Hossain et al. [89] propose two dependency-preserving graph summarization approaches, FD and SD. FD preserves backward and forward forensic analysis results. SD preserves the results of common forensic analysis, which runs backward to find the entry points of intrusions and then runs forward from these points to unveil their impacts.
LogApprox [152] aims to summarize the most space-intensive events found in logs, namely file I/O activity, which can account for up to $90\%$ of the log content. TeRed [122] employs unit tests to learn the system's normal behavior patterns for reducing provenance graphs, allowing it not to impact attack detection and investigation. + +5.1.2 Online Graph Summarization. Online graph summarization performs real-time summarization for continually coming provenance graphs, rather than dealing with a static provenance graph. ProTracer [143] alternates between system event logging and unit-level taint propagation. It has a lightweight kernel module and user space daemon for concurrent, out-of-order event processing. NodeMerge [205] is a template-based graph summarization system for online event storage. It can directly work on the system-dependent provenance streams and compress data provenance via read-only file access patterns. Winnower [77] is an extensible audit-based cluster monitoring system. For tasks replicated across nodes in distributed applications, it can define a model over audit logs to concisely summarize the behaviors of multiple nodes, thus eliminating the necessity of transmitting redundant audit records to the central monitoring node. The approach proposed by Zhu et al. [267] includes two real-time graph summarization strategies. The first strategy maintains global semantics, which identifies and removes redundant events that do not affect global dependencies. The second strategy is based on suspicious semantics. SEAL [53] is a novel graph summarization approach for causal analysis. Based on information-theoretic observations of system event data, it achieves lossless compression and supports real-time historical event retrieval. FAuST [97] is a logging daemon that performs transparent and modular graph summarization directly on system endpoints. 
FAuST consists of modular parsers that parse different audit log formats to create a unified in-memory provenance graph representation. AudiTrim [202] is an efficient graph summarization approach that reduces log sizes without impacting user experience and allows adaptable deployment on different operating systems.

# 5.2 Attack Detection

Attack detection lies at the center of DL-IDS. Its objective is to accurately identify malicious system events in log data while minimizing false alarms on normal system behaviors. Based on the types of log data, we categorize attack detection approaches into audit log-based, application log-based, network traffic-based, and hybrid log-based detectors.

The overview and taxonomy of attack detection approaches are presented in Table 4. We note that many other academic papers on attack detection have also been published in recent years [25, 46, 78, 119, 156, 218, 224, 227, 248]. However, these papers are only loosely related to DL-IDS and are thus excluded from our survey for conciseness.

Table 4. Overview and taxonomy of attack detection approaches.
| Data Type | Taxonomy | Approach | Release Time | Base Model | Detection Style | Detection Granularity |
|---|---|---|---|---|---|---|
| Audit Log | Traditional Learning | StreamSpot [145] | 2018 | K-Medoids | Online | Subgraph |
| | | Unicorn [76] | 2020 | K-Medoids | Online | Node, Subgraph |
| | | DistDet [42] | 2023 | HST | Online | Subgraph |
| | | Velox [18] | 2025 | FCN | Online | Node |
| | Graph Neural Network | ShadeWatcher [250] | 2022 | TransR | Offline | Node |
| | | threaTrace [219] | 2022 | GraphSAGE | Online | Node |
| | | ProGrapher [237] | 2023 | graph2vec | Online | Subgraph |
| | | MAGIC [99] | 2024 | GAT | Online | Node, Subgraph |
| | | Flash [182] | 2024 | GraphSAGE | Online | Node |
| | | R-caid [65] | 2024 | GNN | Offline | Node |
| | | Argus [230] | 2024 | MPNN, GRU | - | Node |
| | | TAPAS [252] | 2025 | LSTM-GRU | Online | Task |
| Application Log | Traditional Learning | Wei et al. [231] | 2009 | PCA, TF-IDF | - | Log Entry |
| | | Bodik et al. [19] | 2010 | Logistic Regression | Online | Log Entry |
| | | AMOD [43] | 2018 | SVM HYBRID | Online | Log Entry |
| | Sequence Neural Network | DeepLog [45] | 2017 | LSTM | Online | Log Entry |
| | | LogRobust [257] | 2019 | Attention LSTM | - | Log Entry |
| | | LogAnomaly [151] | 2019 | template2vec, LSTM | Online | Log Entry |
| | | LogC [246] | 2020 | LSTM | Online | Log Entry |
| | | NeuralLog [113] | 2021 | BERT | - | Log Entry |
| | | PLELog [238] | 2021 | Attention GRU | Online | Log Entry |
| | | SpikeLog [175] | 2023 | DSNN | - | Log Entry |
| | | LogCraft [254] | 2024 | Meta Learning | - | Log Entry |
| | | Tweezers [33] | 2024 | GATv2, BERTweet | Online | Log Entry |
| | | LogSer [23] | 2024 | BERT | Online | Log Entry |
| | | LogDLR [265] | 2025 | Transformer, SBERT | Online | Log Entry |
| Traffic Log | Traditional Learning | NetPro [121] | 2017 | Merkle Hash Tree | Online | Route |
| | | CATH [72] | 2019 | Cusp Model | Online | Flow |
| | | Whisper [56] | 2021 | K-Means | - | Host |
| | | SigML++ [211] | 2023 | ANN | - | Encrypted Log |
| | | OADSD [253] | 2023 | Isolation Forest | Online | Packet |
| | | LtRFT [204] | 2023 | LambdaMART | Offline | Packet |
| | | AGC [225] | 2025 | Clustering | - | Packet |
| | Graph and Sequence Neural Network | Kitsune [159] | 2018 | AutoEncoder | Online | Packet |
| | | MT-FlowFormer [260] | 2022 | Transformer | - | Flow |
| | | I²RNN [199] | 2022 | I²RNN | - | Packet |
| | | ERNN [262] | 2022 | ERNN | - | Flow |
| | | Euler [108] | 2023 | GNN, RNN | - | Flow |
| | | pVoxel [58] | 2023 | - | - | Packet, Flow |
| | | NetVigil [91] | 2024 | E-GraphSage | - | Flow |
| | | Exosphere [57] | 2024 | CNN | - | Packet |
| | | DFNet [263] | 2024 | DFNet | - | Packet |
| | | RFH-HELAD [264] | 2024 | RPGAN, Deep kNN | - | Packet |
| | | ReTrial [259] | 2024 | Bayesian Inference | Online | Flow |
| | | HEN [221] | 2024 | AE-LSTM | - | Packet, Flow |
| | | TCG-IDS [222] | 2025 | TGN | Online | Flow |
| | | A-NIDS [251] | 2025 | Stacked CTGAN | Online | Flow |
| | | GTAE-IDS [62] | 2025 | Graph Transformer | Online | Packet, Flow |
| Hybrid | Hybrid | OWAD [75] | 2024 | Autoencoder | Online | Hybrid |
| | | FG-CIBGC [165] | 2025 | DisenGCN, ICL | - | Hybrid |
5.2.1 Audit Log-based Detectors. Audit logs are collected from hosts, and detectors based on them are thus generally referred to as HIDS. Audit logs provide fine-grained information through provenance graphs to depict system behaviors. Depending on the learning techniques, audit log-based detectors can be further classified into traditional learning and graph neural network approaches.

Traditional Learning. Traditional learning-based detectors are those that utilize classical machine learning techniques. StreamSpot [145] is a clustering-based anomaly detection approach that tackles the challenges of heterogeneity and streaming data. Unicorn [76] is a real-time intrusion detector that efficiently constructs a streaming histogram to represent the history of system executions. The counts within the histogram are updated immediately as new edges (or events) arrive. DistDet [42] is a distributed detection system that builds host models on the client side, filters false alarms based on their semantics, and derives global models to complement the host models. Velox [18] derives from Orthrus and replaces the complex TGN-based encoder with a simple fully-connected network (FCN), leading to a lightweight and efficient neural network.

Graph Neural Network. GNNs have been demonstrated to perform well in processing provenance graphs [99, 182, 219, 237, 250]. ProGrapher [237] extracts temporally ordered provenance graph snapshots from the ingested logs and applies whole-graph embedding and sequence-based learning to capture their rich structural properties. The key GNN technique leveraged by ProGrapher is graph2vec. ShadeWatcher [250] is a recommendation-guided intrusion detector using provenance graphs. It maps the recommendation concept of user-item interactions onto the security concept of system entity interactions and analyzes cyber threats in an automated and adaptive manner. threaTrace [219] is an online approach dedicated to detecting host-based threats at the node level.
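A heavily simplified sketch of the streaming-histogram idea behind Unicorn (our illustration, not its actual feature extraction): counts of local edge features are updated as events stream in, with exponential decay so the sketch favors recent system activity.

```python
class StreamingHistogram:
    """Count local graph features as edges stream in, with exponential
    decay so older activity gradually loses weight (Unicorn-inspired,
    heavily simplified)."""

    def __init__(self, decay=0.02):
        self.decay = decay
        self.counts = {}  # (src_type, edge_type, dst_type) -> decayed count

    def add_edge(self, src_type, edge_type, dst_type):
        for k in self.counts:          # age existing mass before adding
            self.counts[k] *= (1.0 - self.decay)
        key = (src_type, edge_type, dst_type)
        self.counts[key] = self.counts.get(key, 0.0) + 1.0

    def feature_vector(self, keys):
        """Fixed-order view of the histogram for a downstream classifier."""
        return [self.counts.get(k, 0.0) for k in keys]
```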
threaTrace's GNN model is a tailored GraphSAGE [73] for learning rich contextual information in provenance graphs. MAGIC [99] leverages the Graph Attention Network (GAT) [213] as its graph representation module. MAGIC employs masked graph representation learning to incorporate the capability of pretraining. It can adapt to concept drift with minimal computational overhead, making it applicable to real-world online APT detection. Flash [182] is a comprehensive and scalable approach on data provenance graphs that overcomes limitations in accuracy, practicality, and scalability. Flash incorporates a novel adaptation of a GNN-based contextual encoder to efficiently encode both local and global graph structures into node embeddings. R-caid [65] is the first to incorporate root cause analysis into PIDS. Before training GNNs, R-caid links nodes to their root causes to build a new graph, aiming to protect it against mimicry and evasion attacks. Argus [230] finds that the performance of prior IDS is questionable at large scale. It thus devises a form of discrete temporal graph and uses encoder-decoder unsupervised learning to detect different types of attacks. TAPAS [252] leverages a stacked LSTM-GRU model and a task-guided segmentation algorithm to reduce the spatiotemporal dimensions of APT detection, achieving efficient, low-cost, and accurate detection. In addition to the aforementioned detectors, researchers have recently developed numerous useful tools for better understanding audit logs, such as a data visualization analysis tool [133] and a counterfactual-driven attack explanation generator [223].

5.2.2 Application Log-based Detectors. Application logs are generated by installed binaries. Generally, application logs are in the form of natural language text, namely sequence data. It is thus common to introduce sequence-based DL techniques into application log-based DL-IDS.

Traditional Learning. For traditional learning, Wei et al.
[231] propose a general methodology to mine rich semantic information in console logs to detect large-scale system problems. Bodik et al. [19] apply a logistic regression model to a new and efficient representation of a datacenter's state, called a fingerprint, to detect previously seen performance crises in that datacenter. AMOD [43] uses the SVM HYBRID strategy to filter query annotations from web request logs and then updates the stacked generalization detection model to efficiently detect web code injection attacks, obtaining malicious queries to update the web application firewall (WAF) library.

Sequence Neural Network. Due to the similarity between application logs and natural language texts, sequence neural networks such as Recurrent Neural Networks [86] and Transformers [39, 212] are widely employed. DeepLog [45] employs an LSTM to model system logs as natural language sequences. It automatically learns benign log patterns and detects anomalies when log patterns deviate from the trained model. LogRobust [257] finds that previous methods do not work well under the closed-world assumption and utilizes an attention-based LSTM model to handle unstable log events and sequences. LogAnomaly [151] identifies that previous studies tend to raise false alarms by using indexes rather than the semantics of log templates. Empowered by a novel, simple yet effective method termed template2vec, LogAnomaly is shown to successfully detect both sequential and quantitative log anomalies simultaneously. LogC [246] is a log-based anomaly detection approach with component-aware analysis. It feeds both log template sequences and component sequences into a combined LSTM model to detect anomalous logs. NeuralLog [113] targets the performance degradation caused by log parsing errors, such as out-of-vocabulary words and semantic misunderstandings, and employs BERT to perform neural representation.
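The next-log-key formulation popularized by DeepLog can be illustrated without any DL framework. In the sketch below, a bigram count table stands in for the LSTM (our simplification for brevity); a log entry is flagged anomalous when its key is outside the model's top-k predictions for the preceding key.

```python
from collections import Counter, defaultdict

class NextKeyModel:
    """Top-k next-log-key prediction in the spirit of DeepLog, with a
    bigram count table standing in for the LSTM."""

    def __init__(self, k=2):
        self.k = k
        self.table = defaultdict(Counter)  # prev key -> Counter of next keys

    def fit(self, sequences):
        """Learn next-key statistics from benign log-key sequences."""
        for seq in sequences:
            for prev, nxt in zip(seq, seq[1:]):
                self.table[prev][nxt] += 1

    def is_anomalous(self, prev, actual):
        """Anomalous if `actual` is not among the k most likely next keys."""
        top = [key for key, _ in self.table[prev].most_common(self.k)]
        return actual not in top
```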
PLELog [238] is a semi-supervised anomaly detection approach that dispenses with time-consuming manual labeling while incorporating knowledge of historical anomalies. SpikeLog [175] adopts a weakly supervised approach to train an anomaly score model, with the objective of handling the more realistic scenario where a large number of logs are unlabeled. LogCraft [254] is an end-to-end unsupervised log anomaly detection framework based on automated machine learning, which mitigates the cost of understanding datasets and makes multiple attempts at building algorithms. Tweezers [33] uses a large language model to identify entities and build a relationship graph, and generates embeddings through graph attention network optimization to achieve security incident detection. LogSer [23] parses logs by preprocessing parameters, splitting logs, tree parsing, and template merging. It then feeds the relevant embeddings into BERT training to detect anomalies, generate reports, and perform incremental updates. LogDLR [265] uses SBERT embeddings and a Transformer autoencoder with domain adversarial training to learn domain-invariant features, detecting anomalies via reconstruction error.

5.2.3 Network Traffic-based Detectors. Network traffic comes from communications between hosts across a computer network. It is governed by network protocols such as the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) and can be utilized for intrusion detection. Detectors based on network traffic are generally termed NIDS.

Traditional Learning. Given that network traffic is usually encrypted for secure communication, feature engineering-guided machine learning is widely applied in NIDS. NetPro [121] employs traceability reasoning with Merkle Hash Trees and digital signatures to detect direct and indirect MANET routing attacks while preserving node privacy, and outputs a traceability graph to identify malicious nodes and behaviors.
CATH [72] is a catastrophe-theory-based approach for DoS detection in software-defined networks (SDNs), which leverages the selection, normalization, and fusion of statistical flow attributes to model network states. Whisper [56] pursues both high accuracy and high throughput by utilizing frequency-domain features. SigML++ [211] is an extension of SigML for supervised anomaly detection. SigML++ employs Fully Homomorphic Encryption and an Artificial Neural Network (ANN) for detection, enabling execution without decrypting the logs. OADSD [253] achieves task independence and is able to adapt to the environment over SD-WAN by using an On-demand Evolving Isolation Forest. LtRFT [204] innovatively introduces a Learning-To-Rank scheme for mitigating low-rate DDoS attacks targeted at flow tables. AGC [225] maps the original data into an embedding space through embedding learning to obtain more representative anchor points, thus achieving fine-grained classification of low-quality labeled data.

Graph and Sequence Neural Network. In network traffic, packets carry diverse contents and their flows can be represented as graphs. As a result, both graph neural networks and sequence neural networks are adopted in NIDS. Kitsune [159] is a plug-and-play NIDS that can detect attacks efficiently on the local network without supervision. It alleviates the problem that network gateways and router devices simply do not have the memory or processing power. MT-FlowFormer [260] is a semi-supervised framework that mitigates both the lack of a mechanism for modeling correlations between flows and the requirement for a large volume of manually labeled data. $\mathrm{I}^2\mathrm{RNN}$ [199] is an incremental and interpretable RNN for encrypted traffic classification, which can be efficiently adapted to incremental traffic types.
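The frequency-domain idea used by Whisper can be sketched with a naive DFT over a per-flow packet-size sequence (an illustration only; the real system's feature design and windowing differ). Periodic traffic, such as flooding or beaconing, concentrates energy in a few frequency bins even when payloads are encrypted.

```python
import cmath

def freq_features(sizes, n_bins=4):
    """Project a per-flow packet-size sequence into the frequency domain
    via a naive DFT and return the first `n_bins` normalized magnitudes."""
    n = len(sizes)
    mags = []
    for k in range(min(n_bins, n)):
        coeff = sum(s * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, s in enumerate(sizes))
        mags.append(abs(coeff) / n)   # bin 0 is the mean packet size
    return mags
```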
ERNN [262] stands for error-resilient RNN, a robust end-to-end RNN model specifically designed against network-induced phenomena. Euler [108] accelerates the most memory-intensive part, the message-passing stage within the GNN, with several concurrently executed replicated GNNs. pVoxel [58] is an unsupervised method that leverages point cloud analysis to reduce false positives of previous NIDS such as Whisper and Kitsune, without requiring any prior knowledge of the alarms. NetVigil [91] is specially designed for east-west traffic within data center networks. It utilizes E-GraphSage and contrastive learning techniques to strengthen its resilience. Exosphere [57] detects flooding attacks by analyzing packet length patterns, without inspecting any information in encrypted packets. DFNet [263] is a DDoS prevention paradigm characterized by preference-driven and in-network enforced shaping. RFH-HELAD [264] consists of a $K$-classification model based on a deep neural network and a $K+1$ classification combining a GAN and Deep kNN for detecting anomalies in network traffic. ReTrial [259] employs an improved graph attention network with Bayesian and EM algorithms to iteratively correct misleading links, enabling robust detection of encrypted malicious traffic. HEN [221] uses SMOTE to augment data, trains LightGBM, generates explanations via SHAP, trains an AE-LSTM to reconstruct the SHAP values, sets a threshold from training errors, and marks test traffic with excess errors as attacks. TCG-IDS [222] is the first self-supervised temporal contrastive GNN for network intrusion detection, capturing spatiotemporal traffic dependencies with high accuracy and low false alarms. A-NIDS [251] uses a shallow fully connected network for real-time detection and a Stacked CTGAN generator to address catastrophic forgetting and the storage costs of old data.
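Before any graph or sequence model is applied, NIDS of this family typically aggregate raw packets into flows and flows into a host-level graph. A minimal sketch of that preprocessing (our illustration, with an assumed packet tuple layout):

```python
from collections import defaultdict

def build_flow_graph(packets):
    """Aggregate packets into flows keyed by the 5-tuple, then expose a
    host-level graph whose edge weights are total bytes exchanged.
    `packets` is an iterable of (src, dst, sport, dport, proto, size)."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for src, dst, sport, dport, proto, size in packets:
        f = flows[(src, dst, sport, dport, proto)]
        f["packets"] += 1
        f["bytes"] += size
    edges = defaultdict(int)  # (src_host, dst_host) -> total bytes
    for (src, dst, *_), f in flows.items():
        edges[(src, dst)] += f["bytes"]
    return dict(flows), dict(edges)
```

A GNN-based detector would then embed hosts as nodes and flows as edge features; a sequence model would instead consume the per-flow packet series.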
GTAE-IDS [62] uses a graph autoencoder with a Transformer encoder and a DNN decoder to learn benign traffic, enabling label-free, near-real-time intrusion detection and new attack identification.

5.2.4 Hybrid Log-based Detectors. Based on the above discussions, a natural idea is to combine various types of log data to improve detection capability. OWAD [75] is a general framework to detect, explain, and adapt to normality shifts in practice. OWAD is validated to be effective at various detection granularities, covering provenance graphs, application logs, and network packets. FG-CIBGC [165] mines syncretic semantics in multi-source logs, including audit logs, application logs, and network traffic, using an LLM with in-context learning; it generates behavior graphs for comprehensive analysis.

# 5.3 Attack Investigation

Beyond identifying individual intrusive nodes, IDS are expected to uncover the full story of intrusions (a.k.a. attack scenario graphs). This process is referred to as attack investigation, which can be done by directly detecting attack scenario graphs [216], or by progressively analyzing the causal relations between compromised nodes to construct attack scenario graphs [9, 41, 100, 232]. Attack scenario graphs are defined together with scenario graphs as follows:

Table 5. Overview of attack investigation approaches.
| Taxonomy | Approach | Release Time | Audit Log | Application Log | Base Model | Starting Node | Investigation Granularity |
|---|---|---|---|---|---|---|---|
| Traditional Learning | ProvDetector [216] | 2020 | | | doc2vec | | Path |
| | BehaviorBaseline [269] | 2025 | | | FastText | | Path |
| Sequence Neural Network | ATLAS [9] | 2021 | | | LSTM | | Graph |
| | LogTracer [166] | 2022 | | | DeepLog | | Path |
| | ConLBS [118] | 2023 | | | Transformer | | Graph |
| | AirTag [41] | 2023 | | | BERT | | Graph |
| Graph Neural Network | Liu et al. [134] | 2022 | | | struc2vec | | Graph |
| | Kairos [29] | 2023 | | | GNN | | Graph |
| | TREC [139] | 2024 | | | GNN | | Graph |
| | Orthrus [100] | 2025 | | | UniMP | | Path |
| | Slot [176] | 2025 | | | GNN | | Graph |
| | FeCoGraph [146] | 2025 | | | GCN | | Graph |
Definition 5.2. (Scenario Graph). A scenario graph is a subgraph of a given provenance graph, constructed from the nodes and edges causally dependent on nodes of interest.

Definition 5.3. (Attack Scenario Graph). An attack scenario graph is a scenario graph whose nodes of interest are compromised nodes.

In the past, attack investigation was conducted by forward analysis and backward analysis [88]. Forward analysis discovers the influence that nodes of interest will cause, and backward analysis traces back how nodes of interest were generated. Benefiting from DL techniques, both forward and backward analysis can be achieved by learning the patterns of attack scenario graphs. Furthermore, visual analytics techniques have been widely used to assist security analysts in understanding the causal chain of intrusions [256, 261]. Table 5 summarizes the attack investigation approaches. Similar to Section 5.2, we exclude papers [6, 52, 60, 80, 88, 98, 111, 120, 142, 157, 218, 239, 268] only slightly relevant to DL for conciseness.

Traditional Learning. Unlike detecting intrusive nodes, attack scenario graphs are complicated and thus hard to handle with traditional learning methods. ProvDetector [216] utilizes doc2vec to learn embedding representations of paths in the provenance graph. A density-based detector is then deployed to detect abnormal causal paths in the provenance graph. BehaviorBaseline [269] presents a novel learning-based anomaly detection method for large-scale provenance graphs. It incorporates dynamic graph processing with adaptive encoding and a tag-propagation framework for real-time detection.

Sequence Neural Network. Log data is in the form of natural language text or can be transformed into sequences of events, which facilitates the introduction of sequence neural networks.
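The usual bridge from raw logs to such sequence models can be sketched as follows (an illustrative helper, not any particular system's pipeline): time-ordered records are grouped into one event-key sequence per entity, which a sequence model then consumes.

```python
from collections import defaultdict

def to_sequences(records):
    """Turn time-ordered log records into per-entity event sequences.
    `records` is an iterable of (timestamp, entity, event_key) tuples;
    the result maps each entity to its time-ordered list of event keys."""
    seqs = defaultdict(list)
    for _ts, entity, key in sorted(records):
        seqs[entity].append(key)
    return dict(seqs)
```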
ATLAS [9] is a framework to construct end-to-end attack stories from readily available audit logs, employing a novel combination of causal analysis and natural language processing. ATLAS exploits an LSTM to automatically learn the pattern differences between attack and non-attack sequences. LogTracer [166] is an efficient anomaly tracing framework that combines data provenance and system log detection. An outlier function with an abnormal decay rate is introduced to improve accuracy. ConLBS [118] combines a contrastive learning framework and a multilayer Transformer network for behavior sequence classification. AirTag [41] employs unsupervised learning to train BERT directly on log texts rather than relying on provenance graphs. AirTag constructs attack scenario graphs by integrating the detected victim nodes.

Graph Neural Network. To capture causal relations within graphs, GNNs are commonly adopted. Liu et al. [134] propose an automated attack detection and investigation method that learns the context semantics of the provenance graph. The provenance graph analyzed by struc2vec captures temporal and causal dependencies of system events. Kairos [29] is a practical intrusion detection and investigation tool based on whole-system provenance. Kairos utilizes a GNN to analyze system execution history, so as to detect and reconstruct complex APTs. It employs a GNN-based encoder-decoder architecture to learn the temporal evolution of provenance graph structure changes and quantify the abnormality of each system event. TREC [139] abstracts the APT attack investigation problem as a tactics/techniques recognition problem. TREC trains its model in a few-shot learning manner by adopting a Siamese neural network. Orthrus [100] identifies Quality of Attribution as the key factor in whether the industry adopts an IDS. It first detects malicious hosts using a GNN encoder and then reconstructs the attack path through dependency analysis.
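The dependency analysis underlying forward and backward tracing is essentially reachability over causal edges; a minimal sketch (our illustration, not any specific system's implementation):

```python
from collections import deque

def trace(edges, start, direction="backward"):
    """Classic dependency analysis on a provenance graph: trace backward
    from a node of interest toward candidate entry points, or forward
    to its impact. `edges` is a list of (src, dst) causal events."""
    nbrs = {}
    for src, dst in edges:
        a, b = (dst, src) if direction == "backward" else (src, dst)
        nbrs.setdefault(a, []).append(b)
    seen, queue = {start}, deque([start])
    while queue:                       # BFS over causal dependencies
        node = queue.popleft()
        for nxt in nbrs.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen
```

Running backward from a compromised process yields the candidate entry points; running forward from those points yields the attack's impact, and the union of both traversals forms the scenario graph.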
Slot [176], based on provenance graphs and graph reinforcement learning, uncovers hidden relationships among system behaviors, dynamically adapts to new activities and attack strategies, resists adversarial attacks, and automatically constructs attack chains. FeCoGraph [146] directly processes traffic embeddings through line graphs to adapt to various GNNs, covering more attack scenarios while protecting data privacy.

# 6 BENCHMARK DATASETS

DL-IDS relies on high-quality data to train effective models. This section introduces the dimensions of datasets (Section 6.1) and some public datasets widely used in DL-IDS (Section 6.2).

# 6.1 Dimensions of Datasets

To assess the quality of DL-IDS datasets, the following dimensions are generally used:

- Benign Scenarios: Benign data should cover benign behaviors and system activities to the greatest extent, enabling DL-IDS to learn patterns of benign behaviors so as to differentiate malicious behaviors.
- Malicious Scenarios: Malicious data ought to incorporate typical attack scenarios while accounting for the diversity of attacks, including short-term and long-term attacks, as well as simple and multi-stage attacks.
- Ground-truth Labels: Data should be labeled as benign or malicious. For multi-stage attacks, it is useful to indicate the attack type or the attack stage each record belongs to.
- Data Granularities: Datasets can come in different granularities. The most accepted practice is to provide raw log data. Due to copyright concerns, some replications [41, 99] merely provide post-processed log data without their processing source code.
- Operating Systems: The operating system determines the generalizability of the dataset. The more operating systems a dataset covers, and the more common they are, the more comprehensively it can evaluate PIDS performance.

# 6.2 Public Datasets

Publicly available datasets bring a lot of convenience to research on DL-IDS.
However, some researchers use self-made datasets that are not publicly available, making it difficult for others to reuse them [46]. To address this issue, we collect and organize some open-source datasets for further studies, listed in Table 6.

LANL Dataset [103] is collected within Los Alamos National Laboratory's corporate internal computer network. The dataset consists of 58 consecutive days of de-identified data, covering about 165 million events from 12 thousand users. Its data sources include Windows-based

Table 6. Overview of public datasets. W, L, F, A, M, and S represent the operating systems of Windows, Linux, FreeBSD, Android, Mac, and supercomputer, respectively.
| Dataset | Release | Size | Scenarios | Label | Format | System |
|---|---|---|---|---|---|---|
| LANL Dataset [103] | 2015 | 12 GB | - | Yes | .txt | W |
| StreamSpot [145] | 2016 | 2 GB | 1 | Yes | .tsv | L |
| AWSCTD [22] | 2018 | 39 GB | - | No | SQLite | W |
| DARPA TC E3 [38] | 2018 | 366 GB [67] | 6 | No | CDM | W, L, F, A |
| DARPA TC E5 [38] | 2019 | 2,699 GB [67] | 8 | No | CDM | W, L, F, A |
| DARPA OpTC [37] | 2020 | 1,100 GB [13] | - | No | eCAR | W |
| Unicorn SC [76] | 2020 | 147 GB | 2 | Yes | CDM | L |
| CERT Dataset [63, 131] | 2020 | 87 GB | - | Yes | .csv | W |
| LogChunks [20] | 2020 | 24.1 MB | - | Yes | .txt | - |
| Loghub [266] | 2020 | 77 GB | - | - | .txt | W, L, M, S |
| ATLAS [9] | 2021 | 0.5 GB | 10 | Yes | .txt | W |
| ATLASv2 [184] | 2023 | 12 | 10 | Yes | .txt | W |
| ProvSec [197] | 2023 | - | 11 | Yes | .json | L |
| AutoLabel [173] | 2025 | 136 GB | 29 | Yes | .json | L |
authentication events, process start and stop events, DNS lookups, network flows, and a set of well-defined red teaming events.

StreamSpot dataset [145] comprises 1 attack and 5 benign scenarios. The attack scenario exploits a Flash vulnerability and gains root access to the visiting host through a malicious drive-by download URL. The benign scenarios correspond to normal browsing activity, specifically watching YouTube, browsing news pages, checking Gmail, downloading files, and playing a video game. All scenarios are simulated through 100 automated tasks with Selenium RC [208].

DARPA TC datasets [38] come from the DARPA Transparent Computing (TC) program and are identified by their engagement number, from E1 to E5. Among them, DARPA TC E3 is the most widely used. The TC program aims to make current computing systems transparent by providing high-fidelity visibility into system operations across all layers of software abstraction. Unfortunately, the DARPA TC datasets are released without labels, and DARPA makes no warranties as to the correctness, accuracy, or usefulness of the datasets.

DARPA Operationally Transparent Cyber (OpTC) [37] is a technology transition pilot study funded under Boston Fusion Corporation. The OpTC system architecture is based on the one used in the TC program evaluation. In OpTC, every Windows 10 endpoint is equipped with an endpoint sensor that monitors host events, packs them into JSON records, and sends them to Kafka. A translation server aggregates the data into eCAR format and pushes it back to Kafka. OpTC scales the TC components from 2 to 1,000 hosts. The dataset consists of approximately 1 TB of compressed JSON data collected in a highly instrumented environment over two weeks.

Unicorn SC [76] is a dataset specifically designed for APT detection, proposed by Han et al., the authors of the Unicorn model.
The dataset includes two supply chain scenarios, wget and shellshock, where each scenario lasts for 3 days to simulate the long-term nature of APT attacks, resulting in provenance data containing 125 benign behaviors and 25 malicious behaviors. The data is saved in the form of provenance graphs, describing the causal relationships during system execution.

CERT Dataset [131] is a collection of synthetic insider threat test datasets that provide both background and malicious-actor synthetic data. It is developed by the CERT Division, in collaboration with ExactData, LLC, under sponsorship from DARPA I2O. The CERT dataset embodies important lessons about the benefits and limitations of synthetic data in the cybersecurity domain and carefully discusses models of realism for synthetic data.

LogChunks [20] is an application log dataset for build log analysis, containing 797 annotated Travis CI build logs from 80 GitHub repositories and 29 programming languages. These logs come from mature and popular projects, collected through repository, build, and log sampling. Each log in the dataset has manually labeled text blocks describing build failure reasons, search keywords, and structural categories, cross-validated with the original developers at an accuracy of $94.4\%$.

Loghub dataset [266] is a large collection of system log datasets, providing 19 real-world logs from various software systems, including distributed systems, supercomputers, operating systems, mobile systems, server applications, and standalone software. The objective of Loghub is to fill the significant gap between intelligent automated log analysis techniques and successful deployments in industry. Among the usage scenarios of Loghub, about $35\%$ are anomaly detection, $13\%$ are log analysis, and $8\%$ are security.
ATLAS dataset [9] implements 10 attacks based on detailed reports of real-world APT campaigns and generates audit logs in a controlled testbed environment. Among the ten attacks, four involve a single host and the remaining six involve multiple hosts. All attacks were developed and executed on Windows 7 32-bit virtual machines and took an hour to complete, along with a 24-hour window of audit logs for benign system behaviors.

ATLASv2 dataset [184] enriches the ATLAS dataset with higher-quality background noise and additional logging vantage points. In this dataset, two researchers used the victim machines as their primary workstations throughout the course of the engagement, instead of depending on automated scripts to generate activity. System logging, in contrast, covers a five-day period, where the first four days simulate normal work days and the fifth day begins with benign activity and then transitions into execution of the corresponding attack.

ProvSec dataset [197] is created for system provenance forensic analysis. To fulfill data provenance requirements, ProvSec includes the full details of system calls, including their parameters. In ProvSec, 11 realistic attack scenarios with real software vulnerabilities and exploits are used, and an algorithm to improve data quality in system provenance forensic analysis is presented.

AutoLabel dataset [173] automates fine-grained log labeling by reducing the labeling problem to obtaining an accurate attack subgraph in a provenance graph. Its experiments comprise 29 scenarios, including 25 real CVE vulnerabilities across 12 widely used applications (spanning 5 programming languages), plus a Sandworm threat simulation by MITRE CTID.
# 7 CHALLENGES AND FUTURE DIRECTIONS

After the detailed introduction to the data management stage and the intrusion detection stage, as well as the widely used benchmark datasets, this section discusses the challenges encountered by existing DL-IDS and summarizes the corresponding visions. These include fundamental resources (Section 7.1), pre-trained large models (Section 7.2), and comprehensive applications (Section 7.3).

# 7.1 Fundamental Resources

Effective DL-IDS depends heavily on core fundamental resources, such as datasets and computing facilities, to develop [105]. Here, we discuss their challenges one after the other.

7.1.1 Poor Data Quality. Existing datasets for DL-IDS may contain errors, inaccuracies, or missing values. This leads to unreliable descriptions of system behaviors that may mislead DL-IDS. For example, in some cases of the DARPA TC dataset, the PROCESS object and its source fail to properly resolve conflicts, resulting in possibly incorrect transformations. Besides, the acuity_level value of the FLOW object is 0, while the value range for this field in other objects is from 1 to 5. Another example could be the LogChunks [20] dataset. In this dataset, the content describing failure reasons is possibly incomplete, because a chunk in LogChunks contains only a contiguous substring of the log text, while a failure reason may be described across multiple sections of the log. Moreover, LogChunks neglects the classification of failure reasons, such as test, compilation, and code inspection errors, which hinders further research on analyzing failure reasons.

Meanwhile, high-quality ground-truth labels are hard to acquire, impeded by the tension between fine-grained manual labeling and automated label generation.
On one hand, for unknown intrusions such as zero-day attacks, it is very labor-intensive for security analysts to map each attack scenario to specific log entries, even when coarse-grained attack scenarios have been acquired. The DARPA TC dataset [38] is a typical example: it only provides a ground-truth report of attack scenarios, which does not correspond to any specific log entries. Although a few researchers [219] provide third-party ground-truth labels that they identified manually, we empirically find some ambiguities between their ground-truth labels and the official attack scenario report. These ambiguities have an obviously negative effect on DL-IDS, and to some extent, they may even cause the accumulation of errors. On the other hand, the development of automated labeling tools is in an awkward position. Such tools generate labeled log data based on given prior knowledge of intrusions [28], whereas the challenge for DL-IDS is to detect zero-day intrusions. This renders the development of such automated tools somewhat pointless.

In addition, there are no unified and effective evaluation metrics for DL-IDS [29], which further weakens the potential of datasets. For example, precision, recall, and F1 score are used in most studies [9, 99, 182, 216], while some papers [41] propose to use the True Positive Rate (TPR) and False Positive Rate (FPR) as evaluation metrics. This makes comparison experiments often unfair and makes it hard to tell whether a validation is convincing. We also note that in many cases where the percentage of negatives (or malicious log entries) is low, sacrificing FPR can always significantly increase TPR. For example, sacrificing 1,000 false positives for one true positive might only increase FPR by $0.05\%$, but would increase TPR by $5\%$.

7.1.2 Insufficient Amount of Data.
Although log data is generated very quickly (e.g., eBay was generating 1.2 PB of log data per day by 2018 [189]), DL-IDS still face a shortage of usable data. Setting aside the data quality issues discussed above, the reasons are three-fold:

First, log data contains an extremely large number of trivial events, which are proven ineffective and usually removed by graph summarization [237, 250]. For example, data provenance provides fine-grained information about memory-related events, such as data-to-memory mapping and the protection of certain memory addresses. These memory-related events rarely involve attacks and, unfortunately, are largely orthogonal to existing DL-IDS. However, to satisfy the completeness requirement of data provenance and to capture very infrequent but inevitable memory attacks, these memory-related events are still recorded in benchmark datasets. As a result, the usable part of each dataset is rather small for DL-IDS, as reflected by the high summarization ratios achieved by graph summarization approaches (e.g., $70\%$ [234]).

The second reason for the insufficient amount of data is limited dataset representativeness. As observed in Table 6, most datasets contain no more than 10 attack scenarios, not to mention that each of these scenarios has been carefully chosen by the dataset authors. Such a limited number of attack scenarios suggests that existing datasets can hardly represent the diversity of attack methods, as the number of CVE records has already exceeded 280,000 [31]. Furthermore, existing datasets such as DARPA TC E3 [38] are collected in specific experimental environments and may not cover other types of normal system behaviors; it has even been shown that they contain a significant amount of synthetic data [133]. DARPA TC E5 [38] is unusable for most experiments due to its sparse and error-filled documentation.
Unicorn SC [76] is generated by an idealized simulation of supply chain scenarios, which means many real-world features are likely missing from this dataset. Hence, training DL-IDS on these non-representative datasets could be a disaster for the computer systems that they protect.

Finally, the accessibility of datasets further exacerbates the insufficient data problem. Due to privacy and copyright issues, some datasets may be proprietary or difficult to obtain [216, 218]. For example, ProvDetector [216] conducted a three-month system evaluation in an enterprise environment with 306 hosts and collected benign provenance data for 23 target programs. Yet this dataset has not been made public, so it cannot be used to improve other DL-IDS, and almost all assessment settings related to ProvDetector are susceptible to inequity.

7.1.3 Potential Heavy Computation Requirements. Similar to other DL techniques, DL-IDS also require a potentially large amount of computing resources to improve their performance. According to [185], the generalizability of neural models is proportional to the investment of computing resources. Supposing that the challenge of insufficient data is mitigated and a large volume of log data becomes available, more computing resources are inevitably required. Besides, we will illustrate in Section 7.2 that there are plenty of powerful techniques that have not yet been introduced into DL-IDS, which will also bring in computation requirements. Unfortunately, acceleration methods like parallel computation and efficient retrieval have not been fully explored by the cybersecurity community. For example, the computation time of Unicorn running on a single core is shown to grow linearly with its workload [76]; since Unicorn is not implemented in parallel, its efficiency will bottleneck on that core.

7.1.4 Future Directions.
To conclude, the challenges for DL-IDS in fundamental resources concern data quality, data volume, and computational overhead. Apart from unintentional errors and nontechnical issues in fundamental resources, the research questions that urgently need to be addressed include the contradiction between unaffordable manual labeling and non-generalizable auto-labeling techniques, non-unified benchmark datasets and evaluation metrics, as well as potentially heavy computational overheads. Therefore, we summarize the future directions as follows:

# Future Directions

- Developing efficient human-machine interactive log labeling mechanisms, and organizing open-source data-sharing platforms accordingly to provide large amounts of high-quality datasets.
- Maintaining effective and comprehensive benchmark datasets, accompanied by a unified performance metric framework for fair comparison.
- Investigating parallel or simplified strategies for DL-IDS, and studying their integration with log storage systems to achieve end-to-end acceleration.

# 7.2 Pre-training Theories and Techniques

In recent years, Large Language Models (LLMs) have made significant progress in the field of DL. Their capacity to understand and generate dialogue has been greatly enhanced as their model parameters keep rising. T5 [179], BERT [39], GPT [178], GPT-4 [2], LaMDA [207], and LLaMA [209] are notable examples.

With the development of pre-training techniques, LLMs have been adopted in many fields such as finance [258], education [164], medicine [172], and even other domains of cybersecurity [34, 69, 92]. In contrast, the adoption of LLMs in DL-IDS is stagnant, as shown in Figure 6. We can observe that LLMs have developed at full speed since 2019; their prosperity, however, has not extended to DL-IDS.
![](images/2b523136b335e2c501d72edce3212459da5c2cf2b38df4681b670950b0f1a8f2.jpg)
Fig. 6. Interactions between DL models and DL-IDS. While DL models proposed before 2019 have already been leveraged in DL-IDS, the LLMs (i.e., pre-training theories and techniques) emerging since 2020 remain underdeveloped in this domain.

Until now, the only two DL-IDS that incorporate pre-training techniques, AirTag [41] and MAGIC [99], still do not make full use of the potential of LLMs. AirTag pre-trains a BERT model on application logs and detects intrusions based on the embeddings generated by BERT. MAGIC introduces GraphMAE [90], a model architecture derived from the Graph Autoencoder [109] of 2016 but integrated with the well-known masked self-supervised learning method [81] of 2022, to conduct self-supervised learning on provenance graphs. MAGIC further designs an adapter to apply the pre-trained model in different detection scenarios. Nevertheless, both AirTag and MAGIC can be regarded as preliminary explorations of pre-training techniques. According to the scaling law [102], the performance of LLMs steadily improves as parameters, data, and computation increase, and the reasoning abilities of LLMs suddenly emerge [220], allowing them to converse with humans smoothly. Such advantageous abilities have obviously not yet been incorporated into DL-IDS.

Nowadays, some researchers [7, 59, 125, 160] have started to explore the application of LLMs to DL-IDS. Yet the theories and techniques for such a combination remain open challenges. In the following, we illustrate the identified issues and then point out the future directions.

7.2.1 Trade-off between Reliability and Generalizability. The governing concern for the employment of LLMs in DL-IDS is reliability (or explainability). Although offering generalizability, LLMs have long been criticized for hallucinations [149, 241], privacy issues [84, 240, 244], overreliance [107], and backdoor threats [136].
These unexplainable and uncontrollable features are an absolute disaster for DL-IDS. For example, when fed log data, LLMs are sometimes prone to hallucinate and provide wrong detection results. Attacks can thus bypass the detection facilities and exfiltrate sensitive data from the victim computer systems. Another example is that sensitive information may leak from LLMs. Hui et al. [93] present a prompt leakage attack on LLMs, which is demonstrated to be effective in both offline settings and real-world LLM applications.

7.2.2 Short of Statistical Log Modeling. LLMs are developed on the basis of statistical language modeling [101, 187], which has not been sufficiently studied for log data. The statistical modeling of natural language can be traced back to the early 1950s, when Shannon pioneered the technique of predicting the next element of natural language text [195] and discussed the n-gram model for

Table 7. Comparison of research advances in statistical modeling of various data. "NL", "PL" and "FL" represent Natural Language, Programming Language, and Formal Language, respectively. Note that PL is a type of FL.
| Data | Form | Content Generation Rules | Statistical Modeling Studies | Pre-training |
| --- | --- | --- | --- | --- |
| Text | NL | Grammar, pragmatics, semantics, etc. | [101, 148, 187, 196] | well-done |
| Speech | NL | Text rules (see above) and phonetics | [104, 167] | well-done |
| Source code | PL | Lexical and syntactic definitions | [8, 85, 180] | well-done |
| Log | NL + FL | Log templates defined by developers | future work | underdeveloped |
English [196]. After that, as machine learning came into the view of the NLP research communities, language modeling flourished, and many models such as TreeBank [148], word2vec [154, 155] and LSTM [86] were proposed. Over the decades, researchers in NLP have gained solid knowledge of language modeling, and their interests have gradually shifted to efficiency. An epoch-making model, the Transformer [212], was presented, using the multi-head self-attention mechanism to enable parallel computation; it was afterward widely exploited in popular pre-trained models such as BERT [39] and GPT [2]. It is evident that the success of LLMs stems from these prolonged studies on statistical language modeling.

Unfortunately, there are almost no research efforts on the statistical modeling of log data, leaving the pre-training techniques of DL-IDS underdeveloped. By contrast, as illustrated in Table 7, statistical modeling studies of other types of data have long since started. Hindle et al. [85] demonstrate that source code is very repetitive and predictable, in fact even more so than natural language. Driven by this statistical modeling conclusion, DL-based source code applications [54, 70, 124, 126, 203, 233, 235] such as code generation and code clone detection flourish, many of which have already become common applications of LLMs. Similar cases can be found for speech data, whose applications include text-to-speech [71, 169, 183] and speech recognition [14].

We argue that log data is also created by humans, similar to text, speech, and source code. It is generated according to developer-defined log templates, taking the form of both natural language (e.g., application logs) and formal language (e.g., data provenance in CDM format).
Given that both natural language (e.g., text and speech) and formal language (e.g., source code) exhibit positive pre-training performance, log data urgently demands statistical modeling achievements to facilitate its pre-training research. Although several works [96, 152] have discussed the features of log data, they are orthogonal to the explainable combination of DL and IDS. Compared with the other data types, a particular challenge in statistical log modeling is that logs are kept extremely long and detailed for reliability purposes. It is very common for a single log entry to be as long as a paragraph of natural language text. These challenges happen to coincide with the shortcomings of LLMs - the inability to handle long text and the lack of trustworthiness in generated content.

7.2.3 Future Directions. According to the scaling laws [102] and the emergent abilities theory [220], as model size continues to grow, the performance of DL-IDS will increase simultaneously. Thus, increasing the number of model parameters will be an inevitable trend for DL-IDS. The underlying research questions include the strategies for incorporating existing LLMs into intrusion detection, since it is infeasible to directly leverage unreliable LLMs to detect intrusions, and the theories and techniques for modeling long and detailed log data. We summarize the future directions as follows:

# Future Directions

- Investigating how and where to introduce LLMs into DL-IDS like [165], with the objective of balancing the generalizability provided by LLMs and the reliability required by DL-IDS.
- Exploring fundamental statistical modeling theories for log data. On this basis, designing pre-training frameworks for log data and its downstream tasks such as steps within the workflow of DL-IDS (see Section 3.2).
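The statistical regularity that such modeling could exploit is easy to demonstrate. The following toy sketch is our own illustration, not taken from any surveyed work: the log lines, the `<NUM>` field abstraction, and the thresholds are all hypothetical. It trains an add-one-smoothed bigram model on template-generated log lines and shows that a template-conforming line is far more predictable than a scrambled one:

```python
import math
from collections import Counter, defaultdict

# Hypothetical template-generated logs (synthetic, for illustration only).
train_logs = [f"accepted connection from 10.0.0.{i} port {5000 + i}" for i in range(50)]
train_logs += [f"closed connection from 10.0.0.{i} port {5000 + i}" for i in range(50)]

def tokenize(line):
    # Abstract variable fields so the model sees the template structure.
    return ["<NUM>" if t.replace(".", "").isdigit() else t for t in line.split()]

# Count bigram transitions over all training lines.
bigrams = defaultdict(Counter)
vocab = set()
for line in train_logs:
    tokens = ["<s>"] + tokenize(line) + ["</s>"]
    vocab.update(tokens)
    for a, b in zip(tokens, tokens[1:]):
        bigrams[a][b] += 1

def perplexity(line):
    # Add-one smoothed bigram perplexity: low means "matches a known template".
    tokens = ["<s>"] + tokenize(line) + ["</s>"]
    log_prob = 0.0
    for a, b in zip(tokens, tokens[1:]):
        numerator = bigrams[a][b] + 1
        denominator = sum(bigrams[a].values()) + len(vocab)
        log_prob += math.log(numerator / denominator)
    return math.exp(-log_prob / (len(tokens) - 1))

normal = perplexity("accepted connection from 10.0.0.99 port 6000")
scrambled = perplexity("port accepted 10.0.0.99 connection closed from")
print(normal < scrambled)  # True: the template-conforming line is far more predictable
```

A serious statistical log model would of course have to handle huge vocabularies, nested templates, and formal-language fields, but even this tiny model cleanly separates in-template lines from out-of-template ones, which is exactly the kind of regularity a pre-training framework for log data could build on.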
# 7.3 Comprehensive Applications and Scenarios

DL-IDS possess abilities that traditional IDS lack or find difficult to realize, such as generalizability against zero-day attacks and the capacity to model complicated downstream tasks. We elaborate on possible new-style applications and discuss the challenges within, and introduced by, them.

7.3.1 Limited Forward and Backward Tracing Scope. Forward tracing and backward tracing are employed in attack investigation, as illustrated in Section 5.3. Under traditional settings, forward tracing analyzes the influence a symptom node would have on the victim computer system, and backward tracing discovers the starting node where the vulnerabilities exist [270].

We argue that the existing tracing scope is too limited to handle increasingly complicated intrusions, and that DL-IDS can be defined more broadly. In addition to investigating the scenario graphs of intrusions, DL-IDS are supposed to further investigate why these intrusions occur and how to hold them back. The broader definition introduces more downstream tasks that would be difficult to accomplish without the assistance of DL techniques. Based on Definition 3.3, we reformulate the definition of intrusion in a broad sense for DL-IDS as follows:

Definition 7.1. (Generalized Intrusion). A generalized intrusion is a malicious attempt against a computer, a network, or the corresponding security facilities, whose attributes encompass not only the intrusion itself but also its underlying root causes and the relevant control measures.

In this way, the detection scope of DL-IDS is extended to these broadly defined intrusions, including the attributes of both root causes and control measures. When executing backward tracing analysis, DL-IDS are required not only to detect the starting symptom nodes of intrusions, but also to find the root causes of these symptom nodes (i.e., vulnerabilities in source code).
In forward tracing analysis, besides detecting the symptom nodes affected by intrusions, DL-IDS should perform an in-depth analysis to discover potentially compromised nodes and provide control measures for handling the intrusions.

Thankfully, several pioneering works have studied similar problems [25, 144]. In AiVl [25], algorithms that bridge log entries and program models are developed using combined dynamic-static program analysis, so that the root causes of exploited vulnerabilities can be derived directly from intrusion detection. Pedro et al. [144] investigate detection and mitigation methods for DDoS attacks, aiming to control them immediately. Additionally, semi-automated adaptive network defense (SAND) [26] leverages SDN to dynamically generate and deploy defense rules. We note that these research attempts are all based on heuristics, either using pre-defined rules to generate root causes or developing control measures for specific intrusions. Thus, there is a substantial need to introduce advanced DL techniques to this problem.

7.3.2 Concerns about Data-driven Adversarial Attacks. To validate detection performance, DL-IDS commonly idealize the experimental data in their threat models. Such idealization, however, leaves DL-IDS with weaknesses that could be exploited by invaders. For example, a common assumption is that no attacks compromise the security of the log collection systems [76, 79, 99, 182], namely that the log data utilized in DL-IDS is absolutely harmless. But as attacks become stealthier and more complicated, this assumption apparently can no longer be satisfied. When DL-IDS encounter intentional data poisoning attacks, prediction backdoors could easily be planted as persistent vulnerabilities.

The robustness of DL-IDS is also challenged by data-driven evasion attacks. To evade detection, malicious behaviors usually mimic benign ones (a.k.a. mimicry attacks), making them hard to detect.
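To make the evasion intuition concrete, consider a minimal sketch of our own (the detector, event names, and threshold are all hypothetical, not a model from the cited works): a detector that flags a trace whose fraction of rare events exceeds a threshold can be evaded simply by padding the malicious trace with benign events:

```python
# Hypothetical detector and event vocabulary, for illustration only.
BENIGN_EVENTS = {"read", "write", "open", "close"}

def anomaly_score(events):
    # Fraction of events outside the benign vocabulary.
    rare = sum(1 for e in events if e not in BENIGN_EVENTS)
    return rare / len(events)

def detect(events, threshold=0.2):
    # Flag the trace as malicious if too many events look rare.
    return anomaly_score(events) > threshold

attack = ["open", "ptrace", "mprotect", "write"]          # 2 of 4 events are rare
padded = attack + ["read", "write", "open", "close"] * 2  # same payload, diluted to 2 of 12

print(detect(attack))  # True: the raw attack trips the threshold
print(detect(padded))  # False: benign padding evades the same detector
```

The padded trace performs exactly the same malicious operations, yet its anomaly score is diluted below the threshold, which is the essence of a mimicry attack.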
As early as 2002, David et al. [215] indicated the danger of mimicry attacks on HIDS. Recently, researchers have started to investigate mimicry attacks on DL-IDS [64, 132, 161], and their studies all demonstrate effective evasion of detection. One study [24] shows that DL-IDS can even be plagued by trivial perturbations in log data. Aware of this issue, R-caid [65] proposes to embed root causes into the detection model to counter adversarial attacks. However, as noted in recent work [64, 65, 161], data-driven attacks remain a major challenge for DL-IDS.

7.3.3 Underexplored Promising Scenarios. While DL-IDS have recently shown excellent performance in protecting computer and network systems, there are still many promising scenarios for DL-IDS that have not been sufficiently explored.

Mobile edge computing (MEC) [1, 117, 147] is a typical scenario. In the MEC environment, mobile computing, network control, and storage are pushed to the network edges so as to enable computation-intensive tasks on resource-limited devices. At the network edges, devices such as Unmanned Aerial Vehicles (UAVs) and New Energy Vehicles (NEVs) usually lack computing power and security facilities, making it difficult to protect them from intrusions [198]. In the meantime, containerized deployment has become one of the dominant ways to deploy microservices. Detecting intrusions on containers is thus of great importance, for which ReplicaWatcher [46] is a representative work with a special design for microservices. Additionally, industrial networks are characterized by high fidelity, stability, and real-time responsiveness [110], leading to challenges in adapting DL-IDS to their infrastructures.

7.3.4 Future Directions. Although there has been plenty of research on DL-IDS, many applications and scenarios remain underdeveloped. DL-IDS ought to be more broadly defined and applied.
Based on the above discussion, we briefly summarize the future directions as follows:

# Future Directions

- Extending the scope of forward tracing and backward tracing to intrusions in a broad sense, so as to generate root causes and control measures for the broadly defined intrusions.
- Understanding data-driven adversarial attacks, such as data poisoning attacks and mimicry attacks, in order to devise more robust DL-IDS.
- Applying DL-IDS widely in more underexplored promising scenarios, and, if possible, implementing unified frameworks for them.

# 8 CONCLUSION

DL techniques bring reform to IDS: their generalizability enables the detection of intrusions that have never been encountered before. Recognizing that IDS development over the past decade primarily comes from DL-IDS, this survey revisits the common workflow of DL-IDS, elaborates on each module in the workflow, and innovatively taxonomizes the research papers based on their DL techniques. Publicly available datasets for stimulating future research are introduced subsequently. In addition, from the perspective of DL, this survey digs deep into the potential challenges, emerging trends, and future directions for DL-IDS. The discussions suggest that DL-IDS are, fascinatingly, still in an underdeveloped state. We hope that this survey can inspire current researchers and facilitate future investigations on DL-IDS.

# ACKNOWLEDGMENTS

This research is sponsored in part by the NSFC program (No. 6212780016 and No. 62021002).

# REFERENCES

[1] Nasir Abbas, Yan Zhang, Amir Taherkordi, and Tor Skeie. 2017. Mobile Edge Computing: A Survey. IEEE Internet of Things Journal 5, 1 (2017), 450-465.
[2] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 Technical Report. arXiv preprint arXiv:2303.08774 (2023).
[3] Amey Agrawal, Rohit Karlupia, and Rajat Gupta. 2019.
Logan: A Distributed Online Log Parser. In Proceedings of the 2019 IEEE 35th International Conference on Data Engineering. IEEE, 1946-1951. +[4] Zeeshan Ahmad, Adnan Shahid Khan, Cheah Wai Shiang, Johari Abdullah, and Farhan Ahmad. 2021. Network Intrusion Detection System: A Systematic Study of Machine Learning and Deep Learning Approaches. Transactions on Emerging Telecommunications Technologies 32, 1 (2021), e4150. +[5] Farrukh Ahmed, Urooj Jahangir, Hamad Rahim, Kamran Ali, et al. 2020. Centralized Log Management Using Elasticsearch, Logstash and Kibana. In Proceedings of the 2020 International Conference on Information Science and Communication Technology. IEEE, 1-7. +[6] Mohannad Alhanahnah, Shiqing Ma, Ashish Gehani, Gabriela F Ciocarlie, Vinod Yegneswaran, Somesh Jha, and Xiangyu Zhang. 2022. autoMPI: Automated Multiple Perspective Attack Investigation with Semantics Aware Execution Partitioning. IEEE Transactions on Software Engineering 49, 4 (2022), 2761-2775. +[7] Tarek Ali. 2024. Next-Generation Intrusion Detection Systems with LLMs: Real-Time Anomaly Detection, Explainable AI, and Adaptive Data Generation. Master's thesis. T. Ali. +[8] Miltiadis Allamanis, Earl T Barr, Premkumar Devanbu, and Charles Sutton. 2018. A Survey of Machine Learning for Big Code and Naturalness. ACM Computing Surveys 51, 4 (2018), 1-37. +[9] Abdulellah Alsaheel, Yuhong Nan, Shiqing Ma, Le Yu, Gregory Walkup, Z Berkay Celik, Xiangyu Zhang, and Dongyan Xu. 2021. ATLAS: A Sequence-based Learning Approach for Attack Investigation. In Proceedings of the 30th USENIX Security Symposium. 3005-3022. +[10] Adel Alshamrani, Sowmya Myneni, Ankur Chowdhary, and Dijiang Huang. 2019. A Survey on Advanced Persistent Threats: Techniques, Solutions, Challenges, and Research Opportunities. IEEE Communications Surveys and Tutorials 21, 2 (2019), 1851-1877. https://doi.org/10.1109/COMST.2019.2891891 +[11] Enes Altinisik, Fatih Deniz, and Hürev Taha Sencar. 2023. 
ProvG-Searcher: A Graph Representation Learning Approach for Efficient Provenance Graph Search. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security. 2247-2261. +[12] Clarivate Analytics. 1997. Web of Science. https://www.webofscience.com +[13] Md Monowar Anjum, Shahrear Iqbal, and Benoit Hamelin. 2021. Analyzing the Usefulness of the DARPA OpTC Dataset in Cyber Threat Detection Research. In Proceedings of the 26th ACM Symposium on Access Control Models and Technologies. 27-32. +[14] Alexei Baevski, Wei-Ning Hsu, Alexis Conneau, and Michael Auli. 2021. Unsupervised Speech Recognition. Advances in Neural Information Processing Systems 34 (2021), 27826-27839. +[15] Elizabeth Bautista, Nitin Sukhija, and Siqi Deng. 2022. Shasta Log Aggregation, Monitoring and Alerting in HPC Environments with Grafana Loki and ServiceNow. In Proceedings of the 2022 IEEE International Conference on Cluster Computing. IEEE, 602-610. +[16] Jack Beerman, David Berent, Zach Falter, and Suman Bhunia. 2023. A Review of Colonial Pipeline Ransomware Attack. In Proceedings of the 2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops. IEEE, 8-15. +[17] Tristan Bilot, Nour El Madhoun, Khaldoun Al Agha, and Anis Zouaoui. 2023. Graph Neural Networks for Intrusion Detection: A Survey. IEEE Access 11 (2023), 49114-49139. +[18] Tristan Bilot, Baoxiang Jiang, Zefeng Li, Nour El Madhoun, Khaldoun Al Agha, Anis Zouaoui, and Thomas Pasquier. 2025. Sometimes Simpler is Better: A Comprehensive Analysis of State-of-the-Art Provenance-Based Intrusion Detection Systems. In 34th USENIX Security Symposium (USENIX Security 25). 7193-7212. +[19] Peter Bodik, Moises Goldszmidt, Armando Fox, Dawn B Woodard, and Hans Andersen. 2010. Fingerprinting the Datacenter: Automated Classification of Performance Crises. In Proceedings of the 5th European Conference on Computer Systems. 111-124. 
+ +[20] Carolin E Brandt, Annibale Panichella, Andy Zaidman, and Moritz Beller. 2020. LogChunks: A Data Set for Build Log Analysis. In Proceedings of the 17th International Conference on Mining Software Repositories. 583-587. +[21] Robert A Bridges, Tarrah R Glass-Vanderlan, Michael D Iannacone, Maria S Vincent, and Qian Chen. 2019. A Survey of Intrusion Detection Systems Leveraging Host Data. ACM computing surveys 52, 6 (2019), 1-35. +[22] Dainius Čeponis and Nikolaj Goranin. 2018. Towards A Robust Method of Dataset Generation of Malicious Activity for Anomaly-Based HIDS Training and Presentation of AWSCTD Dataset. *Baltic Journal of Modern Computing* 6, 3 (2018), 217-234. +[23] Xiaolin Chai, Hang Zhang, Jue Zhang, Yan Sun, and Sajal K Das. 2024. Log Sequence Anomaly Detection based on Template and Parameter Parsing via BERT. IEEE Transactions on Dependable and Secure Computing (2024). +[24] Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. 2018. Adversarial Attacks and Defences: A Survey. arXiv preprint arXiv:1810.00069 (2018). +[25] Changhua Chen, Tingzhen Yan, Chenxuan Shi, Hao Xi, Zhirui Fan, Hai Wan, and Xibin Zhao. 2024. The Last Mile of Attack Investigation: Audit Log Analysis towards Software Vulnerability Location. IEEE Transactions on Information Forensics and Security (2024). +[26] Haoyu Chen, Deqing Zou, Hai Jin, Shouhuai Xu, and Bin Yuan. 2022. SAND: Semi-Automated Adaptive Network Defense via Programmable Rule Generation and Deployment. Science China Information Sciences 65, 7 (2022), 172102. +[27] Tao Chen, Haiyan Suo, and Wenqian Xu. 2023. Design of Log Collection Architecture Based on Cloud Native Technology. In Proceedings of the 2023 4th Information Communication Technologies Conference. IEEE, 311-315. +[28] Wenrui Cheng, Qixuan Yuan, Tiantian Zhu, Tieming Chen, Jie Ying, Aohan Zheng, Mingjun Ma, Chunlin Xiong, Mingqi Lv, and Yan Chen. 2025. 
TAGAPT: Towards Automatic Generation of APT Samples with Provenance-level Granularity. IEEE Transactions on Information Forensics and Security (2025). +[29] Zijun Cheng, Qiujian Lv, Jinyuan Liang, Yan Wang, Degang Sun, Thomas Pasquier, and Xueyuan Han. 2024. Kairos: Practical Intrusion Detection and Investigation Using Whole-System Provenance. In Proceedings of the 2024 IEEE Symposium on Security and Privacy. IEEE, 3533–3551. +[30] Guojun Chu, Jingyu Wang, Qi Qi, Haifeng Sun, Shimin Tao, and Jianxin Liao. 2021. Prefix-Graph: A Versatile Log Parsing Approach Merging Prefix Tree with Probabilistic Graph. In Proceedings of the 2021 IEEE 37th International Conference on Data Engineering. IEEE, 2411-2422. +[31] The MITRE Corporation. 2025. CVE List. https://github.com/CVEProject/cvelistV5/archive/refs/heads/main.zip +[32] Oihana Coustie, Josiane Mothe, Olivier Teste, and Xavier Baril. 2020. METING: A Robust Log Parser Based on Frequent n-Gram Mining. In Proceedings of the 2020 IEEE International Conference on Web Services. IEEE, 84-88. +[33] Jian Cui, Hanna Kim, Eugene Jang, Dayeon Yim, Kicheol Kim, Yongjae Lee, Jin-Woo Chung, Seungwon Shin, and Xiaojing Liao. 2024. Tweezers: A Framework for Security Event Detection via Event Attribution-centric Tweet Embedding. In Proceedings of the Network and Distributed System Security Symposium. +[34] Chris Cummins, Volker Seeker, Dejan Grubisic, Baptiste Roziere, Jonas Gehring, Gabriel Synnaeve, and Hugh Leather. 2025. LLM Compiler: Foundation Language Models for Compiler Optimization. In Proceedings of the 34th ACM SIGPLAN International Conference on Compiler Construction. 141-153. +[35] Hetong Dai, Heng Li, Che-Shao Chen, Weiyi Shang, and Tse-Hsun Chen. 2020. Logram: Efficient Log Parsing Using $n$ -Gram Dictionaries. IEEE Transactions on Software Engineering 48, 3 (2020), 879-892. +[36] Hetong Dai, Yiming Tang, Heng Li, and Weiyi Shang. 2023. PILAR: Studying and Mitigating the Influence of Configurations on Log Parsing. 
In Proceedings of the 2023 IEEE/ACM 45th International Conference on Software Engineering. IEEE, 818-829. +[37] DARPA. 2019. Operationally Transparent Cyber Dataset. https://github.com/FiveDirections/OpTC-data +[38] DARPA. 2022. The DARPA Transparent Computing (TC) program Data Release. https://github.com/darpa-i2o/Transparent-Computing +[39] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 4171–4186. +[40] Hailun Ding, Juan Zhai, Dong Deng, and Shiqing Ma. 2023. The Case for Learned Provenance Graph Storage Systems. In Proceedings of the 32nd USENIX Security Symposium. 3277-3294. +[41] Hailun Ding, Juan Zhai, Yuhong Nan, and Shiqing Ma. 2023. AirTag: Towards Automated Attack Investigation by Unsupervised Learning with Log Texts. In Proceedings of the 32nd USENIX Security Symposium. 373-390. +[42] Feng Dong, Liu Wang, Xu Nie, Fei Shao, Haoyu Wang, Ding Li, Xiapu Luo, and Xusheng Xiao. 2023. DistDet: A Cost-Effective Distributed Cyber Threat Detection System. In Proceedings of the 32nd USENIX Security Symposium. 6575–6592. +[43] Ying Dong, Yuqing Zhang, Hua Ma, Qianru Wu, Qixu Liu, Kai Wang, and Wenjie Wang. 2018. An Adaptive System for Detecting Malicious Queries in Web Attacks. Science China Information Sciences 61, 3 (2018), 032114. + +[44] Min Du and Feifei Li. 2016. Spell: Streaming Parsing of System Event Logs. In Proceedings of the 2016 IEEE 16th International Conference on Data Mining. IEEE, 859-864. +[45] Min Du, Feifei Li, Guineng Zheng, and Vivek Srikumar. 2017. DeepLog: Anomaly Detection and Diagnosis from System Logs through Deep Learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 1285-1298. 
+[46] Asbat El Khairi, Marco Caselli, Andreas Peter, and Andrea Continella. 2024. REPLICAWATCHER: Training-less Anomaly Detection in Containerized Microservices. In Proceedings of the Network and Distributed System Security Symposium. +[47] Elastic. 2009. Logstash: Collect, parse, and transform logs. https://www.elastic.co/logstash/ +[48] Elastic. 2010. Elasticsearch: The official distributed search & analytics engine. https://www.elastic.co/elasticsearch/ +[49] Elastic. 2013. Kibana: Explore, visualize, and discover data. https://www.elastic.co/kibana/ +[50] Elsevier. 2021. Scopus. https://www.scopus.com/search/form.uri?display=basic{\#}basic +[51] Dave Evans. 2012. The Internet of Everything: How More Relevant and Valuable Connections will Change the World. Cisco IBSG 2012 (2012), 1-9. +[52] Pengcheng Fang, Peng Gao, Changlin Liu, Erman Ayday, Kangkook Jee, Ting Wang, Yanfang Fanny Ye, Zhuotao Liu, and Xusheng Xiao. 2022. Back-Propagating System Dependency Impact for Attack Investigation. In Proceedings of the 31st USENIX Security Symposium. 2461–2478. +[53] Peng Fei, Zhou Li, Zhiying Wang, Xiao Yu, Ding Li, and Kangkook Jee. 2021. SEAL: Storage-Efficient Causality Analysis on Enterprise Logs with Query-Friendly Compression. In Proceedings of the 30th USENIX Security Symposium. +[54] Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. 2020. CodeBERT: A Pre-Trained Model for Programming and Natural Languages. In Findings of the Association for Computational Linguistics: EMNLP 2020. 1536-1547. +[55] Free Software Foundation. 1992. gzip: GNU zip compression utility. https://www.gnu.org/software/gzip/ +[56] Chuanpu Fu, Qi Li, Meng Shen, and Ke Xu. 2021. Realtime Robust Malicious Traffic Detection via Frequency Domain Analysis. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security. 3431-3446. +[57] Chuanpu Fu, Qi Li, Meng Shen, and Ke Xu. 2024. 
Detecting Tunnelled Flooding Traffic via Deep Semantic Analysis of Packet Length Patterns. In Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security. 3659-3673. +[58] Chuanpu Fu, Qi Li, Ke Xu, and Jianping Wu. 2023. Point Cloud Analysis for ML-based Malicious Traffic Detection: Reducing Majorities of False Positive Alarms. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security. 1005-1019. +[59] Oscar G. Lira, Alberto Marroquin, and Marco Antonio To. 2024. Harnessing the Advanced Capabilities of LLM for Adaptive Intrusion Detection Systems. In Proceedings of the International Conference on Advanced Information Networking and Applications. Springer, 453-464. +[60] Peng Gao, Xusheng Xiao, Zhichun Li, Fengyuan Xu, Sanjeev R Kulkarni, and Prateek Mittal. 2018. AIQL: Enabling Efficient Attack Investigation from System Monitoring Data. In Proceedings of the 2018 USENIX Annual Technical Conference. 113-126. +[61] Ashish Gehani and Dawood Tariq. 2012. SPADE: Support for Provenance Auditing in Distributed Environments. In Proceedings of the ACM/IFIP/USENIX International Conference on Distributed Systems Platforms and Open Distributed Processing. Springer, 101-120. +[62] Jalal Ghadermazi, Soumyadeep Hore, Ankit Shah, and Nathaniel D Bastian. 2025. GTAE-IDS: Graph Transformer-Based Autoencoder Framework for Real-Time Network Intrusion Detection. IEEE Transactions on Information Forensics and Security (2025). +[63] Joshua Glasser and Brian Lindauer. 2013. Bridging the gap: A Pragmatic Approach to Generating Insider Threat Data. In Proceedings of the IEEE Symposium on Security and Privacy Workshops. IEEE, 98-104. +[64] Akul Goyal, Xueyuan Han, Gang Wang, and Adam Bates. 2023. Sometimes, You Aren't What You Do: Mimicry Attacks Against Provenance Graph Host Intrusion Detection Systems. In Proceedings of the Network and Distributed System Security Symposium. +[65] Akul Goyal, Gang Wang, and Adam Bates. 2024. 
R-CAID: Embedding Root Cause Analysis within Provenance-Based Intrusion Detection. In Proceedings of the 2024 IEEE Symposium on Security and Privacy. IEEE, 3515-3532. +[66] Brendan Gregg and Jim Mauro. 2011. DTrace: Dynamic Tracing in Oracle Solaris, Mac OS X, and FreeBSD. Prentice Hall Professional. +[67] John Griffith, Derrick Kong, Armando Caro, Brett Benyo, Joud Khoury, Timothy Upthegrove, Timothy Christovich, Stanislav Ponomorov, Ali Sydney, Arjun Saini, et al. 2020. Scalable Transparency Architecture for Research Collaboration (STARC)-DARPA Transparent Computing (TC) Program. Raytheon BBN Technologies Corporation, Cambridge, United States (2020). +[68] Steve Grubb. 2008. Linux audit. https://people.redhat.com/sgrubb/audit/ + +[69] Qiuhan Gu. 2023. LLM-Based Code Generation Method for Golang Compiler Testing. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 2201-2203. +[70] Xiaodong Gu, Meng Chen, Yalan Lin, Yuhan Hu, Hongyu Zhang, Chengcheng Wan, Zhao Wei, Yong Xu, and Juhong Wang. 2025. On the Effectiveness of Large Language Models in Domain-Specific Code Generation. ACM Transactions on Software Engineering and Methodology 34, 3 (2025), 1-22. +[71] Yiwei Guo, Chenpeng Du, Ziyang Ma, Xie Chen, and Kai Yu. 2024. Voiceflow: Efficient Text-to-Speech with Rectified Flow Matching. In Proceedings of the ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 11121-11125. +[72] Yi Guo, Fu Miao, Liancheng Zhang, and Yu Wang. 2019. CATH: An Effective Method for Detecting Denial-of-Service Attacks in Software Defined Networks. Science China Information Sciences 62, 3 (2019), 32106. +[73] Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive Representation Learning on Large Graphs. Advances in Neural Information Processing Systems 30 (2017). +[74] Hossein Hamooni, Biplob Debnath, Jianwu Xu, Hui Zhang, Guofei Jiang, and Abdullah Mueen.
2016. LogMine: Fast Pattern Recognition for Log Analytics. In Proceedings of the ACM International Conference on Information and Knowledge Management. 1573-1582. +[75] Dongqi Han, Zhiliang Wang, Wenqi Chen, Kai Wang, Rui Yu, Su Wang, Han Zhang, Zhihua Wang, Minghui Jin, Jiahai Yang, et al. 2023. Anomaly Detection in the Open World: Normality Shift Detection, Explanation, and Adaptation. In Proceedings of the Network and Distributed Systems Security Symposium. +[76] Xueyuan Han, Thomas Pasquier, Adam Bates, James Mickens, and Margo Seltzer. 2020. Unicorn: Runtime Provenance-Based Detector for Advanced Persistent Threats. In Proceedings of the Network and Distributed Systems Security Symposium. +[77] Wajih Ul Hassan, Mark Lemay, Nuraini Aguse, Adam Bates, and Thomas Moyer. 2018. Towards Scalable Cluster Auditing through Grammatical Inference over Provenance Graphs. In Proceedings of the Network and Distributed Systems Security Symposium. +[78] Wajih Ul Hassan, Adam Bates, and Daniel Marino. 2020. Tactical Provenance Analysis for Endpoint Detection and Response Systems. In Proceedings of the 2020 IEEE Symposium on Security and Privacy. IEEE, 1172-1189. +[79] Wajih Ul Hassan, Shengjian Guo, Ding Li, Zhengzhang Chen, Kangkook Jee, Zhichun Li, and Adam Bates. 2019. NoDoze: Combatting Threat Alert Fatigue with Automated Provenance Triage. In Proceedings of the Network and Distributed System Security Symposium. +[80] Wajih Ul Hassan, Mohammad Ali Noureddine, Pubali Datta, and Adam Bates. 2020. OmegaLog: High-Fidelity Attack Investigation via Transparent Multi-Layer Log Analysis. In Proceedings of the Network and Distributed System Security Symposium. +[81] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. 2022. Masked Autoencoders are Scalable Vision Learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 16000-16009. +[82] Pinjia He, Jieming Zhu, Zibin Zheng, and Michael R Lyu. 2017.
Drain: An Online Log Parsing Approach with Fixed Depth Tree. In Proceedings of the 2017 IEEE International Conference on Web Services. IEEE, 33-40. +[83] Shilin He, Pinjia He, Zhuangbin Chen, Tianyi Yang, Yuxin Su, and Michael R. Lyu. 2020. A Survey on Automated Log Analysis for Reliability Engineering. ACM Computing Surveys 54 (2020), 1-37. https://api.semanticscholar.org/CorpusID:221703032 +[84] Xinlei He, Guowen Xu, Xingshuo Han, Qian Wang, Lingchen Zhao, Chao Shen, Chenhao Lin, Zhengyu Zhao, Qian Li, Le Yang, et al. 2025. Artificial Intelligence Security and Privacy: A Survey. Science China Information Sciences 68, 8 (2025), 1-90. +[85] Abram Hindle, Earl T Barr, Mark Gabel, Zhendong Su, and Premkumar Devanbu. 2016. On the Naturalness of Software. Commun. ACM 59, 5 (2016), 122-131. +[86] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation 9, 8 (1997), 1735-1780. +[87] Josef Horalek, Patrik Urbanik, Vladimir Sobeslav, and Tomas Svoboda. 2022. Proposed Solution for Log Collection and Analysis in Kubernetes Environment. In Proceedings of the International Conference on Nature of Computation and Communication. Springer, 9-22. +[88] Md Nahid Hossain, Sadegh M Milajerdi, Junao Wang, Birhanu Eshete, Rigel Gjomemo, R Sekar, Scott Stoller, and VN Venkatakrishnan. 2017. Sleuth: Real-time Attack Scenario Reconstruction from COTS Audit Data. In Proceedings of the USENIX Security Symposium. 487-504. +[89] Md Nahid Hossain, Junao Wang, Ofir Weisse, R Sekar, Daniel Genkin, Boyuan He, Scott D Stoller, Gan Fang, Frank Piessens, Evan Downing, et al. 2018. Dependence-Preserving Data Compaction for Scalable Forensic Analysis. In Proceedings of the 27th USENIX Security Symposium. 1723-1740. + +[90] Zhenyu Hou, Xiao Liu, Yukuo Cen, Yuxiao Dong, Hongxia Yang, Chunjie Wang, and Jie Tang. 2022. GraphMAE: Self-Supervised Masked Graph Autoencoders. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 594-604.
+[91] Kevin Hsieh, Mike Wong, Santiago Segarra, Sathiya Kumaran Mani, Trevor Eberl, Anatoliy Panasyuk, Ravi Netravali, Ranveer Chandra, and Srikanth Kandula. 2024. NetVigil: Robust and Low-Cost Anomaly Detection for East-West Data Center Security. In Proceedings of the 21st USENIX Symposium on Networked Systems Design and Implementation. 1771-1789. +[92] Peiwei Hu, Ruigang Liang, and Kai Chen. 2024. DeGPT: Optimizing Decompile Output with LLM. In Proceedings of the Network and Distributed System Security Symposium. +[93] Bo Hui, Haolin Yuan, Neil Gong, Philippe Burlina, and Yinzhi Cao. 2024. Pleak: Prompt Leaking Attacks Against Large Language Model Applications. In Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security. 3600-3614. +[94] Yintong Huo, Yichen Li, Yuxin Su, Pinjia He, Zifan Xie, and Michael R Lyu. 2023. AutoLog: A Log Sequence Synthesis Framework for Anomaly Detection. In Proceedings of the 2023 38th IEEE/ACM International Conference on Automated Software Engineering. IEEE, 497-509. +[95] IEEE. 2000. IEEE Xplore Digital Library. https://ieeexplore.ieee.org +[96] Muhammad Adil Inam, Yinfang Chen, Akul Goyal, Jason Liu, Jaron Mink, Noor Michael, Sneha Gaur, Adam Bates, and Wajih Ul Hassan. 2023. SoK: History is a Vast Early Warning System: Auditing the Provenance of System Intrusions. In Proceedings of the 2023 IEEE Symposium on Security and Privacy. 2620-2638. https://doi.org/10.1109/SP46215.2023.10179405 +[97] Muhammad Adil Inam, Akul Goyal, Jason Liu, Jaron Mink, Noor Michael, Sneha Gaur, Adam Bates, and Wajih Ul Hassan. 2022. FAuST: Striking A Bargain between Forensic Auditing's Security and Throughput. In Proceedings of the 38th Annual Computer Security Applications Conference. 813-826. +[98] Yang Ji, Sangho Lee, Evan Downing, Weiren Wang, Mattia Fazzini, Taesoo Kim, Alessandro Orso, and Wenke Lee. 2017. Rain: Refinable Attack Investigation with On-demand Inter-Process Information Flow Tracking.
In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 377–390. +[99] Zian Jia, Yun Xiong, Yuhong Nan, Yao Zhang, Jinjing Zhao, and Mi Wen. 2024. MAGIC: Detecting Advanced Persistent Threats via Masked Graph Representation Learning. In Proceedings of the 33rd USENIX Security Symposium. 5197-5214. +[100] Baoxiang Jiang, Tristan Bilot, Nour El Madhoun, Khaldoun Al Agha, Anis Zouaoui, Shahrear Iqbal, Xueyuan Han, and Thomas Pasquier. 2025. Orthrus: Achieving High Quality of Attribution in Provenance-based Intrusion Detection Systems. In Proceedings of the USENIX Security Symposium. +[101] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the Limits of Language Modeling. arXiv preprint arXiv:1602.02410 (2016). +[102] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling Laws for Neural Language Models. arXiv preprint arXiv:2001.08361 (2020). +[103] Alexander D. Kent. 2015. Comprehensive, Multi-Source Cyber-Security Events. Los Alamos National Laboratory. https://doi.org/10.17021/1179829 +[104] LG Kersta, PD Bricker, and EE David Jr. 1960. Human or Machine?—A Study of Voice Naturalness. The Journal of the Acoustical Society of America 32, 11_Supplement (1960), 1502-1502. +[105] Ansam Khraisat, Iqbal Gondal, Peter Vamplew, and Joarder Kamruzzaman. 2019. Survey of Intrusion Detection Systems: Techniques, Datasets and Challenges. Cybersecurity 2, 1 (2019), 1-22. +[106] Aaron Kili. [n.d.]. Sysdig: A Powerful System Monitoring and Troubleshooting Tool for Linux. +[107] Sunnie SY Kim, Q Vera Liao, Mihaela Vorvoreanu, Stephanie Ballard, and Jennifer Wortman Vaughan. 2024. "I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. 822-835.
+[108] Isaiah J King and H Howie Huang. 2023. Euler: Detecting Network Lateral Movement via Scalable Temporal Link Prediction. ACM Transactions on Privacy and Security 26, 3 (2023), 1-36. +[109] Thomas N Kipf and Max Welling. 2016. Variational Graph Auto-Encoders. arXiv preprint arXiv:1611.07308 (2016). +[110] Eric D Knapp. 2024. Industrial Network Security: Securing Critical Infrastructure Networks for Smart Grid, SCADA, and other Industrial Control Systems. Elsevier. +[111] Yonghwi Kwon, Fei Wang, Weihang Wang, Kyu Hyung Lee, Wen-Chuan Lee, Shiqing Ma, Xiangyu Zhang, Dongyan Xu, Somesh Jha, Gabriela Ciocarlie, et al. 2018. MCI: Modeling-based Causality Inference in Audit Logging for Attack Investigation. In Proceedings of the Network and Distributed Systems Security Symposium. +[112] Grafana Labs. 2014. Grafana: The Open Observability Platform. https://grafana.com/ +[113] Van-Hoang Le and Hongyu Zhang. 2021. Log-Based Anomaly Detection without Log Parsing. In Proceedings of the 2021 36th IEEE/ACM International Conference on Automated Software Engineering. IEEE, 492-504. + +[114] Van-Hoang Le and Hongyu Zhang. 2023. Log Parsing with Prompt-Based Few-Shot Learning. In Proceedings of the 2023 IEEE/ACM 45th International Conference on Software Engineering. IEEE, 2438-2449. +[115] Kyu Hyung Lee, Xiangyu Zhang, and Dongyan Xu. 2013. High Accuracy Attack Provenance via Binary-based Execution Partition. In Proceedings of the Network and Distributed System Security Symposium, Vol. 16. +[116] Kyu Hyung Lee, Xiangyu Zhang, and Dongyan Xu. 2013. LogGC: Garbage Collecting Audit Log. In Proceedings of the 2013 ACM SIGSAC Conference on Computer and Communications Security. 1005-1016. +[117] Huanruo Li, Yunfei Guo, Shumin Huo, Hongchao Hu, and Penghao Sun. 2022. Defensive Deception Framework Against Reconnaissance Attacks in the Cloud with Deep Reinforcement Learning. Science China Information Sciences 65, 7 (2022), 170305. +[118] Jiawei Li, Ru Zhang, and Jianyi Liu. 2023. 
ConLBS: An Attack Investigation Approach Using Contrastive Learning with Behavior Sequence. Sensors 23, 24 (2023), 9881. +[119] Jiawei Li, Ru Zhang, and Jianyi Liu. 2023. ProvGRP: A Context-Aware Provenance Graph Reduction and Partition Approach for Facilitating Attack Investigation. Electronics 13, 1 (2023), 100. +[120] Shaofei Li, Feng Dong, Xusheng Xiao, Haoyu Wang, Fei Shao, Jiedong Chen, Yao Guo, Xiangqun Chen, and Ding Li. 2024. NodLink: An Online System for Fine-Grained APT Attack Detection and Investigation. In Proceedings of the Network and Distributed System Security Symposium. +[121] Teng Li, Jianfeng Ma, and Cong Sun. 2017. NetPro: Detecting Attacks in MANET Routing with Provenance and Verification. Science China Information Sciences 60, 11 (2017), 118101. +[122] Xiaoxiang Li, Xinyu Jiang, Hai Wan, and Xinbin Zhao. 2025. TeRed: Normal Behavior-Based Efficient Provenance Graph Reduction for Large-Scale Attack Forensics. IEEE Transactions on Information Forensics and Security (2025). +[123] Xiaoyun Li, Hongyu Zhang, Van-Hoang Le, and Pengfei Chen. 2024. LogShrink: Effective Log Compression by Leveraging Commonality and Variability of Log Data. In Proceedings of the 46th IEEE/ACM International Conference on Software Engineering. 1-12. +[124] Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. 2022. Competition-Level Code Generation with AlphaCode. Science 378, 6624 (2022), 1092-1097. +[125] Yanjie Li, Zhen Xiang, Nathaniel D Bastian, Dawn Song, and Bo Li. 2024. IDS-Agent: An LLM Agent for Explainable Intrusion Detection in IoT Networks. In Proceedings of the NeurIPS 2024 Workshop on Open-World Agents. +[126] Yuanlin Li, Zhiwei Xu, Min Zhou, Hai Wan, and Xibin Zhao. 2024. Trident: Detecting SQL Injection Attacks via Abstract Syntax Tree-based Neural Network. In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering.
2225-2229. +[127] Zhenyuan Li, Qi Alfred Chen, Runqing Yang, Yan Chen, and Wei Ruan. 2021. Threat Detection and Investigation with System-Level Provenance Graphs: A Survey. Computers & Security 106, C (July 2021), 16 pages. https://doi.org/10.1016/j.cose.2021.102282 +[128] Hung-Jen Liao, Chun-Hung Richard Lin, Ying-Chih Lin, and Kuang-Yuan Tung. 2013. Intrusion Detection System: A Comprehensive Review. Journal of Network and Computer Applications 36, 1 (2013), 16-24. +[129] Soo Yee Lim, Bogdan Stelea, Xueyuan Han, and Thomas Pasquier. 2021. Secure Namespaced Kernel Audit for Containers. In Proceedings of the ACM Symposium on Cloud Computing. 518-532. +[130] Qingwei Lin, Hongyu Zhang, Jian-Guang Lou, Yu Zhang, and Xuewei Chen. 2016. Log Clustering Based Problem Identification for Online Service Systems. In Proceedings of the International Conference on Software Engineering Companion. 102-111. +[131] Brian Lindauer. 2020. Insider Threat Test Dataset. (Sept. 2020). https://doi.org/10.1184/R1/12841247.v1 +[132] Guangrui Liu, Weizhe Zhang, Xinjie Li, Kaisheng Fan, and Shui Yu. 2022. VulnERGAN: A Backdoor Attack through Vulnerability Amplification against Machine Learning-Based Network Intrusion Detection Systems. Science China Information Sciences 65, 7 (2022), 170303. +[133] Jason Liu, Muhammad Adil Inam, Akul Goyal, Andy Riddle, Kim Westfall, and Adam Bates. 2025. What We Talk About When We Talk About Logs: Understanding the Effects of Dataset Quality on Endpoint Threat Detection Research. In Proceedings of the 2025 IEEE Symposium on Security and Privacy. IEEE, 112-129. +[134] Jian Liu, Junjie Yan, Zhengwei Jiang, Xuren Wang, and Jun Jiang. 2022. A Graph Learning Approach with Audit Records for Advanced Attack Investigation. In Proceedings of the IEEE Global Communications Conference. IEEE, 897-902. +[135] Jinyang Liu, Jieming Zhu, Shilin He, Pinjia He, Zibin Zheng, and Michael R Lyu. 2019. Logzip: Extracting Hidden Structures via Iterative Clustering for Log Compression.
In Proceedings of the 2019 34th IEEE/ACM International Conference on Automated Software Engineering. IEEE, 863-873. +[136] Shuai Liu, Yiheng Pan, Kun Hong, Ruite Fei, Chenhao Lin, Qian Li, and Chao Shen. 2025. Backdoor Threats in Large Language Models—A Survey. Science China Information Sciences 68, 9 (2025), 1-34. + +[137] Yudong Liu, Xu Zhang, Shilin He, Hongyu Zhang, Liqun Li, Yu Kang, Yong Xu, Minghua Ma, Qingwei Lin, Yingnong Dang, et al. 2022. UniParser: A Unified Log Parser for Heterogeneous Log Data. In Proceedings of the ACM Web Conference. 1893-1901. +[138] Scott Lupton, Hironori Washizaki, Nobukazu Yoshioka, and Yoshiaki Fukazawa. 2021. Literature Review on Log Anomaly Detection Approaches Utilizing Online Parsing Methodology. In Proceedings of the 2021 28th Asia-Pacific Software Engineering Conference. 559-563. https://doi.org/10.1109/APSEC53868.2021.00068 +[139] Mingqi Lv, HongZhe Gao, Xuebo Qiu, Tieming Chen, Tiantian Zhu, Jinyin Chen, and Shouling Ji. 2024. TREC: APT Tactic/Technique Recognition via Few-Shot Provenance Subgraph Learning. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security. 139-152. +[140] Yang Lv, Shaona Qin, Zifeng Zhu, Zhuocheng Yu, Shudong Li, and Weihong Han. 2022. A Review of Provenance Graph based APT Attack Detection: Applications and Developments. In Proceedings of the 2022 7th IEEE International Conference on Data Science in Cyberspace. 498-505. https://doi.org/10.1109/DSC55868.2022.00075 +[141] Shiqing Ma, Juan Zhai, Yonghwi Kwon, Kyu Hyung Lee, Xiangyu Zhang, Gabriela Ciocarlie, Ashish Gehani, Vinod Yegneswaran, Dongyan Xu, and Somesh Jha. 2018. Kernel-Supported Cost-Effective Audit Logging for Causality Tracking. In Proceedings of the 2018 USENIX Annual Technical Conference. 241-254. +[142] Shiqing Ma, Juan Zhai, Fei Wang, Kyu Hyung Lee, Xiangyu Zhang, and Dongyan Xu. 2017. MPI: Multiple Perspective Attack Investigation with Semantic Aware Execution Partitioning.
In Proceedings of the 26th USENIX Security Symposium. 1111-1128. +[143] Shiqing Ma, Xiangyu Zhang, and Dongyan Xu. 2016. ProTracer: Towards Practical Provenance Tracing by Alternating between Logging and Tainting. In Proceedings of the 23rd Annual Network and Distributed System Security Symposium. +[144] Pedro Manso, José Moura, and Carlos Serrão. 2019. SDN-Based Intrusion Detection System for Early Detection and Mitigation of DDoS Attacks. Information 10, 3 (2019), 106. +[145] Emaad Manzoor, Sadegh M Milajerdi, and Leman Akoglu. 2016. Fast Memory-Efficient Anomaly Detection in Streaming Heterogeneous Graphs. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1035-1044. +[146] Qinghua Mao, Xi Lin, Wenchao Xu, Yuxin Qi, Xiu Su, Gaolei Li, and Jianhua Li. 2025. FeCoGraph: Label-Aware Federated Graph Contrastive Learning for Few-Shot Network Intrusion Detection. IEEE Transactions on Information Forensics and Security (2025). +[147] Yuyi Mao, Changsheng You, Jun Zhang, Kaibin Huang, and Khaled B Letaief. 2017. A Survey on Mobile Edge Computing: The Communication Perspective. IEEE Communications Surveys and Tutorials 19, 4 (2017), 2322-2358. +[148] Mitch Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building A Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics 19, 2 (1993), 313-330. +[149] Ariana Martino, Michael Iannelli, and Coleen Truong. 2023. Knowledge Injection to Counter Large Language Model (LLM) Hallucination. In European Semantic Web Conference. Springer, 182-185. +[150] Ines Martins, Joao S Resende, Patricia R Sousa, Simao Silva, Luis Antunes, and Joao Gama. 2022. Host-based IDS: A Review and Open Issues of An Anomaly Detection System in IoT. Future Generation Computer Systems 133 (2022), 95-113. +[151] Weibin Meng, Ying Liu, Yichen Zhu, Shenglin Zhang, Dan Pei, Yuqing Liu, Yihao Chen, Ruizhi Zhang, Shimin Tao, Pei Sun, et al. 2019. 
LogAnomaly: Unsupervised Detection of Sequential and Quantitative Anomalies in Unstructured Logs. In Proceedings of the International Joint Conference on Artificial Intelligence, Vol. 19. 4739-4745. +[152] Noor Michael, Jaron Mink, Jason Liu, Sneha Gaur, Wajih Ul Hassan, and Adam Bates. 2020. On the Forensic Validity of Approximated Audit Logs. In Proceedings of the 36th Annual Computer Security Applications Conference. 189-202. +[153] Microsoft. 2020. Event Tracing - Win32 apps. https://learn.microsoft.com/en-us/windows/win32/etw/event-tracing-portal +[154] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. arXiv preprint arXiv:1301.3781 (2013). +[155] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed Representations of Words and Phrases and Their Compositionality. Advances in Neural Information Processing Systems 26 (2013). +[156] Sadegh M Milajerdi, Birhanu Eshete, Rigel Gjomemo, and VN Venkatakrishnan. 2019. Poirot: Aligning Attack Behavior with Kernel Audit Records for Cyber Threat Hunting. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. 1795-1812. +[157] Sadegh M Milajerdi, Rigel Gjomemo, Birhanu Eshete, Ramachandran Sekar, and VN Venkatakrishnan. 2019. Holmes: Real-time APT Detection through Correlation of Suspicious Information Flows. In Proceedings of the 2019 IEEE Symposium on Security and Privacy. IEEE, 1137-1152. +[158] Seyed Mohammad Mehdi Mirnajafizadeh, Ashwin Raam Sethuram, David Mohaisen, DaeHun Nyang, and Rhongho Jang. 2024. Enhancing Network Attack Detection with Distributed and In-Network Data Collection System. In Proceedings of the 33rd USENIX Security Symposium. 5161-5178. + +[159] Yisroel Mirsky, Tomer Doitshman, Yuval Elovici, and Asaf Shabtai. 2018. Kitsune: An Ensemble of Autoencoders for Online Network Intrusion Detection.
In Proceedings of the Network and Distributed Systems Security Symposium. +[160] Kunal Mukherjee and Murat Kantarcioglu. 2025. LLM-driven Provenance Forensics for Threat Investigation and Detection. arXiv preprint arXiv:2508.21323 (2025). +[161] Kunal Mukherjee, Joshua Wiedemeier, Tianhao Wang, James Wei, Feng Chen, Muhyun Kim, Murat Kantarcioglu, and Kangkook Jee. 2023. Evading Provenance-Based ML Detectors with Adversarial System Actions. In Proceedings of the 32nd USENIX Security Symposium. 1199-1216. +[162] Muhammad Hassan Nasir, Salman A Khan, Muhammad Mubashir Khan, and Mahawish Fatima. 2022. Swarm Intelligence Inspired Intrusion Detection Systems—A Systematic Literature Review. Computer Networks 205 (2022), 108708. +[163] Mostafa Nassar, Nirmeen A El-Bahnasawy, Hossam El-Din H Ahmed, Adel A Saleeb, and Fathi E Abd El-Samie. 2019. Network Intrusion Detection, Literature Review and Some Techniques Comparison. In Proceedings of the 2019 15th International Computer Engineering Conference. IEEE, 62-71. +[164] Alexander Tobias Neumann, Yue Yin, Sulayman Sowe, Stefan Decker, and Matthias Jarke. 2024. An LLM-Driven Chatbot in Higher Education for Databases and Information Systems. IEEE Transactions on Education (2024). +[165] Zhibin Ni, Pan Fan, Shengzhuo Dai, Bo Zhang, Hai Wan, and Xibin Zhao. 2025. FG-CIBGC: A Unified Framework for Fine-Grained and Class-Incremental Behavior Graph Classification. In Proceedings of the Web Conference. +[166] Weina Niu, Zhenqi Yu, Zimu Li, Beibei Li, Runzi Zhang, and Xiaosong Zhang. 2022. LogTracer: Efficient Anomaly Tracing Combining System Log Detection and Provenance Graph. In Proceedings of the IEEE Global Communications Conference. IEEE, 3356-3361. +[167] Christine Nussbaum, Sascha Frühholz, and Stefan R Schweinberger. 2025. Understanding Voice Naturalness. Trends in Cognitive Sciences (2025). +[168] Connected Papers. 2020. Connected Papers: A Visual Tool for Researchers.
https://www.connectedpapers.com +[169] Nohil Park, Heeseung Kim, Che Hyun Lee, Jooyoung Choi, Jiheum Yeom, and Sungroh Yoon. 2025. NanoVoice: Efficient Speaker-Adaptive Text-to-Speech for Multiple Speakers. In Proceedings of the ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 1-5. +[170] Thomas Pasquier, Xueyuan Han, Mark Goldstein, Thomas Moyer, David Eyers, Margo Seltzer, and Jean Bacon. 2017. Practical Whole-System Provenance Capture. In Proceedings of the 2017 Symposium on Cloud Computing. 405-418. +[171] Igor Pavlov. 2001. LZMA SDK (Software Development Kit). https://www.7-zip.org/ +[172] Cheng Peng, Xi Yang, Aokun Chen, Kaleb E Smith, Nima PourNejatian, Anthony B Costa, Cheryl Martin, Mona G Flores, Ying Zhang, Tanja Magoc, et al. 2023. A Study of Generative Large Language Model For Medical Research and Healthcare. NPJ Digital Medicine 6, 1 (2023), 210. +[173] Yihao Peng, Tongxin Zhang, Jieshao Lai, Yuxuan Zhang, Yiming Wu, Hai Wan, and Xibin Zhao. 2025. AutoLabel: Automated Fine-Grained Log Labeling for Cyber Attack Dataset Generation. In Proceedings of the 34th USENIX Security Symposium. 547-566. +[174] Prometheus. 2014. Prometheus - Monitoring System & Time Series Database. https://prometheus.io/ +[175] Jiaxing Qi, Zhongzhi Luan, Shaohan Huang, Carol Fung, Hailong Yang, and Depei Qian. 2023. SpikeLog: Log-based Anomaly Detection via Potential-Assisted Spiking Neuron Network. IEEE Transactions on Knowledge and Data Engineering 36, 12 (2023), 9322-9335. +[176] Wei Qiao, Yebo Feng, Teng Li, Zhuo Ma, Yulong Shen, JianFeng Ma, and Yang Liu. 2025. Slot: Provenance-Driven APT Detection through Graph Reinforcement Learning. In Proceedings of the 2025 on ACM SIGSAC Conference on Computer and Communications Security. +[177] QuickLZ. 2006. QuickLZ: Fastest Compression Library. http://www.quicklz.com/ +[178] Alec Radford. 2018. Improving Language Understanding by Generative Pre-Training. (2018).
+[179] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the Limits of Transfer Learning with A Unified Text-to-Text Transformer. Journal of Machine Learning Research 21, 140 (2020), 1-67. +[180] Baishakhi Ray, Vincent Hellendoorn, Saheel Godhane, Zhaopeng Tu, Alberto Bacchelli, and Premkumar Devanbu. 2016. On the "Naturalness" of Buggy Code. In Proceedings of the 38th International Conference on Software Engineering. 428-439. +[181] Rebecca Bace and Peter Mell. 2001. Intrusion Detection Systems. National Institute of Standards and Technology, Special Publication (2001). +[182] Mati Ur Rehman, Hadi Ahmadi, and Wajih Ul Hassan. 2024. FLASH: A Comprehensive Approach to Intrusion Detection via Provenance Graph Representation Learning. In Proceedings of the 2024 IEEE Symposium on Security and Privacy. IEEE Computer Society, 139-139. +[183] Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019. FastSpeech: Fast, Robust and Controllable Text to Speech. Advances in Neural Information Processing Systems 32 (2019). + +[184] Andy Riddle, Kim Westfall, and Adam Bates. 2023. AtlasV2: Atlas Attack Engagements, Version 2. arXiv preprint arXiv:2401.01341 (2023). +[185] Malajah Roberts, Jonathan Anderson, William Delgado, Richard Johnson, and Lawrence Spencer. 2024. Extending Contextual Length and World Knowledge Generalization in Large Language Models. (2024). +[186] Kirk Rodrigues, Yu Luo, and Ding Yuan. 2021. CLP: Efficient and Scalable Search on Compressed Text Logs. In Proceedings of the 15th USENIX Symposium on Operating Systems Design and Implementation. 183-198. +[187] Ronald Rosenfeld. 2000. Two Decades of Statistical Language Modeling: Where Do We Go from Here? Proceedings of the IEEE 88, 8 (2000), 1270-1278. +[188] Tejaswini S and Azra Nasreen. 2021. Survey on Online Log Parsing. Regular issue (2021).
https://api.semanticscholar.org/CorpusID:236861650 +[189] Vijay Samuel. 2018. Monitoring Anything and Everything with Beats at eBay. (2018). +[190] Michael Schindler. 1999. SZIP Compression. http://www.compressconsult.com/szip/ +[191] Frank Schwellinger. 2008. Ocamyd: A File (De-)Compressor Based on the DMC Algorithm. https://www.geocities.ws/ocamyd/ +[192] Issam Sedki, Abdelwahab Hamou-Lhadj, Otmane Ait-Mohamed, and Mohammed A Shehab. 2022. An Effective Approach for Parsing Large Log Files. In Proceedings of the 2022 IEEE International Conference on Software Maintenance and Evolution. IEEE, 1-12. +[193] R Sekar, Hanke Kimm, and Rohit Aich. 2024. eAudit: A Fast, Scalable and Deployable Audit Data Collection System. In Proceedings of the IEEE Symposium on Security and Privacy. IEEE, 3571-3589. +[194] Julian Seward. 1996. bzip2: A High-Quality Data Compressor. http://www.bzip.org/ +[195] Claude E Shannon. 1948. A Mathematical Theory of Communication. The Bell System Technical Journal 27, 3 (1948), 379-423. +[196] Claude E Shannon. 1951. The Redundancy of English. In Cybernetics; Transactions of the 7th Conference, New York: Josiah Macy, Jr. Foundation. 248-272. +[197] Madhukar Shrestha, Yonghyun Kim, Jeehyun Oh, Junghwan Rhee, Yung Ryn Choe, Fei Zuo, Myungah Park, and Gang Qian. 2023. ProvSec: Open Cybersecurity System Provenance Analysis Benchmark Dataset with Labels. International Journal of Networked and Distributed Computing 11, 2 (2023), 112-123. +[198] Rakesh Shrestha, Atefeh Omidkar, Sajjad Ahmadi Roudi, Robert Abbas, and Shiho Kim. 2021. Machine-Learning-Enabled Intrusion Detection System for Cellular Connected UAV Networks. Electronics 10, 13 (2021), 1549. +[199] Zhuoxue Song, Ziming Zhao, Fan Zhang, Gang Xiong, Guang Cheng, Xinjie Zhao, Shize Guo, and Binbin Chen. 2022. I²RNN: An Incremental and Interpretable Recurrent Neural Network for Encrypted Traffic Classification. IEEE Transactions on Dependable and Secure Computing (2022).
+[200] Manolis Stamatogiannakis, Paul Groth, and Herbert Bos. 2015. Looking Inside the Black-Box: Capturing Data Provenance Using Dynamic Instrumentation. In Provenance and Annotation of Data and Processes: 5th International Provenance and Annotation Workshop, IPAW 2014, Cologne, Germany, June 9-13, 2014. Revised Selected Papers 5. Springer, 155-167. +[201] Branka Stojanovic, Katharina Hofer-Schmitz, and Ulrike Kleb. 2020. APT Datasets and Attack Modeling for Automated Detection Methods: A Review. Computers & Security 92 (2020), 101734. https://api.semanticscholar.org/CorpusID:213320542 +[202] Hongbin Sun, Su Wang, Zhiliang Wang, Zheyu Jiang, Dongqi Han, and Jiahai Yang. 2024. AudiTrim: A Real-time, General, Efficient, and Low-overhead Data Compaction System for Intrusion Detection. In Proceedings of the 27th International Symposium on Research in Attacks, Intrusions and Defenses. 263-277. +[203] Alexey Svyatkovskiy, Shao Kun Deng, Shengyu Fu, and Neel Sundaresan. 2020. IntellicodeCompose: Code Generation Using Transformer. In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 1433-1443. +[204] Dan Tang, Yudong Yan, Chenjun Gao, Wei Liang, and Wenqiang Jin. 2023. LtRFT: Mitigate the Low-Rate Data Plane DDoS Attack with Learning-to-Rank Enabled Flow Tables. IEEE Transactions on Information Forensics and Security 18 (2023), 3143-3157. +[205] Yutao Tang, Ding Li, Zhichun Li, Mu Zhang, Kangkook Jee, Xusheng Xiao, Zhenyu Wu, Junghwan Rhee, Fengyuan Xu, and Qun Li. 2018. NodeMerge: Template Based Efficient Data Reduction for Big-Data Causality Analysis. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. 1324–1337. +[206] Joerg Thalheim, Pramod Bhatotia, and Christof Fetzer. 2016. Inspector: Data Provenance Using Intel Processor Trace (PT). In Proceedings of the 2016 IEEE 36th International Conference on Distributed Computing Systems.
IEEE, 25-34. +[207] Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. LaMDA: Language Models for Dialog Applications. arXiv preprint arXiv:2201.08239 (2022). +[208] ThoughtWorks. 2004. Selenium RC. http://www.seleniumhq.org/projects/remote-control/ + +[209] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. LLaMA: Open and Efficient Foundation Language Models. arXiv preprint arXiv:2302.13971 (2023). +[210] Aqua Security. 2022. Tracee: Runtime eBPF Threat Detection Engine. +[211] Devharsh Trivedi, Aymen Boudguiga, Nesrine Kaaniche, and Nikos Triandopoulos. 2023. SigML++: Supervised Log Anomaly with Probabilistic Polynomial Approximation. Cryptography 7, 4 (2023), 52. +[212] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. Advances in Neural Information Processing Systems 30 (2017). +[213] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, Yoshua Bengio, et al. 2017. Graph Attention Networks. arXiv preprint arXiv:1710.10903 (2017). +[214] Arthur Vervaet, Raja Chiky, and Mar Callau-Zori. 2021. USTEP: Unfixed Search Tree for Efficient Log Parsing. In Proceedings of the 2021 IEEE International Conference on Data Mining. IEEE, 659-668. +[215] David Wagner and Paolo Soto. 2002. Mimicry Attacks on Host-Based Intrusion Detection Systems. In Proceedings of the 9th ACM Conference on Computer and Communications Security. 255-264. +[216] Qi Wang, Wajih Ul Hassan, Ding Li, Kangkook Jee, Xiao Yu, Kexuan Zou, Junghwan Rhee, Zhengzhang Chen, Wei Cheng, Carl A Gunter, et al. 2020. You Are What You Do: Hunting Stealthy Malware via Data Provenance Analysis. In Proceedings of the Network and Distributed System Security Symposium. 
+[217] Rui Wang, Devin Gibson, Kirk Rodrigues, Yu Luo, Yun Zhang, Kaibo Wang, Yupeng Fu, Ting Chen, and Ding Yuan. 2024. μSlope: High Compression and Fast Search on Semi-Structured Logs. In Proceedings of the 18th USENIX Symposium on Operating Systems Design and Implementation. 529-544. +[218] Ruihua Wang, Yihao Peng, Yilun Sun, Xuancheng Zhang, Hai Wan, and Xibin Zhao. 2023. TeSec: Accurate Server-Side Attack Investigation for Web Applications. In Proceedings of the 2023 IEEE Symposium on Security and Privacy. IEEE, 2799-2816. +[219] Su Wang, Zhiliang Wang, Tao Zhou, Hongbin Sun, Xia Yin, Dongqi Han, Han Zhang, Xingang Shi, and Jiahai Yang. 2022. threaTrace: Detecting and Tracing Host-Based Threats in Node Level Through Provenance Graph Learning. IEEE Transactions on Information Forensics and Security 17 (2022), 3972-3987. +[220] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022. Emergent Abilities of Large Language Models. arXiv preprint arXiv:2206.07682 (2022). +[221] Wei Wei, Sijin Chen, Cen Chen, Heshi Wang, Jing Liu, Zhongyao Cheng, and Xiaofeng Zou. 2024. HEN: A Novel Hybrid Explainable Neural Network Based Framework for Robust Network Intrusion Detection. Science China Information Sciences 67, 7 (2024), 170304. +[222] Cong Wu, Jianfei Sun, Jing Chen, Mamoun Alazab, Yang Liu, and Yang Xiang. 2025. TCG-IDS: Robust Network Intrusion Detection via Temporal Contrastive Graph Learning. IEEE Transactions on Information Forensics and Security (2025). +[223] Weiheng Wu, Wei Qiao, Teng Li, Yebo Feng, Zhuo Ma, Jianfeng Ma, and Yang Liu. 2025. ProvX: Generating Counterfactual-Driven Attack Explanations for Provenance-Based Detection. arXiv preprint arXiv:2508.06073 (2025). +[224] Yafeng Wu, Yulai Xie, Xuelong Liao, Pan Zhou, Dan Feng, Lin Wu, Xuan Li, Avani Wildani, and Darrell Long. 2022. 
Paradise: Real-Time, Generalized, and Distributed Provenance-Based Intrusion Detection. IEEE Transactions on Dependable and Secure Computing 20, 2 (2022), 1624-1640. +[225] Yixuan Wu, Long Zhang, Lin Yang, Feng Yang, Linru Ma, Zhoumin Lu, and Wen Jiang. 2025. Intrusion Detection for Internet of Things: An Anchor Graph Clustering Approach. IEEE Transactions on Information Forensics and Security (2025). +[226] Tong Xiao, Zhe Quan, Zhi-Jie Wang, Kaiqi Zhao, Xiangke Liao, Huang Huang, Yunfei Du, and Kenli Li. 2023. LPV: A Log Parsing Framework Based on Vectorization. IEEE Transactions on Network and Service Management 20, 3 (2023), 2711-2725. +[227] Yulai Xie, Dan Feng, Yuchong Hu, Yan Li, Staunton Sample, and Darrell Long. 2018. Pagoda: A Hybrid Approach to Enable Efficient Real-Time Provenance Based Intrusion Detection in Big Data Environments. IEEE Transactions on Dependable and Secure Computing 17, 6 (2018), 1283-1296. +[228] Yulai Xie, Kiran-Kumar Muniswamy-Reddy, Darrell D. E. Long, Ahmed Amer, Dan Feng, and Zhipeng Tan. 2011. Compressing Provenance Graphs. In Proceedings of the 3rd USENIX Workshop on the Theory and Practice of Provenance. +[229] Junjielong Xu, Qiuai Fu, Zhouruixing Zhu, Yutong Cheng, Zhijing Li, Yuchi Ma, and Pinjia He. 2023. Hue: A User-Adaptive Parser for Hybrid Logs. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 413-424. +[230] Jiacen Xu, Xiaokui Shu, and Zhou Li. 2024. Understanding and Bridging the Gap between Unsupervised Network Representation Learning and Security Analytics. In Proceedings of the 2024 IEEE Symposium on Security and Privacy. IEEE, 3590-3608. + +[231] Wei Xu, Ling Huang, Armando Fox, David Patterson, and Michael I Jordan. 2009. Detecting Large-scale System Problems by Mining Console Logs. In Proceedings of the ACM SIGOPS 22nd Symposium on Operating Systems Principles. 117-132. 
+[232] Zhiqiang Xu, Pengcheng Fang, Changlin Liu, Xusheng Xiao, Yu Wen, and Dan Meng. 2022. DepComm: Graph Summarization on System Audit Logs for Attack Investigation. In Proceedings of the 2022 IEEE Symposium on Security and Privacy. IEEE, 540-557. +[233] Zhiwei Xu, Shaohua Qiang, Dinghong Song, Min Zhou, Hai Wan, Xibin Zhao, Ping Luo, and Hongyu Zhang. 2024. DSFM: Enhancing Functional Code Clone Detection with Deep Subtree Interactions. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering. 1-12. +[234] Zhang Xu, Zhenyu Wu, Zhichun Li, Kangkook Jee, Junghwan Rhee, Xusheng Xiao, Fengyuan Xu, Haining Wang, and Guofei Jiang. 2016. High Fidelity Data Reduction for Big Data Security Dependency Analyses. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. 504-516. +[235] Zhiwei Xu, Min Zhou, Xibin Zhao, Yang Chen, Xi Cheng, and Hongyu Zhang. 2023. xASTNN: Improved Code Representations for Industrial Practice. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 1727-1738. +[236] Yu Xue, Bernard-Marie Onzo, and Ferrante Neri. 2021. Intrusion Detection System Based on an Updated ANN Model. In Advances in Swarm Intelligence: 12th International Conference, ICSI 2021, Qingdao, China, July 17-21, 2021, Proceedings, Part II 12. Springer, 472-479. +[237] Fan Yang, Jiacen Xu, Chunlin Xiong, Zhou Li, and Kehuan Zhang. 2023. ProGrapher: An Anomaly Detection System based on Provenance Graph Embedding. In Proceedings of the 32nd USENIX Security Symposium. 4355-4372. +[238] Lin Yang, Junjie Chen, Zan Wang, Weijing Wang, Jiajun Jiang, Xuyuan Dong, and Wenbin Zhang. 2021. Semi-Supervised Log-Based Anomaly Detection via Probabilistic Label Estimation. In Proceedings of the 2021 IEEE/ACM 43rd International Conference on Software Engineering. IEEE, 1448-1460. +[239] Runqing Yang, Shiqing Ma, Haitao Xu, Xiangyu Zhang, and Yan Chen. 
2020. UIScope: Accurate, Instrumentation-free, and Visible Attack Investigation for GUI Applications. In Proceedings of the Network and Distributed System Security Symposium. +[240] Zhaohui Yang, Wei Xu, Le Liang, Yuanhao Cui, Zhijin Qin, and Mérouane Debbah. 2025. On Privacy, Security, and Trustworthiness in Distributed Wireless Large AI Models. Science China Information Sciences 68, 7 (2025), 1-15. +[241] Jia-Yu Yao, Kun-Peng Ning, Zhen-Hui Liu, Mu-Nan Ning, Yu-Yang Liu, and Li Yuan. 2023. LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples. arXiv preprint arXiv:2310.01469 (2023). +[242] Kundi Yao, Heng Li, Weiyi Shang, and Ahmed E Hassan. 2020. A Study of the Performance of General Compressors on Log Files. Empirical Software Engineering 25 (2020), 3043-3085. +[243] Kundi Yao, Mohammed Sayagh, Weiyi Shang, and Ahmed E Hassan. 2021. Improving State-of-the-Art Compression Techniques for Log Management Tools. IEEE Transactions on Software Engineering 48, 8 (2021), 2748-2760. +[244] Yifan Yao, Jinhao Duan, Kaidi Xu, Yuanfang Cai, Zhibo Sun, and Yue Zhang. 2024. A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and the Ugly. High-Confidence Computing (2024), 100211. +[245] Heng Yin, Dawn Song, Manuel Egele, Christopher Kruegel, and Engin Kirda. 2007. Panorama: Capturing System-Wide Information Flow for Malware Detection and Analysis. In Proceedings of the 14th ACM Conference on Computer and Communications Security. 116-127. +[246] Kun Yin, Meng Yan, Ling Xu, Zhou Xu, Zhao Li, Dan Yang, and Xiaohong Zhang. 2020. Improving Log-Based Anomaly Detection with Component-Aware Analysis. In Proceedings of the 2020 IEEE International Conference on Software Maintenance and Evolution. IEEE, 667-671. +[247] Guangba Yu, Pengfei Chen, Pairui Li, Tianjun Weng, Haibing Zheng, Yuetang Deng, and Zibin Zheng. 2023. LogReducer: Identify and Reduce Log Hotspots in Kernel on the Fly. 
In Proceedings of the 2023 IEEE/ACM 45th International Conference on Software Engineering. IEEE, 1763-1775. +[248] Le Yu, Shiqing Ma, Zhuo Zhang, Guanhong Tao, Xiangyu Zhang, Dongyan Xu, Vincent E Urias, Han Wei Lin, Gabriela F Ciocarlie, Vinod Yegneswaran, et al. 2021. ALchemist: Fusing Application and Audit Logs for Precise Attack Provenance without Instrumentation. In Proceedings of the Network and Distributed System Security Symposium. +[249] Siyu Yu, Yifan Wu, Ying Li, and Pinjia He. 2024. Unlocking the Power of Numbers: Log Compression via Numeric Token Parsing. In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering. 919-930. +[250] Jun Zeng, Xiang Wang, Jiahao Liu, Yinfang Chen, Zhenkai Liang, Tat-Seng Chua, and Zheng Leong Chua. 2022. ShadeWatcher: Recommendation-Guided Cyber Threat Analysis Using System Audit Records. In Proceedings of the 2022 IEEE Symposium on Security and Privacy. IEEE, 489-506. +[251] Chao Zha, Zhiyu Wang, Yifei Fan, Bing Bai, Yinjie Zhang, Sainan Shi, and Ruyun Zhang. 2025. A-NIDS: Adaptive Network Intrusion Detection System based on Clustering and Stacked CTGAN. IEEE Transactions on Information Forensics and Security (2025). + +[252] Bo Zhang, Yansong Gao, Changlong Yu, Boyu Kuang, Zhi Zhang, Hyoungshick Kim, and Anmin Fu. 2025. TAPAS: An Efficient Online APT Detection with Task-guided Process Provenance Graph Segmentation and Analysis. In Proceedings of the USENIX Security Symposium. 607-624. +[253] Pei Zhang, Fangzhou He, Han Zhang, Jiankun Hu, Xiaohong Huang, Jilong Wang, Xia Yin, Huahong Zhu, and Yahui Li. 2023. Real-Time Malicious Traffic Detection with Online Isolation Forest over SD-WAN. IEEE Transactions on Information Forensics and Security 18 (2023), 2076-2090. +[254] Shenglin Zhang, Yuhe Ji, Jiaqi Luan, Xiaohui Nie, Ziang Chen, Minghua Ma, Yongqian Sun, and Dan Pei. 2024. End-to-End AutoML for Unsupervised Log Anomaly Detection. 
In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering. 1680-1692. +[255] Tianzhu Zhang, Han Qiu, Gabriele Castellano, Myriana Rifai, Chung Shue Chen, and Fabio Pianese. 2023. System Log Parsing: A Survey. IEEE Transactions on Knowledge and Data Engineering 35, 8 (2023), 8596-8614. https://doi.org/10.1109/TKDE.2022.3222417 +[256] Tianye Zhang, Xumeng Wang, Zongzhuang Li, Fangzhou Guo, Yuxin Ma, and Wei Chen. 2017. A Survey of Network Anomaly Visualization. Science China Information Sciences 60, 12 (2017), 121101. +[257] Xu Zhang, Yong Xu, Qingwei Lin, Bo Qiao, Hongyu Zhang, Yingnong Dang, Chunyu Xie, Xinsheng Yang, Qian Cheng, Ze Li, et al. 2019. Robust Log-Based Anomaly Detection on Unstable Log Data. In Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 807-817. +[258] Huaqin Zhao, Zhengliang Liu, Zihao Wu, Yiwei Li, Tianze Yang, Peng Shu, Shaochen Xu, Haixing Dai, Lin Zhao, Gengchen Mai, et al. 2024. Revolutionizing Finance with LLMs: An Overview of Applications and Insights. arXiv preprint arXiv:2401.11641 (2024). +[259] Jianjin Zhao, Qi Li, Zewei Han, Junsong Fu, Guoshun Nan, Meng Shen, and Bharat K Bhargava. 2024. ReTrial: Robust Encrypted Malicious Traffic Detection via Discriminative Relation Incorporation and Misleading Relation Correction. IEEE Transactions on Information Forensics and Security (2024). +[260] Ruijie Zhao, Xianwen Deng, Zhicong Yan, Jun Ma, Zhi Xue, and Yijun Wang. 2022. MT-FlowFormer: A Semi-Supervised Flow Transformer for Encrypted Traffic Classification. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2576-2584. +[261] Ying Zhao, FangFang Zhou, XiaoPing Fan, Xing Liang, and YongGang Liu. 2013. IDSRadar: A Real-Time Visualization Framework for IDS Alerts. Science China Information Sciences 56, 8 (2013), 1-12. 
+[262] Ziming Zhao, Zhaoxuan Li, Jialun Jiang, Fengyuan Yu, Fan Zhang, Congyuan Xu, Xinjie Zhao, Rui Zhang, and Shize Guo. 2022. ERNN: Error-Resilient RNN for Encrypted Traffic Detection Towards Network-Induced Phenomena. IEEE Transactions on Dependable and Secure Computing (2022). +[263] Ziming Zhao, Zhuotao Liu, Huan Chen, Fan Zhang, Zhuoxue Song, and Zhaoxuan Li. 2024. Effective DDoS Mitigation via ML-Driven In-Network Traffic Shaping. IEEE Transactions on Dependable and Secure Computing 21, 4 (2024), 4271-4289. +[264] Ying Zhong, Zhiliang Wang, Xingang Shi, Jiahai Yang, and Keqin Li. 2024. RFG-HELAD: A Robust Fine-Grained Network Traffic Anomaly Detection Model Based on Heterogeneous Ensemble Learning. IEEE Transactions on Information Forensics and Security (2024). +[265] Junwei Zhou, Shaowen Ying, Shulan Wang, Dongdong Zhao, Jianwen Xiang, Kaitai Liang, and Peng Liu. 2025. LogDLR: Unsupervised Cross-System Log Anomaly Detection Through Domain-Invariant Latent Representation. IEEE Transactions on Dependable and Secure Computing (2025). +[266] Jieming Zhu, Shilin He, Pinjia He, Jinyang Liu, and Michael R Lyu. 2023. Loghub: A Large Collection of System Log Datasets for AI-Driven Log Analytics. In Proceedings of the 2023 IEEE 34th International Symposium on Software Reliability Engineering. IEEE, 355-366. +[267] Tiantian Zhu, Jiayu Wang, Linqi Ruan, Chunlin Xiong, Jinkai Yu, Yaosheng Li, Yan Chen, Mingqi Lv, and Tieming Chen. 2021. General, Efficient, and Real-Time Data Compaction Strategy for APT Forensic Analysis. IEEE Transactions on Information Forensics and Security 16 (2021), 3312-3325. +[268] Tiantian Zhu, Jinkai Yu, Chunlin Xiong, Wenrui Cheng, Qixuan Yuan, Jie Ying, Tieming Chen, Jiabo Zhang, Mingqi Lv, Yan Chen, et al. 2023. APTSHIELD: A Stable, Efficient and Real-time APT Detection System for Linux Hosts. IEEE Transactions on Dependable and Secure Computing 20, 6 (2023), 5247-5264. +[269] Yao Zhu, Zhenyuan Li, Yangyang Wei, and Shouling Ji. 2025. 
The Case for Learned Provenance-based System Behavior Baseline. In Forty-second International Conference on Machine Learning. +[270] Michael Zipperle, Florian Gottwalt, Elizabeth Chang, and Tharam S. Dillon. 2022. Provenance-based Intrusion Detection Systems: A Survey. ACM Computing Surveys 55 (2022), 1-36. https://api.semanticscholar.org/CorpusID:249579087 \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07839/images/0a3a671b99c38b8ebc4032a6fcaa55adab684f519233bcac83c7d147cbdd5f40.jpg b/data/2025/2504_07xxx/2504.07839/images/0a3a671b99c38b8ebc4032a6fcaa55adab684f519233bcac83c7d147cbdd5f40.jpg new file mode 100644 index 0000000000000000000000000000000000000000..576b2379c00a6f0f80e34c58eba0b8abe6c914a5 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07839/images/0a3a671b99c38b8ebc4032a6fcaa55adab684f519233bcac83c7d147cbdd5f40.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1e02360b721e54abf0112aa97e658bb5e6f17e09c72dffb38ec2b4de4d9b7ae2 +size 24894 diff --git a/data/2025/2504_07xxx/2504.07839/images/12bf721c67f5ee3bf4f30ea97ecba8aaa579e91d4838ce00894cb8540fa17426.jpg b/data/2025/2504_07xxx/2504.07839/images/12bf721c67f5ee3bf4f30ea97ecba8aaa579e91d4838ce00894cb8540fa17426.jpg new file mode 100644 index 0000000000000000000000000000000000000000..85855b212dd6bc6a838aa14a67276f6176a4cd24 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07839/images/12bf721c67f5ee3bf4f30ea97ecba8aaa579e91d4838ce00894cb8540fa17426.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f0841b2a95deb7dcc5079f56792bb6b2eb342e5d543120100a498cd40581df16 +size 259116 diff --git a/data/2025/2504_07xxx/2504.07839/images/2b523136b335e2c501d72edce3212459da5c2cf2b38df4681b670950b0f1a8f2.jpg b/data/2025/2504_07xxx/2504.07839/images/2b523136b335e2c501d72edce3212459da5c2cf2b38df4681b670950b0f1a8f2.jpg new file mode 100644 index 0000000000000000000000000000000000000000..2fd30f41bd987b7805e158ada7e1bdc1802abcd4 --- /dev/null +++ 
b/data/2025/2504_07xxx/2504.07839/images/2b523136b335e2c501d72edce3212459da5c2cf2b38df4681b670950b0f1a8f2.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cb36abbac44f6a826a6acdd49cf13bba681ca5ae3a46f70f9a6c32b4bb355da6 +size 80657 diff --git a/data/2025/2504_07xxx/2504.07839/images/38a51c03212e981f824eb90d45503951c547858345408e45db6fc22f829de565.jpg b/data/2025/2504_07xxx/2504.07839/images/38a51c03212e981f824eb90d45503951c547858345408e45db6fc22f829de565.jpg new file mode 100644 index 0000000000000000000000000000000000000000..05f46c8c6f0c8b9a03dd9cdf4c83e28a1eca57e0 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07839/images/38a51c03212e981f824eb90d45503951c547858345408e45db6fc22f829de565.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d8c98c98f33af8f2db592cbc248e4875d980b57408784b2a0f0dc1f1c551bd9 +size 28574 diff --git a/data/2025/2504_07xxx/2504.07839/images/51f508f9743f58eee7775f97202b0c04cec2698458e605ca57003fe41af027ad.jpg b/data/2025/2504_07xxx/2504.07839/images/51f508f9743f58eee7775f97202b0c04cec2698458e605ca57003fe41af027ad.jpg new file mode 100644 index 0000000000000000000000000000000000000000..ea627da96af758f865452a1cc5ed6b5a897d50f8 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07839/images/51f508f9743f58eee7775f97202b0c04cec2698458e605ca57003fe41af027ad.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a12d2e5ff9baf4ac0d8457a7dc607b287a77633ad40cf0f91335935238a3fbab +size 39593 diff --git a/data/2025/2504_07xxx/2504.07839/images/56aa40c700210b6d12b351836313889a9aa1ee9637de6412e6be03e25f4a6f0e.jpg b/data/2025/2504_07xxx/2504.07839/images/56aa40c700210b6d12b351836313889a9aa1ee9637de6412e6be03e25f4a6f0e.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f9a7f48ab4636ed78fbf99c443536886962b5e48 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07839/images/56aa40c700210b6d12b351836313889a9aa1ee9637de6412e6be03e25f4a6f0e.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:5d093102af4498dfcc60d569b5bfe166c7af4f54126590cefe6d83641fbfa509 +size 42477 diff --git a/data/2025/2504_07xxx/2504.07839/images/56e153ae6819bf89b12e886fd61914b1384a3f85184e624d1b7af714ffa21642.jpg b/data/2025/2504_07xxx/2504.07839/images/56e153ae6819bf89b12e886fd61914b1384a3f85184e624d1b7af714ffa21642.jpg new file mode 100644 index 0000000000000000000000000000000000000000..7d8435585a8ee8beeb5956ee89cad5273feb725a --- /dev/null +++ b/data/2025/2504_07xxx/2504.07839/images/56e153ae6819bf89b12e886fd61914b1384a3f85184e624d1b7af714ffa21642.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ba5263cbe04585d6f4eb643c8ab30ca867a9a1bf900c3baf6054050209131b0 +size 76681 diff --git a/data/2025/2504_07xxx/2504.07839/images/7baacdfa9d3f131212e2cfa60a6a47974c5e8cc2cb426db45d3e0e1e40f66bc0.jpg b/data/2025/2504_07xxx/2504.07839/images/7baacdfa9d3f131212e2cfa60a6a47974c5e8cc2cb426db45d3e0e1e40f66bc0.jpg new file mode 100644 index 0000000000000000000000000000000000000000..c6735a2e51f079ba74eb0855dd0a6c8737804935 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07839/images/7baacdfa9d3f131212e2cfa60a6a47974c5e8cc2cb426db45d3e0e1e40f66bc0.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2f9c61930801eb6d60b83c7226a3dc242ed64c4a9f8d07e8b72f8c3b71fa9ec +size 105841 diff --git a/data/2025/2504_07xxx/2504.07839/images/9040e5a3f950e1068d51ac6479bef0d55230b78324d6f4f477c0fa04f8c2b271.jpg b/data/2025/2504_07xxx/2504.07839/images/9040e5a3f950e1068d51ac6479bef0d55230b78324d6f4f477c0fa04f8c2b271.jpg new file mode 100644 index 0000000000000000000000000000000000000000..449a9251c80c8c082986ff5baf52755da71b7e82 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07839/images/9040e5a3f950e1068d51ac6479bef0d55230b78324d6f4f477c0fa04f8c2b271.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:479933adf389ca4ba526d945bed5c764f03769da788509ebc427631a93829964 +size 29708 diff --git 
a/data/2025/2504_07xxx/2504.07839/images/c488b92b5c3650228849285903411373eee7c627918235cebb15b24e5f35b476.jpg b/data/2025/2504_07xxx/2504.07839/images/c488b92b5c3650228849285903411373eee7c627918235cebb15b24e5f35b476.jpg new file mode 100644 index 0000000000000000000000000000000000000000..7eb2f851738b3d458e1f868c5ed156c9681364c7 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07839/images/c488b92b5c3650228849285903411373eee7c627918235cebb15b24e5f35b476.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b9307190892bf896bf4344ffd116e0d573ca7d0237ae9c07974ae1d4658038c +size 24870 diff --git a/data/2025/2504_07xxx/2504.07839/images/d77f3601530f6283f668e7c2a7916f80f8b6049a2d4b3f3fdea4dbac64ee1bf2.jpg b/data/2025/2504_07xxx/2504.07839/images/d77f3601530f6283f668e7c2a7916f80f8b6049a2d4b3f3fdea4dbac64ee1bf2.jpg new file mode 100644 index 0000000000000000000000000000000000000000..1a71b875a9094ef1cbb9cc8a5e96fefcebaeafdf --- /dev/null +++ b/data/2025/2504_07xxx/2504.07839/images/d77f3601530f6283f668e7c2a7916f80f8b6049a2d4b3f3fdea4dbac64ee1bf2.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:21c92fa329bb3944b992366d6e230ab4c3f1da057f92295a9d48a5f0a35b458d +size 107859 diff --git a/data/2025/2504_07xxx/2504.07839/images/dcec85fafc03c55d917b54a234d99e02c0338d0d6ed1ae0535780c1185341cbd.jpg b/data/2025/2504_07xxx/2504.07839/images/dcec85fafc03c55d917b54a234d99e02c0338d0d6ed1ae0535780c1185341cbd.jpg new file mode 100644 index 0000000000000000000000000000000000000000..83275b731d13ad42790c137da3bfe3dac7bd8a76 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07839/images/dcec85fafc03c55d917b54a234d99e02c0338d0d6ed1ae0535780c1185341cbd.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:98c2376ddbc6dd6582012985237d948921b8c3552c1bbce05de2c9d8ad93835c +size 76149 diff --git a/data/2025/2504_07xxx/2504.07839/images/e247a0d348b36b7d21437e7121af02634601f140eb5eb301754a9955423acc68.jpg 
b/data/2025/2504_07xxx/2504.07839/images/e247a0d348b36b7d21437e7121af02634601f140eb5eb301754a9955423acc68.jpg new file mode 100644 index 0000000000000000000000000000000000000000..0ce659e5c7a48f8c993a30500651ab16ec468e7c --- /dev/null +++ b/data/2025/2504_07xxx/2504.07839/images/e247a0d348b36b7d21437e7121af02634601f140eb5eb301754a9955423acc68.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9ad5b1aaea703b92473643f65c502c0da8eeaeb9ad3c2acde7970da828de1de7 +size 30026 diff --git a/data/2025/2504_07xxx/2504.07839/images/fe18d2ce14f1f4df61f7c7755a7441ee7d26f7ed02d5dc381f022683391478c8.jpg b/data/2025/2504_07xxx/2504.07839/images/fe18d2ce14f1f4df61f7c7755a7441ee7d26f7ed02d5dc381f022683391478c8.jpg new file mode 100644 index 0000000000000000000000000000000000000000..486d6b8dd7dbba6636fc82f1da6ce96eb5814870 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07839/images/fe18d2ce14f1f4df61f7c7755a7441ee7d26f7ed02d5dc381f022683391478c8.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7486f4e282edb9cdf7233be5a3ea4b17d91616dacfbaf896bbbd2c6d0469ed74 +size 69073 diff --git a/data/2025/2504_07xxx/2504.07839/layout.json b/data/2025/2504_07xxx/2504.07839/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..c352df2a033c09f8a89e90c2a556710e5f617497 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07839/layout.json @@ -0,0 +1,21908 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 43, + 82, + 439, + 98 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 82, + 439, + 98 + ], + "spans": [ + { + "bbox": [ + 43, + 82, + 439, + 98 + ], + "type": "text", + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 42, + 108, + 441, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 108, + 441, + 133 + ], + "spans": [ + { + "bbox": [ + 42, + 108, + 441, + 133 + ], + "type": "text", + 
"content": "ZHIWEI XU, YUJUAN WU, SHIHENG WANG, JIABAO GAO, TIAN QIU, ZIQI WANG, HAI WAN, and XIBIN ZHAO*, KLISS, BNRist, School of Software, Tsinghua University, China" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 42, + 139, + 442, + 249 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 139, + 442, + 249 + ], + "spans": [ + { + "bbox": [ + 42, + 139, + 442, + 249 + ], + "type": "text", + "content": "Intrusion Detection Systems (IDS) have long been a hot topic in the cybersecurity community. In recent years, with the introduction of deep learning (DL) techniques, IDS have made great progress due to their increasing generalizability. The rationale behind this is that by learning the underlying patterns of known system behaviors, IDS detection can be generalized to intrusions that exploit zero-day vulnerabilities. In this survey, we refer to this type of IDS as DL-based IDS (DL-IDS). From the perspective of DL, this survey systematically reviews all the stages of DL-IDS, including data collection, log storage, log parsing, graph summarization, attack detection, and attack investigation. To accommodate current researchers, a section describing the publicly available benchmark datasets is included. This survey further discusses current challenges and potential future research directions, aiming to help researchers understand the basic ideas and visions of DL-IDS research, as well as to motivate their research interests." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 42, + 254, + 441, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 254, + 441, + 277 + ], + "spans": [ + { + "bbox": [ + 42, + 254, + 441, + 277 + ], + "type": "text", + "content": "CCS Concepts: " + }, + { + "bbox": [ + 42, + 254, + 441, + 277 + ], + "type": "inline_equation", + "content": "\\cdot" + }, + { + "bbox": [ + 42, + 254, + 441, + 277 + ], + "type": "text", + "content": " Security and privacy " + }, + { + "bbox": [ + 42, + 254, + 441, + 277 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 42, + 254, + 441, + 277 + ], + "type": "text", + "content": " Intrusion detection systems; " + }, + { + "bbox": [ + 42, + 254, + 441, + 277 + ], + "type": "inline_equation", + "content": "\\cdot" + }, + { + "bbox": [ + 42, + 254, + 441, + 277 + ], + "type": "text", + "content": " Computing methodologies " + }, + { + "bbox": [ + 42, + 254, + 441, + 277 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 42, + 254, + 441, + 277 + ], + "type": "text", + "content": " Machine learning; " + }, + { + "bbox": [ + 42, + 254, + 441, + 277 + ], + "type": "inline_equation", + "content": "\\cdot" + }, + { + "bbox": [ + 42, + 254, + 441, + 277 + ], + "type": "text", + "content": " General and reference " + }, + { + "bbox": [ + 42, + 254, + 441, + 277 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 42, + 254, + 441, + 277 + ], + "type": "text", + "content": " Surveys and overviews." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 42, + 280, + 361, + 292 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 280, + 361, + 292 + ], + "spans": [ + { + "bbox": [ + 42, + 280, + 361, + 292 + ], + "type": "text", + "content": "Additional Key Words and Phrases: Intrusion detection systems, deep learning, survey" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 43, + 295, + 144, + 305 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 295, + 144, + 305 + ], + "spans": [ + { + "bbox": [ + 43, + 295, + 144, + 305 + ], + "type": "text", + "content": "ACM Reference Format:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 42, + 306, + 442, + 329 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 306, + 442, + 329 + ], + "spans": [ + { + "bbox": [ + 42, + 306, + 442, + 329 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao. 2025. Deep Learning-based Intrusion Detection Systems: A Survey. J. ACM 1, 1, Article 1 (October 2025), 38 pages." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 43, + 337, + 139, + 348 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 337, + 139, + 348 + ], + "spans": [ + { + "bbox": [ + 43, + 337, + 139, + 348 + ], + "type": "text", + "content": "1 INTRODUCTION" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 42, + 352, + 442, + 424 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 352, + 442, + 424 + ], + "spans": [ + { + "bbox": [ + 42, + 352, + 442, + 424 + ], + "type": "text", + "content": "The promising Internet of Everything connects people, processes, data, and things through the Internet [51], bringing convenience and efficiency to the world. Yet its inevitable security vulnerabilities could be exploited by deliberate attackers. 
With increasingly sophisticated attack methods such as Advanced Persistent Threat (APT), the attackers are in a threatening position to sabotage network systems or steal sensitive data. The detection of intrusions, particularly based on DL, has consequently been a prominent topic in the cybersecurity community." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 42, + 424, + 442, + 520 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 424, + 442, + 520 + ], + "spans": [ + { + "bbox": [ + 42, + 424, + 442, + 520 + ], + "type": "text", + "content": "The automated system for detecting intrusions is known as IDS. The limitations of IDS may result in terrible damage to enterprises. One example is the recent Colonial Pipeline Ransomware Attack [16]. In April 2021, the hacking group DarkSide launched a ransomware attack on Colonial Pipeline, the biggest oil pipeline company in the United States, using an unused VPN account. Due to this attack, 5,500 miles of transportation pipelines were forced to shut down, affecting nearly " + }, + { + "bbox": [ + 42, + 424, + 442, + 520 + ], + "type": "inline_equation", + "content": "45\\%" + }, + { + "bbox": [ + 42, + 424, + 442, + 520 + ], + "type": "text", + "content": " of the fuel supply on the Eastern Coast. The Colonial Pipeline paid $4.4 million ransom money, in addition to the theft of over 100 GB of data. If the malware intrusion can be detected in time, the influence of this attack can be greatly mitigated or even eliminated." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 43, + 529, + 273, + 541 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 529, + 273, + 541 + ], + "spans": [ + { + "bbox": [ + 43, + 529, + 273, + 541 + ], + "type": "text", + "content": "1.1 Tough but Bright Intrusion Detection System" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 42, + 544, + 441, + 569 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 544, + 441, + 569 + ], + "spans": [ + { + "bbox": [ + 42, + 544, + 441, + 569 + ], + "type": "text", + "content": "IDS have been increasingly challenged to effectively deal with intrusions for decades. It is noted in Figure 1(a) that the number of " + }, + { + "bbox": [ + 42, + 544, + 441, + 569 + ], + "type": "inline_equation", + "content": "\\mathrm{CVE}^1" + }, + { + "bbox": [ + 42, + 544, + 441, + 569 + ], + "type": "text", + "content": " records has presented an accelerating uptrend, especially" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 15, + 184, + 35, + 533 + ], + "type": "aside_text", + "angle": 270, + "lines": [ + { + "bbox": [ + 15, + 184, + 35, + 533 + ], + "spans": [ + { + "bbox": [ + 15, + 184, + 35, + 533 + ], + "type": "text", + "content": "arXiv:2504.07839v3 [cs.CR] 13 Oct 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 42, + 574, + 179, + 584 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 574, + 179, + 584 + ], + "spans": [ + { + "bbox": [ + 42, + 574, + 179, + 584 + ], + "type": "text", + "content": "*Xibin Zhao is the corresponding author." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 42, + 584, + 440, + 604 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 584, + 440, + 604 + ], + "spans": [ + { + "bbox": [ + 42, + 584, + 440, + 604 + ], + "type": "text", + "content": "1Common Vulnerabilities and Exposures (CVE) is a security project for security information sharing and vulnerability management. CVE is a publicly accessible database where each vulnerability has a common name and a unique identifier." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 42, + 611, + 441, + 631 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 611, + 441, + 631 + ], + "spans": [ + { + "bbox": [ + 42, + 611, + 441, + 631 + ], + "type": "text", + "content": "Authors' address: Zhiwei Xu; Yujuan Wu; Shiheng Wang; Jiabao Gao; Tian Qiu; Ziqi Wang; Hai Wan; Xibin Zhao, KLISS, BNRist, School of Software, Tsinghua University, Beijing, China, zxb@tsinghua.edu.cn." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 43, + 638, + 164, + 648 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 638, + 164, + 648 + ], + "spans": [ + { + "bbox": [ + 43, + 638, + 164, + 648 + ], + "type": "text", + "content": "2025.ACM 0004-5411/2025/10-ART1" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 43, + 648, + 171, + 658 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 648, + 171, + 658 + ], + "spans": [ + { + "bbox": [ + 43, + 648, + 171, + 658 + ], + "type": "text", + "content": "https://doi.org/XXXXXXXXXXXXXXXXXX" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "spans": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 47, + 101, + 236, + 214 + ], + "blocks": [ + { + "bbox": [ + 47, + 101, + 236, + 214 + ], + "lines": [ + { + "bbox": [ + 47, + 101, + 236, + 214 + ], + "spans": [ + { + "bbox": [ + 47, + 101, + 236, + 214 + ], + "type": "image", + "image_path": "0a3a671b99c38b8ebc4032a6fcaa55adab684f519233bcac83c7d147cbdd5f40.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 79, + 223, + 216, + 234 + ], + "lines": [ + { + "bbox": [ + 79, + 223, + 216, + 234 + ], + "spans": [ + { + "bbox": [ + 79, + 223, + 216, + 234 + ], + "type": "text", + "content": "(a) Trend of CVE records and IDS papers." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 183, + 249, + 300, + 260 + ], + "lines": [ + { + "bbox": [ + 183, + 249, + 300, + 260 + ], + "spans": [ + { + "bbox": [ + 183, + 249, + 300, + 260 + ], + "type": "text", + "content": "Fig. 1. Recent situation of IDS." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 260, + 94, + 433, + 208 + ], + "blocks": [ + { + "bbox": [ + 260, + 94, + 433, + 208 + ], + "lines": [ + { + "bbox": [ + 260, + 94, + 433, + 208 + ], + "spans": [ + { + "bbox": [ + 260, + 94, + 433, + 208 + ], + "type": "image", + "image_path": "38a51c03212e981f824eb90d45503951c547858345408e45db6fc22f829de565.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 279, + 223, + 411, + 234 + ], + "lines": [ + { + "bbox": [ + 279, + 223, + 411, + 234 + ], + "spans": [ + { + "bbox": [ + 279, + 223, + 411, + 234 + ], + "type": "text", + "content": "(b) Category of CNNVD vulnerabilities." 
+ } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 42, + 278, + 441, + 337 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 278, + 441, + 337 + ], + "spans": [ + { + "bbox": [ + 42, + 278, + 441, + 337 + ], + "type": "text", + "content": "in 2016, which suffered a sharp rise. After 2016, the number of CVE records stays growing at a high speed, reaching around 30,000 in 2024. Besides, according to the " + }, + { + "bbox": [ + 42, + 278, + 441, + 337 + ], + "type": "inline_equation", + "content": "\\mathrm{CNNVD}^2" + }, + { + "bbox": [ + 42, + 278, + 441, + 337 + ], + "type": "text", + "content": " report shown in Figure 1(b), we can observe that almost all (i.e., " + }, + { + "bbox": [ + 42, + 278, + 441, + 337 + ], + "type": "inline_equation", + "content": "97.2\\%" + }, + { + "bbox": [ + 42, + 278, + 441, + 337 + ], + "type": "text", + "content": " ) vulnerabilities are medium risk or above, with high and critical risk accounting for " + }, + { + "bbox": [ + 42, + 278, + 441, + 337 + ], + "type": "inline_equation", + "content": "40\\%" + }, + { + "bbox": [ + 42, + 278, + 441, + 337 + ], + "type": "text", + "content": " of them. The growing number of vulnerabilities and the large percentage of high-risk vulnerabilities both reveal the tough situation faced by IDS." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 42, + 338, + 442, + 445 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 338, + 442, + 445 + ], + "spans": [ + { + "bbox": [ + 42, + 338, + 442, + 445 + ], + "type": "text", + "content": "Nevertheless, an interesting observation from Figure 1(a) is that, against the number of CVE records, DL-IDS papers also started to emerge in 2016 and their amount grew year by year subsequently. We can notably find that the growth trend of DL-IDS papers is nearly the same as that of CVE records. 
A likely reason is that DL offers an effective way for IDS to cope with their tough situation. Borrowing the strong generalizability of DL techniques, DL-IDS detection can be extended to zero-day intrusions that are almost impossible to detect with traditional IDS. Some studies [219, 237, 250] support this speculation. In their experiments, DL-IDS consistently achieve over " + }, + { + "bbox": [ + 42, + 338, + 442, + 445 + ], + "type": "inline_equation", + "content": "90\\%" + }, + { + "bbox": [ + 42, + 338, + 442, + 445 + ], + "type": "text", + "content": " detection accuracy, while traditional IDS sometimes reach only around " + }, + { + "bbox": [ + 42, + 338, + 442, + 445 + ], + "type": "inline_equation", + "content": "50\\%" + }, + { + "bbox": [ + 42, + 338, + 442, + 445 + ], + "type": "text", + "content": " detection accuracy." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 42, + 445, + 441, + 519 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 445, + 441, + 519 + ], + "spans": [ + { + "bbox": [ + 42, + 445, + 441, + 519 + ], + "type": "text", + "content": "The IDS future is not only tough but also bright with the aid of DL - it is evident that the growth in the number of IDS papers primarily comes from those based on DL techniques. The proportion of DL-IDS papers rises from about " + }, + { + "bbox": [ + 42, + 445, + 441, + 519 + ], + "type": "inline_equation", + "content": "0\\%" + }, + { + "bbox": [ + 42, + 445, + 441, + 519 + ], + "type": "text", + "content": " in 2016 to a very high " + }, + { + "bbox": [ + 42, + 445, + 441, + 519 + ], + "type": "inline_equation", + "content": "65.7\\%" + }, + { + "bbox": [ + 42, + 445, + 441, + 519 + ], + "type": "text", + "content": " in 2024. This phenomenon reflects the great interest and vision of the cybersecurity community in DL-IDS. 
To date, DL-IDS development has spanned almost a decade, and thus it is timely, and also essential, to revisit how DL and IDS interact, identify emerging trends, and guide future research directions." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 43, + 527, + 210, + 538 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 527, + 210, + 538 + ], + "spans": [ + { + "bbox": [ + 43, + 527, + 210, + 538 + ], + "type": "text", + "content": "1.2 Related Surveys and Our Scope" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 42, + 541, + 442, + 602 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 541, + 442, + 602 + ], + "spans": [ + { + "bbox": [ + 42, + 541, + 442, + 602 + ], + "type": "text", + "content": "Unfortunately, none of the related surveys in the last decade have systematically investigated DL-IDS. On one hand, some related surveys focus on only a few parts of DL-IDS, such as log parsers [138, 188, 255], datasets [201], attack modeling [10, 201], and specific DL technique types [17]. On the other hand, while several surveys [21, 83, 96, 105, 127, 128, 140, 150, 162, 163, 270] involve some DL-based approaches, they did not review DL-IDS particularly from the perspective of DL." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 42, + 607, + 441, + 632 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 607, + 441, + 632 + ], + "spans": [ + { + "bbox": [ + 42, + 607, + 441, + 632 + ], + "type": "text", + "content": "Partial Investigation for DL-IDS. The surveys [10, 138, 188, 201, 255] are typical examples of papers describing only a few parts of DL-IDS. Among them, Adel et al. 
[10] mainly studied various" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 61, + 55, + 69 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 61, + 55, + 69 + ], + "spans": [ + { + "bbox": [ + 44, + 61, + 55, + 69 + ], + "type": "text", + "content": "1:2" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 115, + 59, + 441, + 70 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 59, + 441, + 70 + ], + "spans": [ + { + "bbox": [ + 115, + 59, + 441, + 70 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 42, + 638, + 441, + 659 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 638, + 441, + 659 + ], + "spans": [ + { + "bbox": [ + 42, + 638, + 441, + 659 + ], + "type": "text", + "content": "2Chinese National Vulnerability Database (CNNVD) is a Chinese national database that catalogs security vulnerabilities in software and hardware products. CNNVD also provides unique identifiers and descriptions similar to CVE." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "spans": [ + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 42, + 84, + 442, + 182 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 84, + 442, + 182 + ], + "spans": [ + { + "bbox": [ + 42, + 84, + 442, + 182 + ], + "type": "text", + "content": "techniques and solutions that were tailored to APT attacks, and discussed how to make the APT detection framework smart. Scott et al. [138] and Tejaswini et al. [188] both discussed online log parsers and their applications for anomaly detection. Branka et al. [201] reviewed APT datasets and their creation, along with feature engineering in attack modeling. Zhang et al. [255] created an exhaustive taxonomy of system log parsers and empirically analyzed the critical performance and operational features of 17 open-source log parsers. Tristan et al. [17] focused on the applications of graph neural networks (GNNs) to IDS. For DL-IDS, all the above surveys are obviously insufficient to advance research understanding and provide theoretical suggestions." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 42, + 187, + 442, + 283 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 187, + 442, + 283 + ], + "spans": [ + { + "bbox": [ + 42, + 187, + 442, + 283 + ], + "type": "text", + "content": "Different Perspectives from DL-IDS. Another type of existing survey involved DL-IDS but studied them from other perspectives [4, 21, 83, 96, 105, 127, 128, 140, 150, 162, 163, 270]. Specifically, the surveys [105, 128] aim to give a detailed picture of IDS and comprehensively explain methods from signature checking to anomaly detection algorithms. Originating from log data, the survey [83] presented a detailed overview of automated log analysis for reliability engineering and introduced three tasks including anomaly detection, failure prediction, and failure diagnosis. In survey [162], Nasir et al. 
explored the efficacy of swarm intelligence on IDS and highlighted the corresponding challenges in multi-objective IDS problems." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 42, + 283, + 442, + 378 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 283, + 442, + 378 + ], + "spans": [ + { + "bbox": [ + 42, + 283, + 442, + 378 + ], + "type": "text", + "content": "Additionally, data types inspire and contribute significantly to the related surveys, whose categories include host-based IDS (HIDS) [21, 127, 140, 150, 270] and network-based IDS (NIDS) [4, 163]. Bridges et al. [21] focused on IDS leveraging host data for the enterprise network. Martins et al. [150] brought the HIDS concept to the Internet of Things. As a representative form of data in HIDS, the provenance graph [127, 140, 270] and its reduction techniques [96] were also extensively studied in the survey literature. In NIDS, Nassar et al. [163] studied the techniques of network intrusion detection, especially those with machine learning (ML). Ahmad et al. [4] further incorporated ML and DL into their NIDS survey and studied the downstream learning methods in detail." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 42, + 379, + 441, + 403 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 379, + 441, + 403 + ], + "spans": [ + { + "bbox": [ + 42, + 379, + 441, + 403 + ], + "type": "text", + "content": "The above surveys, however, lack investigation and discussion of DL-IDS. DL techniques are only what they cover or involve, rather than the primary focus of their research." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 42, + 409, + 442, + 470 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 409, + 442, + 470 + ], + "spans": [ + { + "bbox": [ + 42, + 409, + 442, + 470 + ], + "type": "text", + "content": "Scope of Our Survey. 
Our work distinguishes the related surveys by providing a comprehensive literature review of DL-IDS. From the perspective of DL, our survey elaborates on a common workflow of DL-IDS and introduces the corresponding taxonomies of all modules within this workflow. Moreover, our survey discusses the possible challenges and research visions for DL-IDS, which include many DL-related issues that have not yet been studied by the existing surveys." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 43, + 479, + 214, + 491 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 479, + 214, + 491 + ], + "spans": [ + { + "bbox": [ + 43, + 479, + 214, + 491 + ], + "type": "text", + "content": "1.3 Contributions and Organization" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 42, + 494, + 289, + 506 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 494, + 289, + 506 + ], + "spans": [ + { + "bbox": [ + 42, + 494, + 289, + 506 + ], + "type": "text", + "content": "In summary, this survey makes the following contributions:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 58, + 509, + 439, + 614 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 58, + 509, + 439, + 542 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 509, + 439, + 542 + ], + "spans": [ + { + "bbox": [ + 58, + 509, + 439, + 542 + ], + "type": "text", + "content": "- Realizing that IDS has made significant progress with the aid of DL over the last decade, we present a thorough survey for DL-IDS, formalizing its definition and clarifying its location among other types of IDS." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 58, + 545, + 439, + 578 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 545, + 439, + 578 + ], + "spans": [ + { + "bbox": [ + 58, + 545, + 439, + 578 + ], + "type": "text", + "content": "- We outline the common workflow for DL-IDS, consisting of the data management stage and the intrusion detection stage. We further systematically illustrate the research advances in all modules of this workflow and innovatively taxonomize the papers based on DL techniques." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 58, + 581, + 439, + 614 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 581, + 439, + 614 + ], + "spans": [ + { + "bbox": [ + 58, + 581, + 439, + 614 + ], + "type": "text", + "content": "- From the perspective of DL, we discuss the potential challenges and future directions for DL-IDS, especially highlighting those unique to DL-IDS, to guide current researchers." + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 42, + 623, + 442, + 659 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 623, + 442, + 659 + ], + "spans": [ + { + "bbox": [ + 42, + 623, + 442, + 659 + ], + "type": "text", + "content": "Survey Structure. Section 2 introduces the survey methodology of this work. Section 3 describes the background knowledge about DL-IDS. Section 4 and Section 5 elaborate the recent research trends in the data management stage and the intrusion detection stage, respectively. 
Section 6 illustrates" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "spans": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "type": "text", + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 430, + 61, + 441, + 69 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 430, + 61, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 430, + 61, + 441, + 69 + ], + "type": "text", + "content": "1:3" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "spans": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 56, + 90, + 286, + 203 + ], + "blocks": [ + { + "bbox": [ + 56, + 90, + 286, + 203 + ], + "lines": [ + { + "bbox": [ + 56, + 90, + 286, + 203 + ], + "spans": [ + { + "bbox": [ + 56, + 90, + 286, + 203 + ], + "type": "image", + "image_path": "51f508f9743f58eee7775f97202b0c04cec2698458e605ca57003fe41af027ad.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 95, + 210, + 245, + 221 + ], + "lines": [ + { + "bbox": [ + 95, + 210, + 245, + 221 + ], + "spans": [ + { + "bbox": [ + 95, + 210, + 245, + 221 + ], + "type": "text", + "content": "Fig. 2. Source distribution of references." 
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 303, + 94, + 430, + 195 + ], + "blocks": [ + { + "bbox": [ + 303, + 94, + 430, + 195 + ], + "lines": [ + { + "bbox": [ + 303, + 94, + 430, + 195 + ], + "spans": [ + { + "bbox": [ + 303, + 94, + 430, + 195 + ], + "type": "image", + "image_path": "c488b92b5c3650228849285903411373eee7c627918235cebb15b24e5f35b476.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 327, + 210, + 404, + 221 + ], + "lines": [ + { + "bbox": [ + 327, + 210, + 404, + 221 + ], + "spans": [ + { + "bbox": [ + 327, + 210, + 404, + 221 + ], + "type": "text", + "content": "Fig. 3. Types of IDS." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 42, + 244, + 441, + 269 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 244, + 441, + 269 + ], + "spans": [ + { + "bbox": [ + 42, + 244, + 441, + 269 + ], + "type": "text", + "content": "the benchmark datasets and their feature dimensions. Section 7 discusses the visions and challenges for future research. Lastly, the conclusion is presented in Section 8." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 43, + 279, + 183, + 290 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 279, + 183, + 290 + ], + "spans": [ + { + "bbox": [ + 43, + 279, + 183, + 290 + ], + "type": "text", + "content": "2 SURVEY METHODOLOGY" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 42, + 294, + 442, + 425 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 294, + 442, + 425 + ], + "spans": [ + { + "bbox": [ + 42, + 294, + 442, + 425 + ], + "type": "text", + "content": "To start our literature review, we selected several popular literature databases, including Web of Science [12], IEEE Xplore [95], and Scopus [50], as the search engine. 
For search keywords, we started from generalized terms associated with DL-IDS, such as intrusion detection system, attack investigation, anomaly detection, threat detection, Advanced Persistent Threats, data provenance analysis, forensic analysis, causality analysis, log collection, log compression, log parsing, log storage, and log summarization. Then, we employed Connected Papers [168], a visual tool that assists researchers in finding relevant academic papers, to ensure that we did not overlook typical related literature. Since the retrieved literature was numerous and rather broad for the DL-IDS scope, we carefully checked the topics and prioritized only highly related academic papers. Finally, all these papers were filtered based on the impact factors of their published journals or academic conferences, leaving us a total of 131 papers." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 42, + 426, + 442, + 486 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 426, + 442, + 486 + ], + "spans": [ + { + "bbox": [ + 42, + 426, + 442, + 486 + ], + "type": "text", + "content": "We identified a few venues that have published many significant papers in the field of DL-IDS, such as Usenix Security, S&P, CCS, NDSS, TIFS, TDSC, ICSE, ASE, ESEC/FSE, TSE, OSDI, NSDI, EuroSys, SOSP, ATC, ICML, KDD, WWW, TKDE, ICDE, and SCIS. We broadly divide them into five categories: security, software, system, data, and interdisciplinary. The distribution of these papers with their published years is reported in Figure 2."
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 43, + 496, + 133, + 507 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 496, + 133, + 507 + ], + "spans": [ + { + "bbox": [ + 43, + 496, + 133, + 507 + ], + "type": "text", + "content": "3 BACKGROUND" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 43, + 511, + 193, + 523 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 511, + 193, + 523 + ], + "spans": [ + { + "bbox": [ + 43, + 511, + 193, + 523 + ], + "type": "text", + "content": "3.1 Intrusion Detection System" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 42, + 526, + 441, + 563 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 526, + 441, + 563 + ], + "spans": [ + { + "bbox": [ + 42, + 526, + 441, + 563 + ], + "type": "text", + "content": "3.1.1 Definition of IDS. IDS have long been a central issue in the cybersecurity community, whose research can be traced back to the 1990s [181] or even earlier. According to the existing literature [64, 128, 162, 163, 181, 236], IDS can be defined progressively as follows:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 42, + 570, + 441, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 570, + 441, + 594 + ], + "spans": [ + { + "bbox": [ + 42, + 570, + 441, + 594 + ], + "type": "text", + "content": "Definition 3.1. (Intrusion Detection System). Intrusion detection system is a software or hardware system to automate the process of intrusion detection." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 42, + 602, + 441, + 626 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 602, + 441, + 626 + ], + "spans": [ + { + "bbox": [ + 42, + 602, + 441, + 626 + ], + "type": "text", + "content": "Definition 3.2. (Intrusion Detection). Intrusion detection is the process of monitoring and analyzing the events occurring in a computer or a network for signs of intrusions." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 42, + 634, + 441, + 658 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 634, + 441, + 658 + ], + "spans": [ + { + "bbox": [ + 42, + 634, + 441, + 658 + ], + "type": "text", + "content": "Definition 3.3. (Intrusion). Intrusion is the attempt to undermine the confidentiality, integrity, and availability of a computer or a network, or to circumvent its security facilities." + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 61, + 55, + 69 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 61, + 55, + 69 + ], + "spans": [ + { + "bbox": [ + 44, + 61, + 55, + 69 + ], + "type": "text", + "content": "1:4" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 115, + 59, + 441, + 70 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 59, + 441, + 70 + ], + "spans": [ + { + "bbox": [ + 115, + 59, + 441, + 70 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "spans": [ + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 42, + 85, + 440, + 121 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 85, + 440, + 121 + ], + "spans": [ + { + "bbox": [ + 42, + 85, + 440, + 121 + ], + "type": "text", + "content": "3.1.2 Types of IDS. Generally, IDS can be further categorized into various types based on their data sources [270]. Well-known types include NIDS, HIDS, and Provenance-based IDS (PIDS). 
Figure 3 depicts IDS types, their data sources, and the location of DL-IDS within those IDS types." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 52, + 128, + 425, + 139 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 52, + 128, + 425, + 139 + ], + "spans": [ + { + "bbox": [ + 52, + 128, + 425, + 139 + ], + "type": "text", + "content": "Definition 3.4. (NIDS). NIDS are IDS whose data sources are network traffic between hosts." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 42, + 146, + 443, + 207 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 146, + 443, + 207 + ], + "spans": [ + { + "bbox": [ + 42, + 146, + 443, + 207 + ], + "type": "text", + "content": "NIDS takes network traffic between hosts as its input. It is usually deployed at the edge or at a key node of the network, allowing it to secure the whole computer system with limited data. Benefiting from its global perception of the whole computer system, NIDS does well in detecting large-scale multi-host intrusions such as Distributed Denial-of-Service (DDoS) attacks. However, NIDS performs poorly on intra-host intrusions and has difficulty analyzing intrusions carried in encrypted network traffic." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 52, + 212, + 415, + 224 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 52, + 212, + 415, + 224 + ], + "spans": [ + { + "bbox": [ + 52, + 212, + 415, + 224 + ], + "type": "text", + "content": "Definition 3.5. (HIDS). HIDS are IDS whose data sources are system events within hosts." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 42, + 230, + 441, + 301 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 230, + 441, + 301 + ], + "spans": [ + { + "bbox": [ + 42, + 230, + 441, + 301 + ], + "type": "text", + "content": "HIDS, in contrast, uncovers intrusions through system events of individual hosts. 
Its data sources include file system changes, system calls, process activities, etc. HIDS can conduct comprehensive detection for a host, and is not affected by encrypted data since decryption is also performed on the host. Nevertheless, the deployment and maintenance of HIDS are relatively difficult. HIDS must be adapted to hosts with different operating systems and runtime environments, which also introduces computation overhead on the hosts." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 52, + 308, + 372, + 320 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 52, + 308, + 372, + 320 + ], + "spans": [ + { + "bbox": [ + 52, + 308, + 372, + 320 + ], + "type": "text", + "content": "Definition 3.6. (PIDS). PIDS are HIDS whose data sources are data provenance." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 42, + 326, + 441, + 350 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 326, + 441, + 350 + ], + "spans": [ + { + "bbox": [ + 42, + 326, + 441, + 350 + ], + "type": "text", + "content": "Definition 3.7. (Data Provenance). Data provenance refers to the origin and the processes that an event has undergone from its creation to its current state." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 42, + 357, + 441, + 416 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 357, + 441, + 416 + ], + "spans": [ + { + "bbox": [ + 42, + 357, + 441, + 416 + ], + "type": "text", + "content": "PIDS is a subtype of HIDS, particularly referring to HIDS that utilize data provenance as their data source. Because it analyzes the intact trail of events, PIDS has proven effective in coping with advanced attacks [270]. By performing causality analysis on data provenance, PIDS can significantly reduce false alarms. Yet, data provenance is very expensive to obtain, requiring complicated technical tools for monitoring operating systems, network protocols, and applications."
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 42, + 423, + 442, + 447 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 423, + 442, + 447 + ], + "spans": [ + { + "bbox": [ + 42, + 423, + 442, + 447 + ], + "type": "text", + "content": "Definition 3.8. (DL-IDS). DL-IDS are IDS that utilize DL techniques to detect intrusions, whose data sources can be network traffic between hosts, system events within hosts, or their combination." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 42, + 453, + 443, + 513 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 453, + 443, + 513 + ], + "spans": [ + { + "bbox": [ + 42, + 453, + 443, + 513 + ], + "type": "text", + "content": "Unlike other types of IDS such as NIDS and HIDS, which are categorized by their data sources, DL-IDS is defined by the techniques used in intrusion detection. As shown in Figure 3, the data source of DL-IDS can be network traffic, system events, or both. Taking advantage of the generalizability of DL techniques, DL-IDS can handle zero-day attacks precisely and has thus attracted intense interest from the cybersecurity community in recent years." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 43, + 522, + 160, + 532 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 522, + 160, + 532 + ], + "spans": [ + { + "bbox": [ + 43, + 522, + 160, + 532 + ], + "type": "text", + "content": "3.2 Common Workflow" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 42, + 537, + 442, + 561 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 537, + 442, + 561 + ], + "spans": [ + { + "bbox": [ + 42, + 537, + 442, + 561 + ], + "type": "text", + "content": "Figure 4 depicts the common workflow of DL-IDS.
It usually consists of 7 steps: raw data, collection, storage, parsing, summarization, detection, and investigation, which are explained as follows:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 58, + 563, + 441, + 658 + ], + "type": "list", + "angle": 0, + "index": 18, + "blocks": [ + { + "bbox": [ + 58, + 563, + 439, + 586 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 563, + 439, + 586 + ], + "spans": [ + { + "bbox": [ + 58, + 563, + 439, + 586 + ], + "type": "text", + "content": "- Raw Data is unprocessed data for uncovering attack details or benign system behaviors. The raw data analyzed by cyber experts commonly include network traffic and audit logs." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 58, + 588, + 441, + 609 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 588, + 441, + 609 + ], + "spans": [ + { + "bbox": [ + 58, + 588, + 441, + 609 + ], + "type": "text", + "content": "- Collection refers to data collection tools for different systems, such as cloud and cross-platform systems, which gather valuable raw data to describe important system behavior scenarios." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 58, + 612, + 440, + 633 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 612, + 440, + 633 + ], + "spans": [ + { + "bbox": [ + 58, + 612, + 440, + 633 + ], + "type": "text", + "content": "- Storage involves storage and search engines to manage large amounts of collected log data. Log data is labeled with indexes for efficient retrieval." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 58, + 635, + 439, + 658 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 635, + 439, + 658 + ], + "spans": [ + { + "bbox": [ + 58, + 635, + 439, + 658 + ], + "type": "text", + "content": "- Parsing is the act of analyzing the stored logs and other useful data.
It extracts and organizes the underlying information within the data for subsequent processing." + } + ] + } + ], + "index": 17 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 60, + 245, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 60, + 245, + 69 + ], + "spans": [ + { + "bbox": [ + 44, + 60, + 245, + 69 + ], + "type": "text", + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 430, + 61, + 441, + 69 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 430, + 61, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 430, + 61, + 441, + 69 + ], + "type": "text", + "content": "1:5" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "spans": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 52, + 92, + 433, + 301 + ], + "blocks": [ + { + "bbox": [ + 52, + 92, + 433, + 301 + ], + "lines": [ + { + "bbox": [ + 52, + 92, + 433, + 301 + ], + "spans": [ + { + "bbox": [ + 52, + 92, + 433, + 301 + ], + "type": "image", + "image_path": "7baacdfa9d3f131212e2cfa60a6a47974c5e8cc2cb426db45d3e0e1e40f66bc0.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 171, + 317, + 313, + 329 + ], + "lines": [ + { + "bbox": [ + 171, + 317, + 313, + 329 + ], + "spans": [ + { + "bbox": [ + 171, + 317, + 313, + 329 + ], + "type": "text", + "content": "Fig. 4. Common workflow of DL-IDS." 
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 58, + 346, + 440, + 418 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 58, + 346, + 440, + 370 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 346, + 440, + 370 + ], + "spans": [ + { + "bbox": [ + 58, + 346, + 440, + 370 + ], + "type": "text", + "content": "- Summarization refers to the operation of summarizing large volumes of parsed data based on its semantics. This reduces storage costs while preserving critical events." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 58, + 370, + 440, + 393 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 370, + 440, + 393 + ], + "spans": [ + { + "bbox": [ + 58, + 370, + 440, + 393 + ], + "type": "text", + "content": "- Detection is the process of using detection tools such as models and algorithms to detect anomalies in analyzed data to determine whether the data contains intrusions." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 58, + 393, + 440, + 418 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 393, + 440, + 418 + ], + "spans": [ + { + "bbox": [ + 58, + 393, + 440, + 418 + ], + "type": "text", + "content": "- Investigation is the further process of Detection. It reconstructs the entire attack scenarios from the detected malicious data by analyzing the causal relationship between them." + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 42, + 419, + 442, + 456 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 419, + 442, + 456 + ], + "spans": [ + { + "bbox": [ + 42, + 419, + 442, + 456 + ], + "type": "text", + "content": "Note that DL-IDS can also be performed in other step orders by skipping some of the steps. For example, log data can be first parsed before storage [135]. 
Attack investigation can be directly conducted without detection of intrusions [9]. This survey is organized by the common workflow." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 43, + 465, + 163, + 475 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 465, + 163, + 475 + ], + "spans": [ + { + "bbox": [ + 43, + 465, + 163, + 475 + ], + "type": "text", + "content": "4 DATA MANAGEMENT" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 42, + 479, + 442, + 504 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 479, + 442, + 504 + ], + "spans": [ + { + "bbox": [ + 42, + 479, + 442, + 504 + ], + "type": "text", + "content": "This section elaborates on the data management stage of DL-IDS, including data collection (Section 4.1), log storage (Section 4.2), and log parsing (Section 4.3)." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 43, + 513, + 139, + 523 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 513, + 139, + 523 + ], + "spans": [ + { + "bbox": [ + 43, + 513, + 139, + 523 + ], + "type": "text", + "content": "4.1 Data Collection" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 42, + 527, + 441, + 599 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 527, + 441, + 599 + ], + "spans": [ + { + "bbox": [ + 42, + 527, + 441, + 599 + ], + "type": "text", + "content": "The first step of DL-IDS is to collect useful data from raw data. Raw data indicates records that document events, activities, and operations that occur within a system, application, or network (a.k.a., logs), represented by audit logs or application logs within hosts, or network traffic between hosts. By collecting useful logs, DL-IDS is allowed to monitor the health condition and operational status of information systems [141, 255]. Common attributes of logs include timestamp, event type, subject, object, description, etc." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 42, + 599, + 442, + 659 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 599, + 442, + 659 + ], + "spans": [ + { + "bbox": [ + 42, + 599, + 442, + 659 + ], + "type": "text", + "content": "On different platforms, logs possess different formats and organizational structures [21, 127, 255, 270]. To counter this, researchers have created various log collection tools specialized for various systems. For example, in Windows systems, Event Viewer is employed to manage system logs. Yet in Linux systems, log files are usually saved in the /var/log/ directory. The classification of data collection tools is shown in Table 1, including Windows, Linux, Cloud, and Cross platforms." + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 45, + 61, + 55, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 61, + 55, + 68 + ], + "spans": [ + { + "bbox": [ + 45, + 61, + 55, + 68 + ], + "type": "text", + "content": "1:6" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "spans": [ + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 67, + 105, + 417, + 280 + ], + "blocks": [ + { + "bbox": [ + 146, + 84, + 337, + 95 + ], + "lines": [ + { + "bbox": [ + 146, + 84, + 337, + 95 + ], + "spans": [ + { + "bbox": [ + 146, + 84, + 337, + 95 + ], + "type": "text", + "content": "Table 1. Log collection tools on different platforms." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 67, + 105, + 417, + 280 + ], + "lines": [ + { + "bbox": [ + 67, + 105, + 417, + 280 + ], + "spans": [ + { + "bbox": [ + 67, + 105, + 417, + 280 + ], + "type": "table", + "html": "
<table><tr><td>Platform Type</td><td>Tool</td><td>Description</td></tr>
<tr><td>Windows platform</td><td>ETW [153]</td><td>Providing developers comprehensive event tracing ability</td></tr>
<tr><td></td><td>Panorama [245]</td><td>Hardware-level and OS-aware dynamic taint tracking</td></tr>
<tr><td>Linux platform</td><td>auditd [68]</td><td>Native tool supported by the Linux kernel</td></tr>
<tr><td></td><td>sysdig [106]</td><td>Focusing on runtime monitoring and fault troubleshooting</td></tr>
<tr><td></td><td>CamFlow [170]</td><td>Self-contained, easily maintainable implementation</td></tr>
<tr><td></td><td>Tracee [210]</td><td>Exposing system information as events based on eBPF</td></tr>
<tr><td></td><td>DataTracker [200]</td><td>Monitoring unmodified binaries without their source codes</td></tr>
<tr><td></td><td>Inspector [206]</td><td>Parallel provenance library that is POSIX-compliant</td></tr>
<tr><td></td><td>AutoLog [94]</td><td>Analyzing programs so no need to run them</td></tr>
<tr><td></td><td>eAudit [193]</td><td>Fast, scalable and easily deployable data collection tool</td></tr>
<tr><td>Cloud platform</td><td>K8S tools [27, 87]</td><td>Adapting to cloud scenarios to meet enterprise needs</td></tr>
<tr><td></td><td>saBPF [129]</td><td>An extension tool of eBPF for containers in cloud computing</td></tr>
<tr><td></td><td>ISDC [158]</td><td>Eliminating overheads on in-network resources</td></tr>
<tr><td>Cross platform</td><td>DTrace [66]</td><td>Real-time tracing framework that supports many platforms</td></tr>
<tr><td></td><td>SPADE [61]</td><td>Novel provenance kernel for cross-platform logging</td></tr></table>
", + "image_path": "d77f3601530f6283f668e7c2a7916f80f8b6049a2d4b3f3fdea4dbac64ee1bf2.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 42, + 298, + 440, + 369 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 298, + 440, + 369 + ], + "spans": [ + { + "bbox": [ + 42, + 298, + 440, + 369 + ], + "type": "text", + "content": "4.1.1 Windows Platform Tools. Event Tracing for Windows (ETW) [153] is a powerful event tracing mechanism provided by Microsoft. It consists of three components: providers, controllers, and consumers. ETW instruments applications to provide kernel event logging and allows developers to start and stop event tracing sessions momentarily. Panorama [245] exploits hardware-level and OS-aware dynamic taint tracking to collect logs. Moreover, it develops a series of automated tests to detect malware based on several kinds of anomalous behaviors." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 42, + 377, + 441, + 616 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 377, + 441, + 616 + ], + "spans": [ + { + "bbox": [ + 42, + 377, + 441, + 616 + ], + "type": "text", + "content": "4.1.2 Linux Platform Tools. auditid [68] is a native collection tool supported by the Linux kernel, which is responsible for writing audit logs to disk and monitoring a variety of auditable events such as system calls, file accesses, and modifications. sysdig [106] relies on the kernel module to achieve monitoring and data collection of the system. sysdig focuses on system runtime monitoring and fault troubleshooting, which is also widely used in containers and cloud-native environments. CamFlow [170] designs a self-contained, easily maintainable implementation of whole-system provenance based on Linux Security Module, NetFilter, and other kernel facilities. 
Furthermore, it provides a mechanism to adapt the captured data provenance to applications and can be integrated across distributed systems. Tracee [210] takes advantage of the extended Berkeley Packet Filter (eBPF) framework to observe systems efficiently. It uses eBPF to tap into systems and expose that information as events. DataTracker [200] is an open-source data provenance collection tool using dynamic instrumentation. It is able to identify data provenance relations of unmodified binaries without access to or knowledge of the source codes. Inspector [206] is a Portable Operating System Interface (POSIX)-compliant data provenance library for shared-memory multi-threaded applications. It is implemented as a parallel provenance algorithm on a concurrent provenance graph. AutoLog [94] generates runtime log sequences by analyzing source codes and does not need to execute any programs. It can efficiently produce log datasets (e.g., over 10,000 messages/min on Java projects) and has the flexibility to adapt to several scenarios. eAudit [193] is a scalable and easily deployable data collection tool. eAudit relies on the eBPF framework built into recent Linux versions, making it work out of the box on most Linux distributions." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 42, + 622, + 443, + 659 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 622, + 443, + 659 + ], + "spans": [ + { + "bbox": [ + 42, + 622, + 443, + 659 + ], + "type": "text", + "content": "4.1.3 Cloud Platform Tools. Although some collection tools in Windows and Linux platforms such as auditd [68], sysdig [106], and Tracee [210] can be applied in cloud computing environments, cloud-native scenarios introduce different challenges compared with Windows or Linux platforms.
First," + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "spans": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "type": "text", + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 430, + 60, + 441, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 430, + 60, + 441, + 68 + ], + "spans": [ + { + "bbox": [ + 430, + 60, + 441, + 68 + ], + "type": "text", + "content": "1:7" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "spans": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "bbox": [ + 44, + 84, + 441, + 240 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 84, + 441, + 240 + ], + "spans": [ + { + "bbox": [ + 44, + 84, + 441, + 240 + ], + "type": "text", + "content": "there are many different types of components such as containers, microservices, and Kubernetes (K8S) clusters in cloud platforms, each of which generates its own logs with varying formats and contents. Additionally, components are basically characterized by dynamic expansion and contraction, making it hard to capture complete log data. To address them, Chen et al. [27] design a cloud log collection architecture on the basis of K8S, which is a central platform based on cloud-native technology. Josef et al. 
[87] propose a log collection and analysis tool operated as Software as a Service (SaaS) in the cloud environment based on K8S technology, aiming to provide comprehensive logs across all microservices. saBPF [129] is an extension tool of eBPF, aiming to deploy fully-configurable, high-fidelity, system-level audit mechanisms at the granularity of containers. saBPF is further developed with a proof-of-concept IDS and access control mechanism to demonstrate its practicability. ISDC [158] is designed to eliminate the bottleneck between network infrastructure (where data is generated) and security application servers (where data is consumed), and prioritizes specific flows to effectively optimize resource consumption." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 44, + 246, + 441, + 342 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 246, + 441, + 342 + ], + "spans": [ + { + "bbox": [ + 44, + 246, + 441, + 342 + ], + "type": "text", + "content": "4.1.4 Cross-platform Tools. To effectively detect intrusions, an intuitive idea is to incorporate log data from various platforms to obtain a global view of the running system. DTrace [66] is a real-time dynamic tracing framework for troubleshooting kernel and application problems on production systems. It supports many platforms, including Linux, Windows, Solaris, macOS, FreeBSD, NetBSD, etc. Support for Provenance Auditing in Distributed Environments (SPADE) [61] develops a novel provenance kernel that mediates between the producers and consumers of provenance information, and handles the persistent storage of records. It supports heterogeneous aggregation of system-level data provenance for data analysis across multiple platforms."
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 44, + 352, + 122, + 363 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 352, + 122, + 363 + ], + "spans": [ + { + "bbox": [ + 44, + 352, + 122, + 363 + ], + "type": "text", + "content": "4.2 Log Storage" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 44, + 365, + 441, + 390 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 365, + 441, + 390 + ], + "spans": [ + { + "bbox": [ + 44, + 365, + 441, + 390 + ], + "type": "text", + "content": "The subsequent step of log collection is to store these logs [11, 40]. We will introduce two essential components for data storage: log storage systems and compression algorithms for these systems." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 44, + 396, + 441, + 563 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 396, + 441, + 563 + ], + "spans": [ + { + "bbox": [ + 44, + 396, + 441, + 563 + ], + "type": "text", + "content": "4.2.1 Log Storage Systems. The two most commonly used log storage systems are ELK [5] and Loki [15]. ELK is a powerful log management solution consisting of three open-source software components: Elasticsearch [48], Logstash [47], and Kibana [49]. Elasticsearch [48] is the leading distributed, RESTful search and analytics data engine designed for speed and scalability. Logstash [47] is a server-side data preprocessing pipeline to collect and integrate data from multiple sources. Kibana [49] is a data analytics and visualization platform designed for both speed and scale. ELK is powerful enough to be applied in enterprise scenarios; however, its performance comes at a price. ELK sacrifices ease of configuration and installation, and may simultaneously introduce severe runtime overhead for its hosts. In contrast, Loki [15] is a lightweight logging system with low resource overhead developed by Grafana Labs. It is designed for simple operation and efficient storage.
Instead of indexing the full content of the data as ELK does, Loki mainly creates indices based on log labels. Moreover, Loki is well suited for open-source monitoring and visualization tools such as Prometheus [174] and Grafana [112]. Integrating these two tools enables Loki to construct a complete monitoring and log analysis platform for information systems." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 44, + 569, + 441, + 617 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 569, + 441, + 617 + ], + "spans": [ + { + "bbox": [ + 44, + 569, + 441, + 617 + ], + "type": "text", + "content": "4.2.2 Log Compression Algorithms. Logs are generated quickly and consume significant storage. For example, it is measured that a browser can produce about 10 GB of log data each day [40]. Such oversized data should be compressed before storage. Log compression algorithms can be categorized into two types: general-purpose algorithms and those specifically adapted to log data." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 44, + 623, + 441, + 659 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 623, + 441, + 659 + ], + "spans": [ + { + "bbox": [ + 44, + 623, + 441, + 659 + ], + "type": "text", + "content": "General Compression Algorithms. General compression algorithms refer to algorithms that reduce the size of data (e.g., log data) by handling token-level or byte-level duplicates in the data.
General compression algorithms can be classified into three categories based on their principles [242]:" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 45, + 61, + 55, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 61, + 55, + 68 + ], + "spans": [ + { + "bbox": [ + 45, + 61, + 55, + 68 + ], + "type": "text", + "content": "1:8" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 44, + 672, + 249, + 681 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 672, + 249, + 681 + ], + "spans": [ + { + "bbox": [ + 44, + 672, + 249, + 681 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 86, + 105, + 399, + 159 + ], + "blocks": [ + { + "bbox": [ + 104, + 84, + 380, + 95 + ], + "lines": [ + { + "bbox": [ + 104, + 84, + 380, + 95 + ], + "spans": [ + { + "bbox": [ + 104, + 84, + 380, + 95 + ], + "type": "text", + "content": "Table 2. Well-acknowledged general compression algorithms for log data." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 86, + 105, + 399, + 159 + ], + "lines": [ + { + "bbox": [ + 86, + 105, + 399, + 159 + ], + "spans": [ + { + "bbox": [ + 86, + 105, + 399, + 159 + ], + "type": "table", + "html": "
<table><tr><td>Type</td><td>Well-acknowledged compression algorithm</td></tr>
<tr><td>Dictionary-based</td><td>LZ77 in gzip [55], LZMA in 7zip_lzma [171], and LZSS in quickLZ [177]</td></tr>
<tr><td>Sorting-based</td><td>BWT in bzip2 [194] and ST in szip [190]</td></tr>
<tr><td>Statistical-based</td><td>PPMD in 7zip_ppmd and DMC in ocamyd [191]</td></tr></table>
", + "image_path": "9040e5a3f950e1068d51ac6479bef0d55230b78324d6f4f477c0fa04f8c2b271.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 58, + 176, + 441, + 236 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 58, + 176, + 440, + 200 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 176, + 440, + 200 + ], + "spans": [ + { + "bbox": [ + 58, + 176, + 440, + 200 + ], + "type": "text", + "content": "- Dictionary-based Compression: It records repeated data as keys and replaces these data with their corresponding keys." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 58, + 200, + 441, + 212 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 200, + 441, + 212 + ], + "spans": [ + { + "bbox": [ + 58, + 200, + 441, + 212 + ], + "type": "text", + "content": "- Sorting-based Compression: It sorts data to enable strategies that require ordering features." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 58, + 212, + 441, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 212, + 441, + 236 + ], + "spans": [ + { + "bbox": [ + 58, + 212, + 441, + 236 + ], + "type": "text", + "content": "- Statistical-based Compression: It exploits statistical techniques to learn and predict the possible next token for existing tokens. The data is thus compressed as a statistical model." + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 42, + 239, + 442, + 299 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 239, + 442, + 299 + ], + "spans": [ + { + "bbox": [ + 42, + 239, + 442, + 299 + ], + "type": "text", + "content": "Table 2 presents representative algorithms of the above three types. Due to the indeterminacy of statistical techniques, statistical-based compression algorithms may introduce losses in compression. 
Yet the other two types of algorithms are generally lossless. By evaluating 9 log files and 2 natural language files, a study [242] shows that some general compression algorithms can achieve high compression ratios for log data and that log data is even easier to compress than natural language data." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 42, + 305, + 442, + 532 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 305, + 442, + 532 + ], + "spans": [ + { + "bbox": [ + 42, + 305, + 442, + 532 + ], + "type": "text", + "content": "Tailored Compression Algorithms. Different from natural language data, log data usually has specific structures and formal expressions that help further compression. Yao et al. [243] propose LogBlock, which obtains small log blocks before compression and then uses a generic compressor to compress logs. Liu et al. [135] propose Logzip, which employs clustering algorithms to iteratively extract templates from raw logs and then obtain coherent intermediate representations for compressing logs. Rodrigues et al. [186] propose the lossless compression tool CLP, aiming to quickly retrieve log data while meeting compression requirements. CLP proposes to combine domain-specific compression and search with a generic lightweight compression algorithm. Li et al. [123] conduct empirical research on log data and propose LogShrink to overcome their observed limitations by leveraging the commonality and variability of log data. LogBlock [243] is designed to help existing jobs perform better. It reduces duplicate logs by preprocessing log headers and rearranging log contents, thereby improving the compression ratio of log files. LogReducer [247] is a framework that combines log hotspot identification and online dynamic log filtering. Its non-intrusive design significantly reduces log storage and runtime overhead.
" + }, + { + "bbox": [ + 42, + 305, + 442, + 532 + ], + "type": "inline_equation", + "content": "\\mu" + }, + { + "bbox": [ + 42, + 305, + 442, + 532 + ], + "type": "text", + "content": "Slope [217] is a compression and search method for semi-structured log data. It achieves efficient storage and query performance through data segmentation, pattern extraction, and index-free design. Denum [249] significantly improves log compression rates by optimizing the compression of digital tokens in logs. It is an efficient log compression tool suitable for scenarios where you need to save storage space or transmission bandwidth." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 43, + 542, + 123, + 554 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 542, + 123, + 554 + ], + "spans": [ + { + "bbox": [ + 43, + 542, + 123, + 554 + ], + "type": "text", + "content": "4.3 Log Parsing" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 42, + 556, + 442, + 630 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 556, + 442, + 630 + ], + "spans": [ + { + "bbox": [ + 42, + 556, + 442, + 630 + ], + "type": "text", + "content": "Log data often originates from multiple different devices such as terminals, sensors, and network devices. To analyze it, log parsers are employed to format them into structured and unified ones. Log parsing is usually executed by data classification and template extraction. Data classification is to classify log data into several groups. Each group constitutes a template for extracting features from log data and constructing the structured logs. As shown in Figure 5, the existing log parsers can be taxonomized into 3 categories: clustering-based, pattern-based, and heuristic-based parsers." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 42, + 635, + 441, + 660 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 635, + 441, + 660 + ], + "spans": [ + { + "bbox": [ + 42, + 635, + 441, + 660 + ], + "type": "text", + "content": "4.3.1 Clustering-based Parsing. Clustering-based parsers classify data using clustering algorithms for log parsing. Xiao et al. [226] propose LPV, which employs a hierarchical clustering algorithm" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "spans": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "type": "text", + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 430, + 60, + 441, + 69 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 430, + 60, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 430, + 60, + 441, + 69 + ], + "type": "text", + "content": "1:9" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "spans": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 63, + 84, + 425, + 159 + ], + "blocks": [ + { + "bbox": [ + 63, + 84, + 425, + 159 + ], + "lines": [ + { + "bbox": [ + 63, + 84, + 425, + 159 + ], + "spans": [ + { + "bbox": [ + 63, + 84, + 425, + 159 + ], + "type": "image", + "image_path": "e247a0d348b36b7d21437e7121af02634601f140eb5eb301754a9955423acc68.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 177, + 172, + 307, + 184 + ], + "lines": [ + { + "bbox": [ + 177, + 172, + 307, + 184 + ], + "spans": [ + { + "bbox": [ + 177, + 172, + 307, + 184 + ], + "type": "text", + "content": "Fig. 5. Taxonomy of data parsing." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 42, + 202, + 442, + 288 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 202, + 442, + 288 + ], + "spans": [ + { + "bbox": [ + 42, + 202, + 442, + 288 + ], + "type": "text", + "content": "to incrementally group logs based on Euclidean distance. Hamooni et al. [74] present a rapid log pattern recognition approach named LogMine. It is implemented in the map-reduce framework for distributed platforms to process millions of log messages in seconds. LogCluster [130] reduces the number of logs that need to be manually checked and improves the accuracy of problem identification through log clustering and the use of knowledge bases. METING [32] provides a robust and efficient log parsing method through frequent n-gram mining and flexible log grouping strategy, which can effectively process various types of log data." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 42, + 293, + 442, + 426 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 293, + 442, + 426 + ], + "spans": [ + { + "bbox": [ + 42, + 293, + 442, + 426 + ], + "type": "text", + "content": "4.3.2 Frequency-based Parsing. Frequency-based parsers discover patterns that exceed the frequency threshold and employ the mined patterns to parse logs. Sedki et al. [192] propose the log parsing tool ULP, which combines string matching and local frequency analysis to efficiently parse large log files. Dai et al. [35] propose Logram, which utilizes an n-gram dictionary for log parsing. For n-grams with a frequency below the threshold, Logram recursively converts to (n-1)-grams until a list of uncommon 2-grams is obtained. To mitigate the parameter sensitivity issue in log parsers, Dai et al. [36] further proposed an entropy-based log parser PILAR, which balances parsing accuracy and efficiency. Xu et al. [229] propose a hybrid log parsing model called Hue, which performs parsing through user-adaptive methods. Prefix-Graph [30] is an efficient, adaptive, and universal log parsing method that can stably extract log templates without relying on domain knowledge and manual parameter tuning." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 42, + 431, + 442, + 600 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 431, + 442, + 600 + ], + "spans": [ + { + "bbox": [ + 42, + 431, + 442, + 600 + ], + "type": "text", + "content": "4.3.3 Heuristic-based Parsing. Heuristic-based parsers rely on empirical knowledge to classify log data. He et al. [82] propose the online log parsing method Drain, which employs a depth-fixed parsing tree to group the original logs and encodes them using specially designed parsing rules. Le et al. [114] propose to use a hint-based few-sample learning algorithm, LogPPT, to capture log template patterns. 
Utilizing new prompt tuning methods and an adaptive random sampling algorithm, LogPPT performs well on multiple public datasets. Liu et al. [137] propose the UniParser parser to address the difficulty of processing heterogeneous logs, using the Token Encoder and Context Encoder modules to learn log context features. Spell [44] is an efficient streaming log parsing method that can dynamically extract log patterns in online processing and significantly improve processing efficiency through pre-filtering steps. Logan [3] achieves efficient and scalable log parsing through distributed processing, LCS matching, dynamic matching tolerance, and periodic merging. USTEP [214] is an online log parsing method based on an evolutionary tree structure that can discover and encode new parsing rules. It achieves constant parsing time and can efficiently parse raw log messages in a streaming manner." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 43, + 608, + 176, + 619 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 608, + 176, + 619 + ], + "spans": [ + { + "bbox": [ + 43, + 608, + 176, + 619 + ], + "type": "text", + "content": "5 INTRUSION DETECTION" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 42, + 623, + 442, + 659 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 623, + 442, + 659 + ], + "spans": [ + { + "bbox": [ + 42, + 623, + 442, + 659 + ], + "type": "text", + "content": "The intrusion detection stage uncovers intrusions by relying on semantic-level information. This section classifies and summarizes the mainstream approaches to graph summarization (Section 5.1), attack detection (Section 5.2), and attack investigation (Section 5.3)." 
+ } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 60, + 58, + 69 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 60, + 58, + 69 + ], + "spans": [ + { + "bbox": [ + 44, + 60, + 58, + 69 + ], + "type": "text", + "content": "1:10" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "spans": [ + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 50, + 105, + 435, + 270 + ], + "blocks": [ + { + "bbox": [ + 138, + 84, + 346, + 95 + ], + "lines": [ + { + "bbox": [ + 138, + 84, + 346, + 95 + ], + "spans": [ + { + "bbox": [ + 138, + 84, + 346, + 95 + ], + "type": "text", + "content": "Table 3. Overview of graph summarization approaches." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 50, + 105, + 435, + 270 + ], + "lines": [ + { + "bbox": [ + 50, + 105, + 435, + 270 + ], + "spans": [ + { + "bbox": [ + 50, + 105, + 435, + 270 + ], + "type": "table", + "html": "
ModeApproachReleaseBaselineRequirement
OfflineProvCompress [228]2011No SummarizationNone
BEEP [115]2013No SummarizationInstrumentation
LogGC [116]2013BEEP + No SummarizationInstrumentation
CPR + PCAR [234]2016No SummarizationNone
FD + SD [89]2018CPR + PCARNone
LogApprox [152]2020GC + CPR + DPRNone
TeRed [122]2025LogGC + CPR + PCAR + F-DPR + NodeMergeNone
OnlineProTracer [143]2016BEEP + No SummarizationInstrumentation
NodeMerge [205]2018No SummarizationNone
Winnower [77]2018No SummarizationNone
GS + SS [267]2021FD + SDNone
SEAL [53]2021FDNone
FAuST [97]2022CPR + DPRNone
AudiTrim [202]2024CPR + GS + F-DPRNone
", + "image_path": "dcec85fafc03c55d917b54a234d99e02c0338d0d6ed1ae0535780c1185341cbd.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 43, + 282, + 170, + 293 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 282, + 170, + 293 + ], + "spans": [ + { + "bbox": [ + 43, + 282, + 170, + 293 + ], + "type": "text", + "content": "5.1 Graph Summarization" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 42, + 296, + 442, + 403 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 296, + 442, + 403 + ], + "spans": [ + { + "bbox": [ + 42, + 296, + 442, + 403 + ], + "type": "text", + "content": "It is illustrated that stealthy malware will inevitably interact with the underlying OS and be captured by provenance monitoring systems [216], which is the reason why PIDS (a form of DL-IDS) has worked and flourished recently. Log data generated from provenance monitoring systems is referred to as data provenance as mentioned. Offering advantages in high precision, data provenance sacrifices memory performance to record all trails of events from their creations to their current states, even some of which are trivial. Unlike network traffic and application logs, data provenance is fine-grained, detailed, and rich in semantics. As a result, the token-level or byte-level log storage systems (Section 4.2.1) and log compression algorithms (Section 4.2.2) are insufficient to handle the memory efficiency of data provenance due to the absence of semantic-level information." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 42, + 404, + 442, + 452 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 404, + 442, + 452 + ], + "spans": [ + { + "bbox": [ + 42, + 404, + 442, + 452 + ], + "type": "text", + "content": "To this end, graph summarization is investigated to further reduce the size of log data semantically. 
In graph summarization, data provenance is transformed into a provenance graph, of which the causal relations are utilized to build the semantic understanding of system activities. Referring to the definition of data provenance (Definition 3.7), provenance graph is defined as follows:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 42, + 457, + 441, + 492 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 457, + 441, + 492 + ], + "spans": [ + { + "bbox": [ + 42, + 457, + 441, + 492 + ], + "type": "text", + "content": "Definition 5.1. (Provenance Graph). Provenance graph is a representation of a collection of data provenance with causal relations. It is a directed acyclic graph " + }, + { + "bbox": [ + 42, + 457, + 441, + 492 + ], + "type": "inline_equation", + "content": "G = \\langle V, E \\rangle" + }, + { + "bbox": [ + 42, + 457, + 441, + 492 + ], + "type": "text", + "content": " where nodes " + }, + { + "bbox": [ + 42, + 457, + 441, + 492 + ], + "type": "inline_equation", + "content": "V" + }, + { + "bbox": [ + 42, + 457, + 441, + 492 + ], + "type": "text", + "content": " are system entities and edges " + }, + { + "bbox": [ + 42, + 457, + 441, + 492 + ], + "type": "inline_equation", + "content": "E" + }, + { + "bbox": [ + 42, + 457, + 441, + 492 + ], + "type": "text", + "content": " are system events." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 42, + 498, + 442, + 607 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 498, + 442, + 607 + ], + "spans": [ + { + "bbox": [ + 42, + 498, + 442, + 607 + ], + "type": "text", + "content": "Provenance graphs allow graph summarization approaches to reduce the size of log data by confidently removing irrelevant events, aggregating similar events, gathering similar execution entities, etc. This categorizes them as a type of lossy reduction, yet the aforementioned log storage and compression are usually lossless (except for statistical-based log compression). 
We note that some surveys (e.g., [96, 270]) may interchangeably use graph summarization and log compression to identify the approaches that reduce the size of log data. In this work, we explicitly distinguish them and refer to the lossless reduction as compression and the opposite one as summarization. Table 3 presents the overview of graph summarization approaches. We classify them into two categories: offline graph summarization and online graph summarization." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 42, + 611, + 442, + 660 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 611, + 442, + 660 + ], + "spans": [ + { + "bbox": [ + 42, + 611, + 442, + 660 + ], + "type": "text", + "content": "5.1.1 Offline Graph Summarization. Offline graph summarization requires historical log data to provide global knowledge, which extracts log data from persistent storage, summarizes the data, and pushes back the summarized data to the persistent storage. In 2011, Xie et al. [228] take inspiration from web graphs to summarize provenance graphs. 
They argue that provenance" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "spans": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "type": "text", + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "spans": [ + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "type": "text", + "content": "1:11" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "spans": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 44, + 84, + 440, + 264 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 84, + 440, + 264 + ], + "spans": [ + { + "bbox": [ + 44, + 84, + 440, + 264 + ], + "type": "text", + "content": "graphs have similar organizational structure and characteristics to web graphs, such as locality, similarity, and consecutiveness. BEEP [115] is developed based on the fact that a long-running execution can be partitioned into individual units. BEEP reverse engineers application binaries and instruments them to perform selective logging for unit boundaries and unit dependencies. LogGC [116] is an audit log garbage collection system that can be invoked at any time during system execution. Xu et al. [234] propose an aggregation algorithm CPR that preserves event dependencies during log data reduction. 
They further propose an algorithm named PCAR that utilizes domain knowledge to conduct graph summarization. Hossain et al. [89] propose two dependency-preserving graph summarization approaches, FD and SD. FD preserves the results of both backward and forward forensic analysis. SD preserves the results of common forensic analysis, which runs backward to find the entry points of intrusions and then runs forward from these points to unveil their impacts. LogApprox [152] aims to summarize the most space-intensive events found in logs, namely file I/O activity, which can account for up to " + }, + { + "bbox": [ + 44, + 84, + 440, + 264 + ], + "type": "inline_equation", + "content": "90\\%" + }, + { + "bbox": [ + 44, + 84, + 440, + 264 + ], + "type": "text", + "content": " of the log content. TeRed [122] employs unit tests to learn the system's normal behavior patterns for reducing provenance graphs, so that it does not impact attack detection and investigation." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 44, + 282, + 440, + 522 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 282, + 440, + 522 + ], + "spans": [ + { + "bbox": [ + 44, + 282, + 440, + 522 + ], + "type": "text", + "content": "5.1.2 Online Graph Summarization. Online graph summarization performs real-time summarization for continually arriving provenance graphs, rather than dealing with a static provenance graph. ProTracer [143] alternates between system event logging and unit-level taint propagation. It has a lightweight kernel module and user space daemon for concurrent, out-of-order event processing. NodeMerge [205] is a template-based graph summarization system for online event storage. It can directly work on the system-dependent provenance streams and compress data provenance via read-only file access patterns. Winnower [77] is an extensible audit-based cluster monitoring system. 
For tasks replicated across nodes in distributed applications, it can define a model over audit logs to concisely summarize the behaviors of multiple nodes, thus eliminating the necessity of transmitting redundant audit records to the central monitoring node. The approach proposed by Zhu et al. [267] includes two real-time graph summarization strategies. The first strategy maintains global semantics, which identifies and removes redundant events that do not affect global dependencies. The second strategy is based on suspicious semantics. SEAL [53] is a novel graph summarization approach for causal analysis. Based on information-theoretic observations of system event data, it achieves lossless compression and supports real-time historical event retrieval. FAuST [97] is a logging daemon that performs transparent and modular graph summarization directly on system endpoints. FAuST consists of modular parsers that parse different audit log formats to create a unified in-memory provenance graph representation. AudiTrim [202] is an efficient graph summarization approach that reduces log sizes without impacting user experiences, which allows adaptable deployment on different operating systems." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 44, + 542, + 143, + 553 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 542, + 143, + 553 + ], + "spans": [ + { + "bbox": [ + 44, + 542, + 143, + 553 + ], + "type": "text", + "content": "5.2 Attack Detection" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 44, + 558, + 440, + 605 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 558, + 440, + 605 + ], + "spans": [ + { + "bbox": [ + 44, + 558, + 440, + 605 + ], + "type": "text", + "content": "Attack detection is located at the central position of DL-IDS. The objective of attack detection is to accurately identify malicious system events in log data while minimizing false alarms of normal system behaviors. 
Based on the types of log data, we categorize the attack detection approaches into audit log-based, application log-based, network traffic-based, and hybrid log-based detectors." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 44, + 605, + 440, + 653 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 605, + 440, + 653 + ], + "spans": [ + { + "bbox": [ + 44, + 605, + 440, + 653 + ], + "type": "text", + "content": "The overview and taxonomy of attack detection approaches are presented in Table 4. We note that recent years have also published many other academic papers for attack detection [25, 46, 78, 119, 156, 218, 224, 227, 248]. Yet these papers are slightly related to DL-IDS, which are thus excluded in our survey for conciseness." + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "spans": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "type": "text", + "content": "1:12" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 115, + 60, + 440, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 60, + 440, + 69 + ], + "spans": [ + { + "bbox": [ + 115, + 60, + 440, + 69 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 44, + 673, + 248, + 681 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 673, + 248, + 681 + ], + "spans": [ + { + "bbox": [ + 44, + 673, + 248, + 681 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 46, + 105, + 438, + 639 + ], + "blocks": [ + { + "bbox": [ + 120, + 84, + 363, + 95 + ], + "lines": [ + { + "bbox": [ + 120, + 84, + 363, + 95 + ], + "spans": [ + { + "bbox": [ + 120, + 84, + 363, + 95 + ], + "type": "text", + "content": "Table 4. Overview and taxonomy of attack detection approaches." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 46, + 105, + 438, + 639 + ], + "lines": [ + { + "bbox": [ + 46, + 105, + 438, + 639 + ], + "spans": [ + { + "bbox": [ + 46, + 105, + 438, + 639 + ], + "type": "table", + "html": "
Data TypeTaxonomyApproachRelease TimeBase ModelDetection StyleDetection Granularity
Audit LogTraditional LearningStreamSpot [145]2018K-MedoidsOnlineSubgraph
Unicorn [76]2020K-MedoidsOnlineNode, Subgraph
DistDet [42]2023HSTOnlineSubgraph
Velox [18]2025FCNOnlineNode
Graph Neural NetworkShadeWatcher [250]2022TransROfflineNode
threaTrace [219]2022GraphSAGEOnlineNode
ProGrapher [237]2023graph2vecOnlineSubgraph
MAGIC [99]2024GATOnlineNode, Subgraph
Flash [182]2024GraphSAGEOnlineNode
R-caid [65]2024GNNOfflineNode
Argus [230]2024MPNN, GRU-Node
TAPAS [252]2025LSTM-GRUOnlineTask
Application LogTraditional LearningWei et al. [231]2009PCA, TF-IDF-Log Entry
Bodik et al. [19]2010Logistic RegressionOnlineLog Entry
AMOD [43]2018SVM HYBRIDOnlineLog Entry
Sequence Neural NetworkDeepLog [45]2017LSTMOnlineLog Entry
LogRobust [257]2019Attention LSTM-Log Entry
LogAnomaly [151]2019template2vec, LSTMOnlineLog Entry
LogC [246]2020LSTMOnlineLog Entry
NeuralLog [113]2021BERT-Log Entry
PLELog [238]2021Attention GRUOnlineLog Entry
SpikeLog [175]2023DSNN-Log Entry
LogCraft [254]2024Meta Learning-Log Entry
Tweezers [33]2024GATv2, BERTweetOnlineLog Entry
LogSer [23]2024BERTOnlineLog Entry
LogDLR[265]2025Transformer, SBERTOnlineLog Entry
Traffic LogTraditional LearningNetPro [121]2017Merkle Hash TreeOnlineRoute
CATH [72]2019Cusp ModelOnlineFlow
Whisper [56]2021K-Means-Host
SigML++ [211]2023ANN-Encrypted Log
OADSD [253]2023Isolation ForestOnlinePacket
LtRFT [204]2023LambdaMARTOfflinePacket
AGC [225]2025Clustering-Packet
Graph and Sequence Neural NetworkKitsune [159]2018AutoEncoderOnlinePacket
MT-FlowFormer [260]2022Transformer-Flow
I²RNN [199]2022I²RNN-Packet
ERNN [262]2022ERNN-Flow
Euler [108]2023GNN, RNN-Flow
pVoxel [58]2023--Packet, Flow
NetVigil [91]2024E-GraphSage-Flow
Exosphere [57]2024CNN-Packet
DFNet [263]2024DFNet-Packet
RFH-HELAD [264]2024RPGAN, Deep kNN-Packet
ReTrial [259]2024Bayesian InferenceOnlineFlow
HEN [221]2024AE-LSTM-Packet, Flow
TCG-IDS [222]2025TGNOnlineFlow
A-NIDS[251]2025Stacked CTGANOnlineFlow
GTAE-IDS[62]2025Graph TransformerOnlinePacket, Flow
HybridHybridOWAD [75]2024AutoencoderOnlineHybrid
FG-CIBGC [165]2025DisenGCN, ICL-Hybrid
", + "image_path": "12bf721c67f5ee3bf4f30ea97ecba8aaa579e91d4838ce00894cb8540fa17426.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 59, + 244, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 59, + 244, + 69 + ], + "spans": [ + { + "bbox": [ + 44, + 59, + 244, + 69 + ], + "type": "text", + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "spans": [ + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "type": "text", + "content": "1:13" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 233, + 672, + 440, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 233, + 672, + 440, + 682 + ], + "spans": [ + { + "bbox": [ + 233, + 672, + 440, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 4 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 42, + 85, + 442, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 85, + 442, + 133 + ], + "spans": [ + { + "bbox": [ + 42, + 85, + 442, + 133 + ], + "type": "text", + "content": "5.2.1 Audit Log-based Detectors. Audit logs are collected from hosts and thus detectors based on them are basically referred to as HIDS. Audit logs provide fine-grained information through provenance graphs to depict system behaviors. Depending on the learning techniques, audit log-based detectors can be further classified as traditional learning and graph neural network." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 42, + 138, + 442, + 247 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 138, + 442, + 247 + ], + "spans": [ + { + "bbox": [ + 42, + 138, + 442, + 247 + ], + "type": "text", + "content": "Traditional Learning. Traditional learning-based detectors refer to those that utilize naive machine learning techniques. StreamSpot [145] is a clustering-based anomaly detection that tackles challenges in heterogeneity and streaming nature. Unicorn [76] is a real-time intrusion detector that efficiently constructs a streaming histogram to represent the history of system executions. The counting results within the histogram are updated immediately if new edges (or events) occur. DistDet [42] is a distributed detection system that builds host models in the client side, filters false alarms based on their semantics, and derives global models to complement the host models. Velox [18] derives from Orthrus and replaces the complex TGN-based encoder with a simple fully-connected network (FCN), leading to a lightweight and efficient neural network." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 42, + 252, + 442, + 552 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 252, + 442, + 552 + ], + "spans": [ + { + "bbox": [ + 42, + 252, + 442, + 552 + ], + "type": "text", + "content": "Graph Neural Network. GNN is demonstrated to do well in processing provenance graphs [99, 182, 219, 237, 250]. ProGrapher [237] extracts temporal-ordered provenance graph snapshots from the ingested logs, and applies whole graph embedding and sequence-based learning to capture rich structural properties of them. The key GNN technique leveraged by ProGrapher is graph2vec. ShadeWatcher [250] is a recommendation-guided intrusion detector using provenance graphs. 
It maps the recommendation concept of user-item interactions onto the security concept of system entity interactions and analyzes cyber threats in an automated and adaptive manner. threaTrace [219] emerges as an online approach dedicated to detecting host-based threats at the node level. Its GNN model is a tailored GraphSAGE [73] for learning rich contextual information in provenance graphs. MAGIC [99] leverages Graph Attention Network (GAT) [213] as its graph representation module. MAGIC employs masked graph representation learning to incorporate the capability of pretraining. It can adapt to concept drift with minimal computational overhead, making it applicable to real-world online APT detection. Flash [182] is a comprehensive and scalable approach on data provenance graphs to overcome the limitations in accuracy, practicality, and scalability. Flash incorporates a novel adaptation of a GNN-based contextual encoder to encode both local and global graph structures into node embeddings efficiently. R-caid [65] first incorporates root cause analysis into PIDS. Before training GNNs, R-caid links nodes to their root causes to build a new graph, intending to protect it from mimicry and evasion attacks. Argus [230] finds that the performance of prior IDSs is questionable at large scale. It thus devises a form of discrete temporal graph and uses encoder-decoder unsupervised learning to detect different types of attacks. TAPAS [252] leverages a stacked LSTM-GRU model and a task-guided segmentation algorithm to reduce the spatiotemporal dimensions of APT detection, achieving efficient, low-cost, and accurate detection. In addition to the aforementioned detectors, recent researchers have developed numerous useful tools for better understanding audit logs, such as a data visualization analysis tool [133] and a counterfactual-driven attack explanation generator [223]." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 42, + 558, + 442, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 558, + 442, + 594 + ], + "spans": [ + { + "bbox": [ + 42, + 558, + 442, + 594 + ], + "type": "text", + "content": "5.2.2 Application Log-based Detectors. Application logs are generated from the installed binaries. Generally, application logs are in the form of natural language text, namely sequence data. It is thus common to introduce sequence-based DL techniques into application log-based DL-IDS." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 42, + 599, + 442, + 660 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 599, + 442, + 660 + ], + "spans": [ + { + "bbox": [ + 42, + 599, + 442, + 660 + ], + "type": "text", + "content": "Traditional Learning. For traditional learning, Wei et al. [231] propose a general methodology to mine rich semantic information in console logs to detect large-scale system problems. Bodik et al. [19] leverage a logistic regression model on a new and efficient representation of a datacenter's state called fingerprint to detect previously seen performance crises in that datacenter. 
AMOD [43] uses the SVM HYBRID strategy to filter query annotations from web request logs and then" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "spans": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "type": "text", + "content": "1:14" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "spans": [ + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 42, + 85, + 440, + 109 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 85, + 440, + 109 + ], + "spans": [ + { + "bbox": [ + 42, + 85, + 440, + 109 + ], + "type": "text", + "content": "update the stacked generalization detection model to efficiently detect web code injection attacks and obtain malicious queries to update the web application firewall (WAF) library." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 42, + 116, + 442, + 440 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 116, + 442, + 440 + ], + "spans": [ + { + "bbox": [ + 42, + 116, + 442, + 440 + ], + "type": "text", + "content": "Sequence Neural Network. 
Due to the similarity between application logs and natural language texts, sequence neural networks such as Recurrent Neural Network [86] and Transformer [39, 212] are widely employed. DeepLog [45] employs LSTM to model system logs as natural language sequences. It is able to automatically learn benign log patterns and detect anomalies when there is a deviation between log patterns and the trained model. LogRobust [257] finds that previous methods do not work well under the closed-world assumption and utilizes an attention-based LSTM model to handle unstable log events and sequences. LogAnomaly [151] identifies that previous studies tend to cause false alarms by using indexes rather than semantics of log templates. Empowered by a novel, simple yet effective method termed template2vec, LogAnomaly is proven to successfully detect both sequential and quantitative log anomalies simultaneously. LogC [246] is a new log-based anomaly detection approach with component-aware analysis. It feeds both log template sequences and component sequences to train a combined LSTM model for detecting anomalous logs. NeuralLog [113] targets the performance degradation caused by log parsing errors such as out-of-vocabulary words and semantic misunderstandings and employs BERT for neural representation. PLELog [238] is a semi-supervised anomaly detection approach that can get rid of time-consuming manual labeling and incorporate knowledge of historical anomalies. SpikeLog [175] adopts a weakly supervised approach to train an anomaly score model, with the objective of handling a more reasonable premise scenario where a large number of logs are unlabeled. LogCraft [254] is an end-to-end unsupervised log anomaly detection framework based on automated machine learning, which mitigates the cost of understanding datasets and makes multiple attempts for building algorithms. 
Tweezers [33] uses a large language model to identify entities and build a relationship graph, and generates embeddings through graph attention network optimization to achieve security incident detection. LogSer [23] parses logs by preprocessing parameters, splitting logs, tree parsing, and template merging. It then inputs relevant embeddings into BERT training to detect anomalies, generate reports, and perform incremental updates. LogDLR [265] uses SBERT embeddings and a Transformer autoencoder with domain adversarial training to learn domain-invariant features, detecting anomalies via reconstruction error." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 42, + 447, + 442, + 494 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 447, + 442, + 494 + ], + "spans": [ + { + "bbox": [ + 42, + 447, + 442, + 494 + ], + "type": "text", + "content": "5.2.3 Network Traffic-based Detectors. Network traffic comes from communications between hosts across a computer network. It is ruled by network protocols such as Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) and can be utilized for intrusion detection. Basically, network traffic-based detectors are termed NIDS." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 42, + 503, + 443, + 659 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 503, + 443, + 659 + ], + "spans": [ + { + "bbox": [ + 42, + 503, + 443, + 659 + ], + "type": "text", + "content": "Traditional Learning. Given the fact that network traffic is usually encrypted for secure communications, feature engineering-guided machine learning is widely applied in NIDS. NetPro [121] employs traceability reasoning with Merkle Hash Trees and digital signatures to detect direct and indirect MANET routing attacks while preserving node privacy, and outputs a traceability graph to identify malicious nodes and behaviors. 
CATH [72] is a catastrophe-theory-based approach for DoS detection in software-defined networks (SDNs), which leverages the selection, normalization, and fusion of statistical flow attributes to model network states. Whisper [56] pays attention to both high accuracy and high throughput by utilizing frequency domain features. SigML++ [211] is an extension of SigML for supervised anomaly detection. SigML++ employs Fully Homomorphic Encryption and an Artificial Neural Network (ANN) for detection, resulting in execution without decrypting the logs. OADSD [253] achieves task independence and is able to adapt to the environment over SD-WAN by using an On-demand Evolving Isolation Forest. LtRFT [204] introduces a Learning-To-Rank scheme for mitigating the low-rate DDoS" + } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "spans": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "type": "text", + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 426, + 60, + 441, + 69 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 426, + 60, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 426, + 60, + 441, + 69 + ], + "type": "text", + "content": "1:15" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "spans": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 44, + 84, + 440, + 120 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 84, + 440, + 120 + ], + "spans": [ + { + "bbox": [ + 44, + 84, + 440, + 120 + ], + "type": "text", + "content": "attacks targeted at flow tables. AGC [225] maps the original data into the embedding space through embedding learning to obtain more representative anchor points, thus achieving fine-grained classification of low-quality label data." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 44, + 125, + 440, + 486 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 125, + 440, + 486 + ], + "spans": [ + { + "bbox": [ + 44, + 125, + 440, + 486 + ], + "type": "text", + "content": "Graph and Sequence Neural Network. In network traffic, packets consist of various contents and their flows can be represented as graphs. As a result, both graph neural network and sequence neural network are adopted in NIDS. Kitsune [159] is a plug and play NIDS that is allowed to detect attacks efficiently on the local network without supervision. It alleviates the problem that network gateways and router devices simply do not have the memory or processing power. MT-FlowFormer [260] is a semi-supervised framework to mitigate the lack of a mechanism for modeling correlations between flows and the requirement of a large volume of manually labeled data. " + }, + { + "bbox": [ + 44, + 125, + 440, + 486 + ], + "type": "inline_equation", + "content": "\\mathrm{I}^2\\mathrm{RNN}" + }, + { + "bbox": [ + 44, + 125, + 440, + 486 + ], + "type": "text", + "content": " [199] is an incremental and interpretable RNN for encrypted traffic classification, which can be efficiently adapted for incremental traffic types. 
ERNN [262] represents error-resilient RNN, which is a robust and end-to-end RNN model specially designed against network-induced phenomena. Euler [108] accelerates the most memory-intensive part, message-passing stage within GNN, with several concurrently-executed replicated GNNs. pVoxel [58] is an unsupervised method that proposes to leverage point cloud analysis to reduce false positives for the previous NIDS such as Whisper and Kitsune without requiring any prior knowledge on the alarms. NetVigil [91] is specially designed for east-west traffic within data center networks. It utilizes E-GraphSage and contrastive learning techniques to strengthen its resilience. Exosphere [57] detects flooding attacks by analyzing packet length patterns, without investigating any information in encrypted packets. DFNet [263] is a DDoS prevention paradigm denoted by preference-driven and in-network enforced shaping. RFH-HELAD [264] consists of a " + }, + { + "bbox": [ + 44, + 125, + 440, + 486 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 44, + 125, + 440, + 486 + ], + "type": "text", + "content": " classification model based on a deep neural network and a " + }, + { + "bbox": [ + 44, + 125, + 440, + 486 + ], + "type": "inline_equation", + "content": "K + 1" + }, + { + "bbox": [ + 44, + 125, + 440, + 486 + ], + "type": "text", + "content": " classification combining GAN and Deep kNN for detecting anomalies in network traffic. ReTrial [259] employs an improved graph attention network with Bayesian and EM algorithms to iteratively correct misleading links, enabling robust detection of encrypted malicious traffic. HEN [221] uses SMOTE to enhance data, trains LightGBM, generates explanations via SHAP, trains AE-LSTM to reconstruct SHAP values, sets a threshold from training errors, and marks test traffic with excess errors as attacks for intrusion detection. 
TCG-IDS [222] is the first self-supervised temporal contrastive GNN for network intrusion detection, capturing spatiotemporal traffic dependencies with high accuracy and low false alarms. A-NIDS [251] uses a shallow fully connected network for real-time detection and a Stacked CTGAN generator to address catastrophic forgetting and old data storage costs. GTAE-IDS [62] uses a graph autoencoder with a Transformer encoder and DNN decoder to learn benign traffic, enabling label-free, near-real-time intrusion detection and new attack identification." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 44, + 491, + 440, + 575 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 491, + 440, + 575 + ], + "spans": [ + { + "bbox": [ + 44, + 491, + 440, + 575 + ], + "type": "text", + "content": "5.2.4 Hybrid Log-based Detectors. Based on the above discussions, a natural idea is to combine various types of log data for improving detection capability. OWAD [75] is a general framework to detect, explain, and adapt to normality shifts in practice. OWAD is validated to be effective in various detection granularity, covering provenance graphs, application logs, and network packets. FG-CIBGC [165] mines syncretic semantics in multi-source logs including audit logs, application logs, and network traffic using LLM under in-context learning, which generates behavior graphs for comprehensive analysis." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 44, + 584, + 159, + 596 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 584, + 159, + 596 + ], + "spans": [ + { + "bbox": [ + 44, + 584, + 159, + 596 + ], + "type": "text", + "content": "5.3 Attack Investigation" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 44, + 599, + 440, + 659 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 599, + 440, + 659 + ], + "spans": [ + { + "bbox": [ + 44, + 599, + 440, + 659 + ], + "type": "text", + "content": "Except for identifying individual intrusive nodes, IDS are supposed to detect the full story of intrusions (a.k.a., attack scenario graphs). This process is referred to as attack investigation, which can be done by directly detecting attack scenario graphs [216], or analyzing the causal relations between compromised nodes progressively to construct attack scenario graphs [9, 41, 100, 232]. The attack scenario graphs are defined with scenario graphs as follows:" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "spans": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "type": "text", + "content": "1:16" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 44, + 673, + 249, + 681 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 673, + 249, + 681 + ], + "spans": [ + { + "bbox": [ + 44, + 673, + 249, + 681 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. 
Publication date: October 2025." + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 46, + 105, + 439, + 259 + ], + "blocks": [ + { + "bbox": [ + 141, + 84, + 342, + 95 + ], + "lines": [ + { + "bbox": [ + 141, + 84, + 342, + 95 + ], + "spans": [ + { + "bbox": [ + 141, + 84, + 342, + 95 + ], + "type": "text", + "content": "Table 5. Overview of attack investigation approaches." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 46, + 105, + 439, + 259 + ], + "lines": [ + { + "bbox": [ + 46, + 105, + 439, + 259 + ], + "spans": [ + { + "bbox": [ + 46, + 105, + 439, + 259 + ], + "type": "table", + "html": "
TaxonomyApproachRelease TimeAudit LogApplication LogBase ModelStarting NodeInvestigation Granularity
Traditional LearningProvDetector [216]2020doc2vecPath
BehaviorBaseline [269]2025FastTextPath
Sequence Neural NetworkATLAS [9]2021LSTMGraph
LogTracer [166]2022DeepLogPath
ConLBS [118]2023TransformerGraph
AirTag [41]2023BERTGraph
Graph Neural NetworkLiu et al. [134]2022struc2vecGraph
Kairos [29]2023GNNGraph
TREC [139]2024GNNGraph
Orthrus [100]2025UniMPPath
Slot [176]2025GNNGraph
FeCoGraph [146]2025GCNGraph
", + "image_path": "56e153ae6819bf89b12e886fd61914b1384a3f85184e624d1b7af714ffa21642.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 42, + 275, + 441, + 300 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 275, + 441, + 300 + ], + "spans": [ + { + "bbox": [ + 42, + 275, + 441, + 300 + ], + "type": "text", + "content": "Definition 5.2. (Scenario Graph). Scenario graph is a subgraph of its given provenance graph, which is constructed by the nodes and edges causally dependent on nodes of interest." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 42, + 306, + 441, + 330 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 306, + 441, + 330 + ], + "spans": [ + { + "bbox": [ + 42, + 306, + 441, + 330 + ], + "type": "text", + "content": "Definition 5.3. (Attack Scenario Graph). Attack scenario graph is a scenario graph where its nodes of interest are compromised nodes." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 42, + 335, + 442, + 432 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 335, + 442, + 432 + ], + "spans": [ + { + "bbox": [ + 42, + 335, + 442, + 432 + ], + "type": "text", + "content": "In the past, attack investigation is conducted by forward analysis and backward analysis [88]. Forward analysis discovers the influence that nodes of interest will cause and backward analysis traces back how nodes of interest are generated. Benefiting from DL techniques, both forward and backward analysis can be achieved by learning patterns of attack scenario graphs. Furthermore, visual analytics techniques have been widely used to assist security analysts in understanding the causal chain of intrusions [256, 261]. Table 5 summarizes the overview of attack investigation approaches. Similar to Section 5.2, we exclude papers [6, 52, 60, 80, 88, 98, 111, 120, 142, 157, 218, 239, 268] slightly relevant to DL for conciseness." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 42, + 437, + 442, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 437, + 442, + 521 + ], + "spans": [ + { + "bbox": [ + 42, + 437, + 442, + 521 + ], + "type": "text", + "content": "Traditional Learning. Unlike detecting intrusive nodes, attack scenario graphs are complicated and thus are hard to handle by traditional learning methods. ProvDetector [216] utilizes doc2vec to learn the embedding representation of paths in the provenance graph. Then a density-based detection is deployed to detect abnormal causal paths in the provenance graph. BehaviorBaseline [269] presents a novel learning-based anomaly detection method for large-scale provenance graphs. It incorporates dynamic graph processing with adaptive encoding and a tag-propagation framework for real-time detection." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 42, + 527, + 442, + 660 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 527, + 442, + 660 + ], + "spans": [ + { + "bbox": [ + 42, + 527, + 442, + 660 + ], + "type": "text", + "content": "Sequence Neural Network. Log data is in the form of natural language text or is allowed to be transformed into sequences of events, which facilitates the introduction of sequence neural networks. ATLAS [9] is a framework to construct end-to-end attack stories from readily available audit logs, which employs a novel combination of causal analysis and natural language processing. ATLAS exploits LSTM to automatically learn the pattern difference between attack and nonattack sequences. LogTracer [166] is an efficient anomaly tracing framework that combines data provenance and system log detection together. An outlier function with an abnormal decay rate is introduced to improve the accuracy. ConLBS [118] combines a contrastive learning framework and multilayer Transformer network for behavior sequence classification. 
AirTag [41] employs unsupervised learning to train BERT directly from log texts rather than relying on provenance graphs. AirTag constructs attack scenario graphs by integrating the detected victim nodes." + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "spans": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "type": "text", + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 426, + 60, + 441, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 426, + 60, + 441, + 68 + ], + "spans": [ + { + "bbox": [ + 426, + 60, + 441, + 68 + ], + "type": "text", + "content": "1:17" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "spans": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "bbox": [ + 42, + 85, + 442, + 290 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 85, + 442, + 290 + ], + "spans": [ + { + "bbox": [ + 42, + 85, + 442, + 290 + ], + "type": "text", + "content": "Graph Neural Network. To capture causal relations within graphs, GNN is commonly adopted. Liu et al. [134] propose an automated attack detection and investigation method via learning the context semantics of the provenance graph. The provenance graph analyzed by struc2vec captures temporal and causal dependencies of system events. Kairos [29] is a practical intrusion detection and investigation tool based on whole-system provenance. 
Kairos utilizes GNN to analyze system execution history, thereby detecting and reconstructing complex APTs. It employs a GNN-based encoder-decoder architecture to learn the temporal evolution of provenance graph structure changes and quantify the abnormal degree of each system event. TREC [139] abstracts the APT attack investigation problem as a tactics/techniques recognition problem. TREC trains its model in a few-shot learning manner by adopting a Siamese neural network. Orthrus [100] identifies Quality of Attribution as the key factor contributing to whether or not the industry adopts IDS. It first detects malicious hosts using a GNN encoder and then reconstructs the attack path through dependency analysis. Slot [176], based on provenance graphs and graph reinforcement learning, uncovers hidden relationships among system behaviors, dynamically adapts to new activities and attack strategies, resists adversarial attacks, and automatically constructs attack chains. FeCoGraph [146] directly processes traffic embedding through line graphs to adapt to various GNNs, covering more attack scenarios while protecting data privacy." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 42, + 300, + 178, + 310 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 300, + 178, + 310 + ], + "spans": [ + { + "bbox": [ + 42, + 300, + 178, + 310 + ], + "type": "text", + "content": "6 BENCHMARK DATASETS" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 42, + 315, + 441, + 339 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 315, + 441, + 339 + ], + "spans": [ + { + "bbox": [ + 42, + 315, + 441, + 339 + ], + "type": "text", + "content": "DL-IDS relies on high-quality data to train an effective model. This section introduces the dimensions of datasets (Section 6.1) and some public datasets widely used in DL-IDS (Section 6.2)." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 42, + 349, + 175, + 360 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 349, + 175, + 360 + ], + "spans": [ + { + "bbox": [ + 42, + 349, + 175, + 360 + ], + "type": "text", + "content": "6.1 Dimensions of Datasets" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 42, + 365, + 408, + 377 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 365, + 408, + 377 + ], + "spans": [ + { + "bbox": [ + 42, + 365, + 408, + 377 + ], + "type": "text", + "content": "To illustrate the quality of DL-IDS datasets, it is general to use the following dimensions:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 58, + 381, + 440, + 547 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 58, + 381, + 440, + 415 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 381, + 440, + 415 + ], + "spans": [ + { + "bbox": [ + 58, + 381, + 440, + 415 + ], + "type": "text", + "content": "- Benign Scenarios: Benign data should cover benign behaviors and system activities to the greatest extent, enabling DL-IDS to learn patterns of benign behaviors to differentiate malicious behaviors." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 58, + 417, + 440, + 452 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 417, + 440, + 452 + ], + "spans": [ + { + "bbox": [ + 58, + 417, + 440, + 452 + ], + "type": "text", + "content": "- Malicious Scenarios: Malicious data ought to incorporate typical attack scenarios while taking into account the diversity of attacks, including short-term and long-term attacks, as well as simple attacks and multi-stage attacks." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 58, + 452, + 440, + 475 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 452, + 440, + 475 + ], + "spans": [ + { + "bbox": [ + 58, + 452, + 440, + 475 + ], + "type": "text", + "content": "- Ground-truth Labels: Data should be labeled as benign or malicious. For multi-stage attacks, it is useful to indicate the attack type or the attack stage it belongs to." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 58, + 477, + 440, + 511 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 477, + 440, + 511 + ], + "spans": [ + { + "bbox": [ + 58, + 477, + 440, + 511 + ], + "type": "text", + "content": "- Data Granularities: Datasets can be in the form of different granularities. The most accepted one is to provide raw log data. Due to copyright concerns, some replicates [41, 99] merely provide post-processed log data without their processing source codes." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 58, + 512, + 440, + 547 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 512, + 440, + 547 + ], + "spans": [ + { + "bbox": [ + 58, + 512, + 440, + 547 + ], + "type": "text", + "content": "- Operating Systems: The operating system determines the generalizability of the dataset. The more operating systems a dataset covers and the more common they are, the more comprehensively it can evaluate PIDS performance." 
+ } + ] + } + ], + "index": 11 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 42, + 560, + 138, + 570 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 560, + 138, + 570 + ], + "spans": [ + { + "bbox": [ + 42, + 560, + 138, + 570 + ], + "type": "text", + "content": "6.2 Public Datasets" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 42, + 574, + 441, + 621 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 574, + 441, + 621 + ], + "spans": [ + { + "bbox": [ + 42, + 574, + 441, + 621 + ], + "type": "text", + "content": "Publicly available datasets bring a lot of convenience to research on DL-IDS. However, some researchers use self-made datasets that are not publicly available, making it difficult for other researchers to reuse their datasets [46]. To address this issue, we collect and organize some open-source datasets for further studies, which are listed in Table 6." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 42, + 622, + 441, + 658 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 622, + 441, + 658 + ], + "spans": [ + { + "bbox": [ + 42, + 622, + 441, + 658 + ], + "type": "text", + "content": "LANL Dataset [103] is collected within the internal computer network of Los Alamos National Laboratory's corporate network. The dataset consists of 58 consecutive days of de-identified data, covering about 165 million events from 12 thousand users. 
Its data sources include Windows-based" + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "spans": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "type": "text", + "content": "1:18" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "spans": [ + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 76, + 116, + 408, + 279 + ], + "blocks": [ + { + "bbox": [ + 44, + 84, + 441, + 105 + ], + "lines": [ + { + "bbox": [ + 44, + 84, + 441, + 105 + ], + "spans": [ + { + "bbox": [ + 44, + 84, + 441, + 105 + ], + "type": "text", + "content": "Table 6. Overview of public datasets. W, L, F, A, M, and S represent the operating systems of Windows, Linux, FreeBSD, Android, Mac, and supercomputer, respectively." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 76, + 116, + 408, + 279 + ], + "lines": [ + { + "bbox": [ + 76, + 116, + 408, + 279 + ], + "spans": [ + { + "bbox": [ + 76, + 116, + 408, + 279 + ], + "type": "table", + "html": "
DatasetReleaseSizeScenariosLabelFormatSystem
LANL Dataset [103]201512 GB-Yes.txtW
StreamSpot [145]20162 GB1Yes.tsvL
AWSCTD [22]201839 GB-NoSQLiteW
DARPA TC E3 [38]2018366 GB [67]6NoCDMW, L, F, A
DARPA TC E5 [38]20192,699 GB [67]8NoCDMW, L, F, A
DARPA OpTC [37]20201,100 GB [13]-NoeCARW
Unicorn SC [76]2020147 GB2YesCDML
CERT Dataset [63, 131]202087 GB-Yes.csvW
LogChunks [20]202024.1 MB-Yes.txt-
Loghub [266]202077 GB--.txtW, L, M, S
ATLAS [9]20210.5 GB10Yes.txtW
ATLASv2 [184]20231210Yes.txtW
ProvSec [197]2023-11Yes.jsonL
AutoLabel [173]2025136 GB29Yes.jsonL
", + "image_path": "fe18d2ce14f1f4df61f7c7755a7441ee7d26f7ed02d5dc381f022683391478c8.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 44, + 311, + 441, + 334 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 311, + 441, + 334 + ], + "spans": [ + { + "bbox": [ + 44, + 311, + 441, + 334 + ], + "type": "text", + "content": "authentication events, process start and stop events, DNS lookups, network flows, and a set of well-defined red teaming events." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 44, + 334, + 441, + 394 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 334, + 441, + 394 + ], + "spans": [ + { + "bbox": [ + 44, + 334, + 441, + 394 + ], + "type": "text", + "content": "StreamSpot dataset [145] is made up of 1 attack and 5 benign scenarios. The attack scenario exploits a Flash vulnerability and gains root access to the visiting host by visiting a malicious drive-by download URL. The benign scenarios are relevant to normal browsing activity, specifically watching YouTube, browsing news pages, checking Gmail, downloading files, and playing a video game. All the scenarios are simulated through 100 automated tasks with the Selenium RC [208]." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 44, + 394, + 441, + 465 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 394, + 441, + 465 + ], + "spans": [ + { + "bbox": [ + 44, + 394, + 441, + 465 + ], + "type": "text", + "content": "DARPA TC datasets [38] are sourced from the DARPA Transparent Computing (TC) program, identified by the number of engagements from E1 to E5. Among them, DARPA TC E3 is the most widely used. The TC program aims to make current computing systems transparent by providing high-fidelity visibility during system operations across all layers of software abstraction. 
Unfortunately, DARPA TC datasets are released without labels, and DARPA makes no warranties as to the correctness, accuracy, or usefulness of the datasets." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 44, + 465, + 441, + 549 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 465, + 441, + 549 + ], + "spans": [ + { + "bbox": [ + 44, + 465, + 441, + 549 + ], + "type": "text", + "content": "DARPA Operationally Transparent Cyber (OpTC) [37] is a technology transition pilot study funded under Boston Fusion Corporate. The OpTC system architecture is based on the one used in TC program evaluation. In OpTC, every Windows 10 endpoint is equipped with an endpoint sensor that monitors post events, packs them into JSON records, and sends them to Kafka. A translation server aggregates the data into eCAR format and pushes them back to Kafka. OpTC scales TC components from 2 to 1,000 hosts. The dataset consists of approximately 1 TB of compressed JSON data in a highly instrumented environment over two weeks." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 44, + 549, + 441, + 621 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 549, + 441, + 621 + ], + "spans": [ + { + "bbox": [ + 44, + 549, + 441, + 621 + ], + "type": "text", + "content": "Unicorn SC [76] is a dataset specifically designed for APT detection, proposed by Han et al., authors of the Unicorn model. The dataset includes two supply chain scenarios, wget and shell shock, where each scenario lasts for 3 days to simulate the long-term feature of APT attacks, resulting in provenance data containing 125 benign behaviors and 25 malicious behaviors. The data is saved in the form of provenance graphs, describing the causal relationships during the system execution process." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 44, + 621, + 441, + 656 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 621, + 441, + 656 + ], + "spans": [ + { + "bbox": [ + 44, + 621, + 441, + 656 + ], + "type": "text", + "content": "CERT Dataset [131] is a collection of synthetic insider threat test datasets that provide both background and malicious actor synthetic data. It is developed by the CERT Division, in collaboration with ExactData, LLC, and under sponsorship from DARPA I2O. The developers of the CERT dataset learned" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 45, + 61, + 244, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 61, + 244, + 69 + ], + "spans": [ + { + "bbox": [ + 45, + 61, + 244, + 69 + ], + "type": "text", + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 427, + 61, + 440, + 67 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 427, + 61, + 440, + 67 + ], + "spans": [ + { + "bbox": [ + 427, + 61, + 440, + 67 + ], + "type": "text", + "content": "1:19" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 235, + 673, + 439, + 681 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 235, + 673, + 439, + 681 + ], + "spans": [ + { + "bbox": [ + 235, + 673, + 439, + 681 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 18 + }, + { + "para_blocks": [ + { + "bbox": [ + 44, + 84, + 439, + 108 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 84, + 439, + 108 + ], + "spans": [ + { + "bbox": [ + 44, + 84, + 439, + 108 + ], + "type": "text", + "content": "important lessons about the benefits and limitations of synthetic data in the cybersecurity domain and carefully discussed models of realism for synthetic data." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 44, + 109, + 440, + 168 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 109, + 440, + 168 + ], + "spans": [ + { + "bbox": [ + 44, + 109, + 440, + 168 + ], + "type": "text", + "content": "LogChunks [20] is an application log dataset for build log analysis, containing 797 annotated Travis CI build logs from 80 GitHub repositories and 29 programming languages. These logs are from mature and popular projects, collected through repository, build, and log sampling. Each log in the dataset has manually labeled text blocks of build failure reasons, search keywords, and structural categories, and cross-validated with the original developers with an accuracy of " + }, + { + "bbox": [ + 44, + 109, + 440, + 168 + ], + "type": "inline_equation", + "content": "94.4\\%" + }, + { + "bbox": [ + 44, + 109, + 440, + 168 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 44, + 169, + 440, + 240 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 169, + 440, + 240 + ], + "spans": [ + { + "bbox": [ + 44, + 169, + 440, + 240 + ], + "type": "text", + "content": "Loghub dataset [266] is a large collection of system log datasets, providing 19 real-world log data from various software systems, including distributed systems, supercomputers, operating systems, mobile systems, server applications, and standalone software. 
The objective of Loghub is to fill the significant gap between intelligent automated log analysis techniques and successful deployments in the industry. For the usage scenarios of Loghub, about " + }, + { + "bbox": [ + 44, + 169, + 440, + 240 + ], + "type": "inline_equation", + "content": "35\\%" + }, + { + "bbox": [ + 44, + 169, + 440, + 240 + ], + "type": "text", + "content": " are anomaly detection, " + }, + { + "bbox": [ + 44, + 169, + 440, + 240 + ], + "type": "inline_equation", + "content": "13\\%" + }, + { + "bbox": [ + 44, + 169, + 440, + 240 + ], + "type": "text", + "content": " are log analysis, and " + }, + { + "bbox": [ + 44, + 169, + 440, + 240 + ], + "type": "inline_equation", + "content": "8\\%" + }, + { + "bbox": [ + 44, + 169, + 440, + 240 + ], + "type": "text", + "content": " are security." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 44, + 241, + 440, + 300 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 241, + 440, + 300 + ], + "spans": [ + { + "bbox": [ + 44, + 241, + 440, + 300 + ], + "type": "text", + "content": "ATLAS dataset [9] implements 10 attacks based on detailed reports of real-world APT campaigns and generates audit logs in a controlled testbed environment. Among the ten attacks, four involve a single host and the remaining six involve multiple hosts. All attacks were developed and executed on Windows 7 32-bit virtual machines and took an hour to complete, along with a 24-hour window of audit logs for benign system behaviors." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 44, + 300, + 440, + 372 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 300, + 440, + 372 + ], + "spans": [ + { + "bbox": [ + 44, + 300, + 440, + 372 + ], + "type": "text", + "content": "ATLASv2 dataset [184] enriches the ATLAS dataset with higher quality background noise and additional logging vantage points.
In this dataset, two researchers use the victim machines as their primary workstations throughout the course of the engagement, instead of depending on automated scripts to generate activity. System logging, in contrast, covers a five-day period, where the first four days simulate normal work days and the fifth day begins with benign activity and then transitions into execution of the corresponding attack." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 44, + 372, + 440, + 420 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 372, + 440, + 420 + ], + "spans": [ + { + "bbox": [ + 44, + 372, + 440, + 420 + ], + "type": "text", + "content": "ProvSec dataset [197] is created for system provenance forensic analysis. To fulfill data provenance requirements, ProvSec includes the full details of system calls, including their parameters. In ProvSec, 11 realistic attack scenarios with real software vulnerabilities and exploits are used, and an algorithm to improve the data quality in system provenance forensic analysis is presented." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 44, + 420, + 440, + 468 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 420, + 440, + 468 + ], + "spans": [ + { + "bbox": [ + 44, + 420, + 440, + 468 + ], + "type": "text", + "content": "AutoLabel dataset [173] automates fine-grained log labeling by reducing the labeling problem to obtaining an accurate attack subgraph in a provenance graph. Its experiments consist of 29 scenarios, including 25 real CVE vulnerabilities across 12 widely-used applications (spanning 5 programming languages) plus a Sandworm threat simulation by MITRE CTID."
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 44, + 478, + 256, + 489 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 478, + 256, + 489 + ], + "spans": [ + { + "bbox": [ + 44, + 478, + 256, + 489 + ], + "type": "text", + "content": "7 CHALLENGES AND FUTURE DIRECTIONS" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 44, + 493, + 440, + 541 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 493, + 440, + 541 + ], + "spans": [ + { + "bbox": [ + 44, + 493, + 440, + 541 + ], + "type": "text", + "content": "After the detailed introduction to the data management stage and the intrusion detection stage, as well as the widely-used benchmark datasets, this section further discusses challenges encountered in existing DL-IDS and summarizes the corresponding visions. These include fundamental resources (Section 7.1), pre-trained large models (Section 7.2), and comprehensive applications (Section 7.3)." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 44, + 552, + 176, + 563 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 552, + 176, + 563 + ], + "spans": [ + { + "bbox": [ + 44, + 552, + 176, + 563 + ], + "type": "text", + "content": "7.1 Fundamental Resources" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 44, + 567, + 440, + 590 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 567, + 440, + 590 + ], + "spans": [ + { + "bbox": [ + 44, + 567, + 440, + 590 + ], + "type": "text", + "content": "Effective DL-IDS heavily depends on core fundamental resources such as datasets and computing facilities to develop [105]. Here, we will discuss their challenges one after the other." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 44, + 599, + 440, + 658 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 599, + 440, + 658 + ], + "spans": [ + { + "bbox": [ + 44, + 599, + 440, + 658 + ], + "type": "text", + "content": "7.1.1 Poor Data Quality. Existing datasets for DL-IDS may contain errors, inaccuracies, or missing values. This leads to unreliable descriptions of system behaviors that may mislead DL-IDS. For example, in some cases of the DARPA TC dataset, the PROCESS object and its source fail to properly resolve conflicts, resulting in possible incorrect transformation. Besides, the acuity_level value of the FLOW object is 0, while the value range for this field in other objects is from 1 to 5. Another" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "spans": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "type": "text", + "content": "1:20" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 44, + 672, + 249, + 681 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 672, + 249, + 681 + ], + "spans": [ + { + "bbox": [ + 44, + 672, + 249, + 681 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 19 + }, + { + "para_blocks": [ + { + "bbox": [ + 42, + 84, + 441, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 84, + 441, + 144 + ], + "spans": [ + { + "bbox": [ + 42, + 84, + 441, + 144 + ], + "type": "text", + "content": "example could be the LogChunks [20] dataset. In this dataset, the content describing the failure reasons is possibly incomplete. This is because a chunk in LogChunks only contains a continuous substring of the log text and a failure reason may be described across multiple sections of the log. Moreover, LogChunks neglects the classification of failure reasons like test, compilation, and code inspection errors, which hinders further research from analyzing failure reasons." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 42, + 145, + 442, + 300 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 145, + 442, + 300 + ], + "spans": [ + { + "bbox": [ + 42, + 145, + 442, + 300 + ], + "type": "text", + "content": "Meanwhile, high-quality ground-truth labels are hard to acquire, which is impeded by the contradiction between fine-grained manual labeling and automated label generation. On one hand, for unknown intrusions such as zero-day attacks, it is very labor-intensive for security analysts to correspond each attack scenario to certain log entries, although coarse-grained attack scenarios may have been acquired. The DAPRA TC dataset [38] is a typical example for this. It only provides a ground truth report for attack scenarios, which does not correspond to any specific log entries. Although a few researchers [219] provide the third-party ground-truth labels that are manually identified by themselves, we empirically find some ambiguities between their ground-truth labels and the official attack scenario report. 
These ambiguities have an obvious negative effect on DL-IDS, and to some extent, they may even cause the accumulation of errors. On the other hand, the development of automated labeling tools is in an awkward position. Such tools generate labels based on given prior knowledge of intrusions [28], whereas the challenge of DL-IDS is to detect zero-day intrusions. This tends to render the development of such automated tools somewhat pointless." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 42, + 300, + 442, + 397 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 300, + 442, + 397 + ], + "spans": [ + { + "bbox": [ + 42, + 300, + 442, + 397 + ], + "type": "text", + "content": "In addition, there are no unified and effective evaluation metrics for DL-IDS [29], which further weakens the potential of datasets. For example, precision, recall, and F1 score are usually exploited in most studies [9, 99, 182, 216], while some papers [41] propose to use True Positive Rate (TPR) and False Positive Rate (FPR) as evaluation metrics. This often makes comparison experiments unfair and makes it hard to tell whether the validation is convincing. We also note that in many cases where the percentage of negatives (or malicious log entries) is low, sacrificing FPR can always significantly increase TPR. For example, sacrificing 1,000 false positives for one true positive might only increase FPR by " + }, + { + "bbox": [ + 42, + 300, + 442, + 397 + ], + "type": "inline_equation", + "content": "0.05\\%" + }, + { + "bbox": [ + 42, + 300, + 442, + 397 + ], + "type": "text", + "content": ", but would increase TPR by " + }, + { + "bbox": [ + 42, + 300, + 442, + 397 + ], + "type": "inline_equation", + "content": "5\\%" + }, + { + "bbox": [ + 42, + 300, + 442, + 397 + ], + "type": "text", + "content": "."
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 42, + 406, + 441, + 441 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 406, + 441, + 441 + ], + "spans": [ + { + "bbox": [ + 42, + 406, + 441, + 441 + ], + "type": "text", + "content": "7.1.2 Insufficient Amount of Data. Although log data is generated very quickly (e.g., eBay generates 1.2 PB log data per day by 2018 [189]), DL-IDS is still facing challenges in insufficient amounts of data. Discounting the above data quality issues such as inaccuracies, the reasons are three-fold:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 42, + 442, + 442, + 549 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 442, + 442, + 549 + ], + "spans": [ + { + "bbox": [ + 42, + 442, + 442, + 549 + ], + "type": "text", + "content": "First, log data has an extremely large number of trivial events, which are proven ineffective and usually removed by graph summarization [237, 250]. For example, data provenance provides fine-grained information about memory-related events, such as data-to-memory mapping and protection of certain memory addresses. These memory-related events basically do not involve attacks, and unfortunately, are always orthogonal to the existing DL-IDS. However, to ensure the completeness requirement of data provenance and to capture very infrequent but inevitable memory attacks, these memory-related events are still recorded in benchmark datasets. As a result, the usable part of each dataset is rather small for DL-IDS, which can be reflected by the high summarization ratio achieved by graph summarization approaches (e.g., " + }, + { + "bbox": [ + 42, + 442, + 442, + 549 + ], + "type": "inline_equation", + "content": "70\\%" + }, + { + "bbox": [ + 42, + 442, + 442, + 549 + ], + "type": "text", + "content": " [234])." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 42, + 550, + 442, + 657 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 550, + 442, + 657 + ], + "spans": [ + { + "bbox": [ + 42, + 550, + 442, + 657 + ], + "type": "text", + "content": "The second reason for an insufficient amount of data is the limited dataset representativeness. As observed in Table 6, most of the datasets have no more than 10 attack scenarios, not to mention that each of these attack scenarios has been carefully chosen by their authors. This limited number of attack scenarios suggests that existing datasets can hardly represent the diversity of attack methods, as the number of CVE records has already been over 280,000 [31]. Furthermore, the existing datasets such as DARPA TC E3 [38] are collected in a specific experimental environment and may not cover other types of normal system behaviors, and have been shown to contain a significant amount of synthetic data [133]. DARPA TC E5 [38] is unusable for most experiments due to the sparse and error-filled documentation.
Unicorn SC [76] is generated by an idealized simulation" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "spans": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "type": "text", + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 426, + 61, + 440, + 69 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 426, + 61, + 440, + 69 + ], + "spans": [ + { + "bbox": [ + 426, + 61, + 440, + 69 + ], + "type": "text", + "content": "1:21" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "spans": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 20 + }, + { + "para_blocks": [ + { + "bbox": [ + 42, + 85, + 440, + 120 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 85, + 440, + 120 + ], + "spans": [ + { + "bbox": [ + 42, + 85, + 440, + 120 + ], + "type": "text", + "content": "of supply chain scenarios, which means many real-world features are prone to be ignored in this dataset. Hence, training DL-IDS on these non-representative datasets could be a disaster for the computer systems that they protect." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 42, + 122, + 441, + 193 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 122, + 441, + 193 + ], + "spans": [ + { + "bbox": [ + 42, + 122, + 441, + 193 + ], + "type": "text", + "content": "Finally, the accessibility of datasets further exacerbates the insufficient data problem. 
Due to privacy and copyright issues, some datasets may be proprietary or difficult to obtain [216, 218]. For instance, ProvDetector [216] conducted a three-month system evaluation in an enterprise environment with 306 hosts and collected benign provenance data of 23 target programs. Yet this dataset has not been made public, rendering it unavailable for improving other DL-IDS, and almost all the assessment settings related to ProvDetector are thus susceptible to inequity." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 42, + 199, + 443, + 410 + ], + "type": "list", + "angle": 0, + "index": 6, + "blocks": [ + { + "bbox": [ + 42, + 199, + 442, + 331 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 199, + 442, + 331 + ], + "spans": [ + { + "bbox": [ + 42, + 199, + 442, + 331 + ], + "type": "text", + "content": "7.1.3 Potential Heavy Computation Requirements. Similar to other DL techniques, DL-IDS also requires a potentially large amount of computing resources to improve its performance. According to [185], the generalizability of neural models is proportional to the investment of computing resources. Supposing that the challenge of insufficient data is mitigated and a large volume of log data is available, more computing resources are inevitably required. Besides, we will illustrate in Section 7.2 that there are plenty of powerful techniques that have not been introduced in DL-IDS, which will also bring in computation requirements. Unfortunately, acceleration methods like parallel computation and efficient retrieval have not been fully prioritized by the cybersecurity community. An example is that the computation time of Unicorn equipped with one core is shown to be linear in its workload [76]. It is clear that the efficiency of Unicorn, which is not implemented in parallel, will hit a bottleneck as this core does.
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 42, + 337, + 443, + 410 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 337, + 443, + 410 + ], + "spans": [ + { + "bbox": [ + 42, + 337, + 443, + 410 + ], + "type": "text", + "content": "7.1.4 Future Directions. To conclude, the challenges for DL-IDS in fundamental resources consist of data quality, data volume, and computational overhead. Apart from unintentional errors and nontechnical issues in fundamental resources, the research questions that urgently need to be addressed include the contradiction between unaffordable manual labeling and non-generalizable auto-labeling techniques, non-unified benchmark datasets and evaluation metrics, as well as potential heavy computational overheads. Therefore, we summarize the future directions as follows:" + } + ] + } + ], + "index": 5 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 77, + 418, + 158, + 428 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 418, + 158, + 428 + ], + "spans": [ + { + "bbox": [ + 77, + 418, + 158, + 428 + ], + "type": "text", + "content": "Future Directions" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 77, + 436, + 407, + 519 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 77, + 436, + 407, + 470 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 436, + 407, + 470 + ], + "spans": [ + { + "bbox": [ + 77, + 436, + 407, + 470 + ], + "type": "text", + "content": "- Developing efficient man-machine interactive log labeling mechanisms and organizing open-source data-sharing platforms accordingly to provide large amounts of high-quality datasets." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 77, + 471, + 407, + 494 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 471, + 407, + 494 + ], + "spans": [ + { + "bbox": [ + 77, + 471, + 407, + 494 + ], + "type": "text", + "content": "- Maintaining effective and comprehensive benchmark datasets, accompanied by a unified performance metric framework for a fair comparison." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 77, + 496, + 407, + 519 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 496, + 407, + 519 + ], + "spans": [ + { + "bbox": [ + 77, + 496, + 407, + 519 + ], + "type": "text", + "content": "- Investigating parallel or simplified strategies for DL-IDS, and studying their integration with log storage systems to achieve end-to-end acceleration." + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 43, + 537, + 240, + 549 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 537, + 240, + 549 + ], + "spans": [ + { + "bbox": [ + 43, + 537, + 240, + 549 + ], + "type": "text", + "content": "7.2 Pre-training Theories and Techniques" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 42, + 551, + 441, + 599 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 551, + 441, + 599 + ], + "spans": [ + { + "bbox": [ + 42, + 551, + 441, + 599 + ], + "type": "text", + "content": "In recent years, significant progress has been made by Large Language Models (LLMs) in the field of DL. Their capacity to understand and generate dialogue has been greatly enhanced as the model parameters of LLMs keep rising. T5 [179], BERT [39], GPT [178], GPT-4 [2], LaMDA [207], and LLaMA [209] are notable examples." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 42, + 599, + 441, + 659 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 599, + 441, + 659 + ], + "spans": [ + { + "bbox": [ + 42, + 599, + 441, + 659 + ], + "type": "text", + "content": "With the development of pre-training techniques, LLMs have been adopted in many fields such as finance [258], education [164], medicine [172], and even other domains of cybersecurity [34, 69, 92]. In contrast, the adoption of LLMs in DL-IDS is stagnant, as shown in Figure 6. We can observe that LLMs developed at full speed beginning in 2019. Their prosperity, however, has not extended to DL-IDS. Until now, the only two DL-IDS that incorporate pre-training techniques, AirTag [41] and" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "spans": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "type": "text", + "content": "1:22" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "spans": [ + { + "bbox": [ + 42, + 672, + 250, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 21 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 49, + 90, + 439, + 251 + ], + "blocks": [ + { + "bbox": [ + 49, + 90, + 439, + 251 + ], + "lines": [ + { + "bbox": [ + 49, + 90, + 439, + 251 + ], + "spans": [ + { + "bbox": [ + 49, + 90, + 439, + 251 + ], + "type": "image", + "image_path": "2b523136b335e2c501d72edce3212459da5c2cf2b38df4681b670950b0f1a8f2.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 42, + 272, + 441, + 305 + ], + "lines": [ + { + "bbox": [ + 42, + 272, + 441, + 305 + ], + "spans": [ + { + "bbox": [ + 42, + 272, + 441, + 305 + ], + "type": "text", + "content": "Fig. 6. Interactions between DL models and DL-IDS. While DL models proposed before 2019 have already leveraged in DL-IDS, the emerging LLMs (or pre-training theories and the techniques) since 2020 remains underdeveloped in this domain." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 42, + 323, + 441, + 441 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 323, + 441, + 441 + ], + "spans": [ + { + "bbox": [ + 42, + 323, + 441, + 441 + ], + "type": "text", + "content": "MAGIC [99], still do not make full use of the potential of LLMs. AirTag pre-trains a BERT model on application logs and detects intrusions in terms of embeddings generated by BERT. MAGIC introduces GraphMAE [90], a model architecture derived from Graph Autoencoder [109] in 2016 but integrated with the famous masked self-supervised learning method [81] in 2022, to conduct self-supervised learning on provenance graphs. MAGIC further designs an adapter to apply the pre-trained model in different detection scenarios. Nevertheless, both AirTag and MAGIC can be regarded as preliminary explorations of pre-training techniques. 
According to the scaling law [102], the performance of LLMs will steadily improve, as the parameters, data, and computation increase. And the reasoning ability of LLMs will suddenly emerge [220], allowing them to chat with humans smoothly. Such advantageous abilities obviously have not been incorporated into DL-IDS." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 42, + 443, + 442, + 478 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 443, + 442, + 478 + ], + "spans": [ + { + "bbox": [ + 42, + 443, + 442, + 478 + ], + "type": "text", + "content": "Nowadays, some researchers [7, 59, 125, 160] have started to explore the applications of LLMs on DL-IDS. Yet the theories and techniques of such combination remain challenges. In the following, we will illustrate the identified issues and then point out the future directions." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 42, + 485, + 443, + 659 + ], + "type": "list", + "angle": 0, + "index": 8, + "blocks": [ + { + "bbox": [ + 42, + 485, + 442, + 604 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 485, + 442, + 604 + ], + "spans": [ + { + "bbox": [ + 42, + 485, + 442, + 604 + ], + "type": "text", + "content": "7.2.1 Trade-off between Reliability and Generalizability. The governing concern for the employment of LLMs in DL-IDS is reliability (or explainability). Although offering generalizability, LLMs have long been denounced to have issues with hallucinations [149, 241], privacy [84, 240, 244], overreliance [107], and backdoor threats [136]. These unexplainable and uncontrollable features are an absolute disaster for DL-IDS. For example, when feeding log data to LLMs, they sometimes are prone to hallucinate and provide wrong detection results. Attacks thus successfully bypass the detection facilities and can exfiltrate sensitive data in the victim computer systems. Another example for this is that sensitive information may leak from LLMs. Hui et al. 
[93] present a prompt leakage attack for LLMs, which is demonstrated to be effective in both offline settings and real-world LLM applications." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 42, + 611, + 443, + 659 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 611, + 443, + 659 + ], + "spans": [ + { + "bbox": [ + 42, + 611, + 443, + 659 + ], + "type": "text", + "content": "7.2.2 Short of Statistical Log Modeling. LLMs are developed on the basis of statistical language modeling [101, 187], which is insufficiently studied for log data. The statistical modeling of natural language can be traced back to the early 1950s, when Shannon pioneered the technique of predicting the next element of natural language text [195] and discussed the n-gram model for
+ } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 22 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 52, + 116, + 431, + 185 + ], + "blocks": [ + { + "bbox": [ + 44, + 84, + 441, + 106 + ], + "lines": [ + { + "bbox": [ + 44, + 84, + 441, + 106 + ], + "spans": [ + { + "bbox": [ + 44, + 84, + 441, + 106 + ], + "type": "text", + "content": "Table 7. Comparison of research advances in statistical modeling of various data. \"NL\", \"PL\" and \"FL\" represent Natural Language, Programming Language, and Formal Language, respectively. Note that PL is a type of FL." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 52, + 116, + 431, + 185 + ], + "lines": [ + { + "bbox": [ + 52, + 116, + 431, + 185 + ], + "spans": [ + { + "bbox": [ + 52, + 116, + 431, + 185 + ], + "type": "table", + "html": "
DataFormContent Generation RulesStatistical Modeling StudiesPre-training
TextNLGrammar, pragmatics, semantics, etc[101, 148, 187, 196]well-done
SpeechNLText rules (see above) and phonetics[104, 167]well-done
Source codePLLexical and syntactic definitions[8, 85, 180]well-done
LogNL + FLLog template defined by developersfuture workunderdeveloped
", + "image_path": "56aa40c700210b6d12b351836313889a9aa1ee9637de6412e6be03e25f4a6f0e.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 44, + 225, + 441, + 320 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 225, + 441, + 320 + ], + "spans": [ + { + "bbox": [ + 44, + 225, + 441, + 320 + ], + "type": "text", + "content": "English [196]. After that, as machine learning came into view of the NLP research communities, language modeling flourished, and many models such as TreeBank [148], word2vec [154, 155] and LSTM [86] were proposed. Over decades, researchers in NLP have gained solid knowledge of language modeling, whose interests gradually shifted to efficiency. An epoch-making model, Transformer [212], was presented using the multi-head self-attention mechanism to fulfill parallel computing, which was widely exploited in popular pre-trained models such as BERT [39] and GPT [2] afterward. It is evident that the success of LLMs comes from the prolonged studies on statistical language modeling." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 44, + 321, + 441, + 428 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 321, + 441, + 428 + ], + "spans": [ + { + "bbox": [ + 44, + 321, + 441, + 428 + ], + "type": "text", + "content": "Unfortunately, there are almost no research efforts on statistical modeling of log data, resulting in pre-training techniques of DL-IDS remaining underdeveloped. By contrast, as illustrated in Table 7, the statistical modeling studies of other types of data have already started. Hindle et al. [85] demonstrate that the source code is very repetitive and predictable, and, in fact, even more so than natural language. 
Driven by this statistical modeling conclusion, DL-based source code applications [54, 70, 124, 126, 203, 233, 235] such as code generation and code clone detection flourish, many of which have already become common applications in LLMs. Similar cases can be found for speech data, with applications such as text-to-speech [71, 169, 183] and speech recognition [14]." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 44, + 429, + 441, + 571 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 429, + 441, + 571 + ], + "spans": [ + { + "bbox": [ + 44, + 429, + 441, + 571 + ], + "type": "text", + "content": "We argue that log data is also created by humans, similar to text, speech, and source code. It is generated according to developer-defined log templates, taking the form of both natural language (e.g., application logs) and formal language (e.g., data provenance in CDM format). Given that natural language (e.g., text and speech) and formal language (e.g., source code) both exhibit positive performance in pre-training, log data urgently demands statistical modeling achievements to facilitate its pre-training research. Although several works [96, 152] have discussed the features of log data, they are orthogonal to the explainable combination of DL and IDS. Compared with the other data types, challenges in statistical log modeling may lie in the fact that logs are extremely long and detailed for reliability purposes; a single log entry is often as long as an entire paragraph of natural-language text. These challenges happen to coincide with the shortcomings of LLMs: limited ability to handle long text and limited trustworthiness of generated content."
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 44, + 583, + 441, + 655 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 583, + 441, + 655 + ], + "spans": [ + { + "bbox": [ + 44, + 583, + 441, + 655 + ], + "type": "text", + "content": "7.2.3 Future Directions. According to the scaling laws [102] and emergent abilities theory [220], as the model size continues to grow, the performance of DL-IDS will increase accordingly. Thus, increasing the number of model parameters will be an inevitable trend for DL-IDS. The underlying research questions include the strategies for incorporating existing LLMs in intrusion detection, since it is infeasible to directly leverage unreliable LLMs to detect intrusions, and the theories and techniques for modeling long and detailed log data. We summarize the future directions as follows:" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "spans": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "type": "text", + "content": "1:24" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 44, + 673, + 249, + 681 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 673, + 249, + 681 + ], + "spans": [ + { + "bbox": [ + 44, + 673, + 249, + 681 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 23 + }, + { + "para_blocks": [ + { + "bbox": [ + 77, + 86, + 158, + 95 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 86, + 158, + 95 + ], + "spans": [ + { + "bbox": [ + 77, + 86, + 158, + 95 + ], + "type": "text", + "content": "Future Directions" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 77, + 103, + 409, + 176 + ], + "type": "list", + "angle": 0, + "index": 5, + "blocks": [ + { + "bbox": [ + 77, + 103, + 408, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 103, + 408, + 138 + ], + "spans": [ + { + "bbox": [ + 77, + 103, + 408, + 138 + ], + "type": "text", + "content": "- Investigating how and where to introduce LLMs into DL-IDS like [165], with the objective of balancing the generalizability provided by LLMs and the reliability required by DL-IDS." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 77, + 139, + 409, + 176 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 139, + 409, + 176 + ], + "spans": [ + { + "bbox": [ + 77, + 139, + 409, + 176 + ], + "type": "text", + "content": "- Exploring fundamental statistical modeling theories for log data. On this basis, designing pre-training frameworks for log data and its downstream tasks such as steps within the workflow of DL-IDS (see Section 3.2)." 
+ } + ] + } + ], + "index": 4 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 43, + 191, + 266, + 203 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 191, + 266, + 203 + ], + "spans": [ + { + "bbox": [ + 43, + 191, + 266, + 203 + ], + "type": "text", + "content": "7.3 Comprehensive Applications and Scenarios" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 42, + 206, + 442, + 242 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 206, + 442, + 242 + ], + "spans": [ + { + "bbox": [ + 42, + 206, + 442, + 242 + ], + "type": "text", + "content": "DL-IDS possess abilities that traditional IDS lack or find difficult to realize, such as generalizability for zero-day attacks and modeling ability for complicated downstream tasks. We will elaborate on the possible new-style applications and discuss the challenges in and introduced by them." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 42, + 248, + 440, + 296 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 248, + 440, + 296 + ], + "spans": [ + { + "bbox": [ + 42, + 248, + 440, + 296 + ], + "type": "text", + "content": "7.3.1 Limited Forward and Backward Tracing Scope. Forward tracing and backward tracing are employed in attack investigation, as illustrated in Section 5.3. Under traditional settings, the forward tracing analyzes the influence a symptom node would have on the victim computer system, and the backward tracing discovers the starting node where the vulnerabilities exist [270]." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 42, + 296, + 442, + 367 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 296, + 442, + 367 + ], + "spans": [ + { + "bbox": [ + 42, + 296, + 442, + 367 + ], + "type": "text", + "content": "We argue that the existing tracing scope is too limited to handle increasingly complicated intrusions, and that DL-IDS can be defined more broadly. 
In addition to investigating scenario graphs of intrusions, DL-IDS are supposed to further investigate why these intrusions occur and how to hold them back. The broader definition introduces more downstream tasks that would be difficult to accomplish without the assistance of DL techniques. Based on Definition 3.3, we reformulate the definition of intrusion in a broad sense for DL-IDS as follows:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 42, + 373, + 442, + 409 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 373, + 442, + 409 + ], + "spans": [ + { + "bbox": [ + 42, + 373, + 442, + 409 + ], + "type": "text", + "content": "Definition 7.1. (Generalized Intrusion). A generalized intrusion is a malicious attempt against a computer, a network, or the corresponding security facilities, whose attributes encompass not only the attempt itself but also its underlying root causes and the relevant control measures." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 42, + 413, + 441, + 497 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 413, + 441, + 497 + ], + "spans": [ + { + "bbox": [ + 42, + 413, + 441, + 497 + ], + "type": "text", + "content": "In this way, the detection of DL-IDS has been extended to the broadly defined intrusions, including their attributes of both root causes and control measures. When executing backward tracing analysis, DL-IDS are not only required to detect the starting symptom nodes of intrusions, but also required to find the root causes of these symptom nodes (i.e., vulnerabilities in source code). In the forward tracing analysis, in addition to detecting the symptom nodes affected by intrusions, DL-IDS should perform an in-depth analysis to discover the potentially compromised nodes and provide control measures for handling intrusions."
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 42, + 498, + 442, + 606 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 498, + 442, + 606 + ], + "spans": [ + { + "bbox": [ + 42, + 498, + 442, + 606 + ], + "type": "text", + "content": "Thankfully, several pioneering works have studied similar problems [25, 144]. In AiVl [25], algorithms to bridge log entries and program models are developed using dynamic-static program analysis. Root causes for the exploited vulnerabilities can thus be derived directly from intrusion detection results. Pedro et al. [144] investigate detection and mitigation methods for DDoS attacks, aiming to control them immediately. Additionally, semi-automated adaptive network defense (SAND) [26] leverages SDN to dynamically generate and deploy defense rules. We note that these research attempts are all based on heuristics, either using pre-defined rules to generate root causes, or developing control measures for specific intrusions. Thus, there is a substantial need to introduce advanced DL techniques to this problem." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 42, + 611, + 443, + 659 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 611, + 443, + 659 + ], + "spans": [ + { + "bbox": [ + 42, + 611, + 443, + 659 + ], + "type": "text", + "content": "7.3.2 Concerns about Data-driven Adversarial Attacks. To validate the detection performance, DL-IDS commonly idealize the experimental data in their threat model. Such idealization, however, leaves DL-IDS with weaknesses that could be exploited by invaders. 
For example, a common assumption is that no attacks compromise the security of the log collection" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "spans": [ + { + "bbox": [ + 44, + 59, + 245, + 69 + ], + "type": "text", + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 426, + 61, + 441, + 69 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 426, + 61, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 426, + 61, + 441, + 69 + ], + "type": "text", + "content": "1:25" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "spans": [ + { + "bbox": [ + 233, + 672, + 441, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 24 + }, + { + "para_blocks": [ + { + "bbox": [ + 42, + 84, + 440, + 132 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 84, + 440, + 132 + ], + "spans": [ + { + "bbox": [ + 42, + 84, + 440, + 132 + ], + "type": "text", + "content": "systems [76, 79, 99, 182], namely, that the log data utilized in DL-IDS is absolutely harmless. But as attacks become stealthier and more complicated, such an assumption apparently can no longer be satisfied. When DL-IDS encounter intentional data poisoning attacks, prediction backdoors could easily be planted as persistent vulnerabilities."
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 42, + 133, + 441, + 228 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 133, + 441, + 228 + ], + "spans": [ + { + "bbox": [ + 42, + 133, + 441, + 228 + ], + "type": "text", + "content": "The robustness of DL-IDS is also challenged by data-driven evasion attacks. To evade detection, malicious behaviors usually mimic benign ones (a.k.a. mimicry attacks), making them hard to detect. As early as 2002, David et al. [215] indicated the danger of mimicry attacks on HIDS. Recently, researchers have started to investigate mimicry attacks on DL-IDS [64, 132, 161] and their studies all demonstrate effective evasion of detection. One study [24] shows that DL-IDS can be plagued by even a trivial perturbation in log data. Aware of this issue, R-caid [65] proposes to embed root causes into the detection model for countering adversarial attacks. However, as noted in recent work [64, 65, 161], data-driven attacks remain a major challenge for DL-IDS." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 42, + 235, + 441, + 270 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 235, + 441, + 270 + ], + "spans": [ + { + "bbox": [ + 42, + 235, + 441, + 270 + ], + "type": "text", + "content": "7.3.3 Underexplored Promising Scenarios. While DL-IDS have recently shown excellent performance in protecting computer and network systems, there are still many promising scenarios for DL-IDS that have not been explored sufficiently." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 42, + 272, + 442, + 390 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 272, + 442, + 390 + ], + "spans": [ + { + "bbox": [ + 42, + 272, + 442, + 390 + ], + "type": "text", + "content": "Mobile edge computing (MEC) [1, 117, 147] is a typical scenario. 
In the MEC environment, mobile computing, network control, and storage are pushed to the network edges so as to enable computation-intensive tasks on resource-limited devices. At the network edges, devices such as Unmanned Aerial Vehicles (UAVs) and New Energy Vehicles (NEVs) usually lack computing power and security facilities, making it difficult to protect them from intrusions [198]. In the meantime, containerized deployment has become one of the dominant ways to deploy microservices. Detecting intrusions on containers is thus of great importance, for which ReplicaWatcher [46] is a representative work with a special design for microservices. Additionally, industrial networks are characterized by high fidelity, stability, and real-time responsiveness [110], leading to challenges in adapting DL-IDS to their infrastructures." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 42, + 397, + 441, + 434 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 397, + 441, + 434 + ], + "spans": [ + { + "bbox": [ + 42, + 397, + 441, + 434 + ], + "type": "text", + "content": "7.3.4 Future Directions. Although there has been plenty of research on DL-IDS, many applications and scenarios remain underdeveloped. DL-IDS ought to be more broadly defined and applied. 
Based on the above discussion, we briefly summarize the future directions as follows:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 77, + 442, + 158, + 452 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 442, + 158, + 452 + ], + "spans": [ + { + "bbox": [ + 77, + 442, + 158, + 452 + ], + "type": "text", + "content": "Future Directions" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 77, + 460, + 407, + 542 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 77, + 460, + 407, + 493 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 460, + 407, + 493 + ], + "spans": [ + { + "bbox": [ + 77, + 460, + 407, + 493 + ], + "type": "text", + "content": "- Extending the scope of forward tracing and backward tracing to intrusions in a broad sense, so as to generate root causes and control measures for the broadly defined intrusions." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 77, + 496, + 407, + 518 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 496, + 407, + 518 + ], + "spans": [ + { + "bbox": [ + 77, + 496, + 407, + 518 + ], + "type": "text", + "content": "- Understanding data-driven adversarial attacks such as data poisoning attacks and mimicry attacks for devising more robust DL-IDS." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 77, + 519, + 407, + 542 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 519, + 407, + 542 + ], + "spans": [ + { + "bbox": [ + 77, + 519, + 407, + 542 + ], + "type": "text", + "content": "- Applying DL-IDS widely in more underexplored yet promising scenarios, and, if possible, implementing unified frameworks for them."
+ } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 43, + 560, + 128, + 570 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 560, + 128, + 570 + ], + "spans": [ + { + "bbox": [ + 43, + 560, + 128, + 570 + ], + "type": "text", + "content": "8 CONCLUSION" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 42, + 576, + 440, + 658 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 576, + 440, + 658 + ], + "spans": [ + { + "bbox": [ + 42, + 576, + 440, + 658 + ], + "type": "text", + "content": "DL techniques bring reform to IDS; their generalizability enables IDS to detect intrusions that have never been encountered before. Recognizing that the IDS development over the past decade primarily comes from DL-IDS, this survey revisits the common workflow for DL-IDS, elaborates on each module in the workflow, and taxonomizes the research papers innovatively based on their DL techniques. Publicly available datasets for stimulating future research are introduced subsequently. In addition, from the perspective of DL, this survey digs deep into the potential challenges, emerging trends, and future directions for DL-IDS. 
The discussions suggest to us that" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "spans": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "type": "text", + "content": "1:26" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 115, + 59, + 441, + 69 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 42, + 672, + 250, + 681 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 672, + 250, + 681 + ], + "spans": [ + { + "bbox": [ + 42, + 672, + 250, + 681 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 25 + }, + { + "para_blocks": [ + { + "bbox": [ + 42, + 85, + 441, + 109 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 85, + 441, + 109 + ], + "spans": [ + { + "bbox": [ + 42, + 85, + 441, + 109 + ], + "type": "text", + "content": "DL-IDS are, fascinatingly, in an underdeveloped state. We hope that this survey can somewhat inspire current researchers and facilitate future investigations on DL-IDS." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 43, + 119, + 154, + 129 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 119, + 154, + 129 + ], + "spans": [ + { + "bbox": [ + 43, + 119, + 154, + 129 + ], + "type": "text", + "content": "ACKNOWLEDGMENTS" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 42, + 134, + 421, + 146 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 134, + 421, + 146 + ], + "spans": [ + { + "bbox": [ + 42, + 134, + 421, + 146 + ], + "type": "text", + "content": "This research is sponsored in part by the NSFC program (No. 6212780016 and No. 62021002)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 45, + 155, + 109, + 165 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 155, + 109, + 165 + ], + "spans": [ + { + "bbox": [ + 45, + 155, + 109, + 165 + ], + "type": "text", + "content": "REFERENCES" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 48, + 169, + 442, + 658 + ], + "type": "list", + "angle": 0, + "index": 25, + "blocks": [ + { + "bbox": [ + 52, + 169, + 442, + 190 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 52, + 169, + 442, + 190 + ], + "spans": [ + { + "bbox": [ + 52, + 169, + 442, + 190 + ], + "type": "text", + "content": "[1] Nasir Abbas, Yan Zhang, Amir Taherkordi, and Tor Skeie. 2017. Mobile Edge Computing: A Survey. IEEE Internet of Things Journal 5, 1 (2017), 450-465." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 52, + 190, + 441, + 219 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 52, + 190, + 441, + 219 + ], + "spans": [ + { + "bbox": [ + 52, + 190, + 441, + 219 + ], + "type": "text", + "content": "[2] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 Technical Report. arXiv preprint arXiv:2303.08774 (2023)." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 52, + 220, + 440, + 239 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 52, + 220, + 440, + 239 + ], + "spans": [ + { + "bbox": [ + 52, + 220, + 440, + 239 + ], + "type": "text", + "content": "[3] Amey Agrawal, Rohit Karlupia, and Rajat Gupta. 2019. Logan: A Distributed Online Log Parser. In Proceedings of the 2019 IEEE 35th International Conference on Data Engineering. IEEE, 1946-1951." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 52, + 240, + 440, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 52, + 240, + 440, + 270 + ], + "spans": [ + { + "bbox": [ + 52, + 240, + 440, + 270 + ], + "type": "text", + "content": "[4] Zeeshan Ahmad, Adnan Shahid Khan, Cheah Wai Shiang, Johari Abdullah, and Farhan Ahmad. 2021. Network Intrusion Detection System: A Systematic Study of Machine Learning and Deep Learning Approaches. Transactions on Emerging Telecommunications Technologies 32, 1 (2021), e4150." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 52, + 270, + 440, + 299 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 52, + 270, + 440, + 299 + ], + "spans": [ + { + "bbox": [ + 52, + 270, + 440, + 299 + ], + "type": "text", + "content": "[5] Farrukh Ahmed, Urooj Jahangir, Hamad Rahim, Kamran Ali, et al. 2020. Centralized Log Management Using Elasticsearch, Logstash and Kibana. In Proceedings of the 2020 International Conference on Information Science and Communication Technology. IEEE, 1-7." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 52, + 300, + 440, + 329 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 52, + 300, + 440, + 329 + ], + "spans": [ + { + "bbox": [ + 52, + 300, + 440, + 329 + ], + "type": "text", + "content": "[6] Mohannad Alhanahnah, Shiqing Ma, Ashish Gehani, Gabriela F Ciocarlie, Vinod Yegneswaran, Somesh Jha, and Xiangyu Zhang. 2022. 
autoMPI: Automated Multiple Perspective Attack Investigation with Semantics Aware Execution Partitioning. IEEE Transactions on Software Engineering 49, 4 (2022), 2761-2775." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 52, + 330, + 440, + 349 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 52, + 330, + 440, + 349 + ], + "spans": [ + { + "bbox": [ + 52, + 330, + 440, + 349 + ], + "type": "text", + "content": "[7] Tarek Ali. 2024. Next-Generation Intrusion Detection Systems with LLMs: Real-Time Anomaly Detection, Explainable AI, and Adaptive Data Generation. Master's thesis. T. Ali." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 52, + 349, + 440, + 369 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 52, + 349, + 440, + 369 + ], + "spans": [ + { + "bbox": [ + 52, + 349, + 440, + 369 + ], + "type": "text", + "content": "[8] Miltiadis Allamanis, Earl T Barr, Premkumar Devanbu, and Charles Sutton. 2018. A Survey of Machine Learning for Big Code and Naturalness. ACM Computing Surveys 51, 4 (2018), 1-37." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 52, + 370, + 440, + 398 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 52, + 370, + 440, + 398 + ], + "spans": [ + { + "bbox": [ + 52, + 370, + 440, + 398 + ], + "type": "text", + "content": "[9] Abdulellah Alsaheel, Yuhong Nan, Shiqing Ma, Le Yu, Gregory Walkup, Z Berkay Celik, Xiangyu Zhang, and Dongyan Xu. 2021. ATLAS: A Sequence-based Learning Approach for Attack Investigation. In Proceedings of the 30th USENIX Security Symposium. 3005-3022." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 48, + 399, + 440, + 429 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 399, + 440, + 429 + ], + "spans": [ + { + "bbox": [ + 48, + 399, + 440, + 429 + ], + "type": "text", + "content": "[10] Adel Alshamrani, Sowmya Myneni, Ankur Chowdhary, and Dijiang Huang. 2019. 
A Survey on Advanced Persistent Threats: Techniques, Solutions, Challenges, and Research Opportunities. IEEE Communications Surveys and Tutorials 21, 2 (2019), 1851-1877. https://doi.org/10.1109/COMST.2019.2891891" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 48, + 429, + 440, + 459 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 429, + 440, + 459 + ], + "spans": [ + { + "bbox": [ + 48, + 429, + 440, + 459 + ], + "type": "text", + "content": "[11] Enes Altinisik, Fatih Deniz, and Hürev Taha Sencar. 2023. ProvG-Searcher: A Graph Representation Learning Approach for Efficient Provenance Graph Search. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security. 2247-2261." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 48, + 460, + 307, + 468 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 460, + 307, + 468 + ], + "spans": [ + { + "bbox": [ + 48, + 460, + 307, + 468 + ], + "type": "text", + "content": "[12] Clarivate Analytics. 1997. Web of Science. https://www.webofscience.com" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 48, + 469, + 440, + 498 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 469, + 440, + 498 + ], + "spans": [ + { + "bbox": [ + 48, + 469, + 440, + 498 + ], + "type": "text", + "content": "[13] Md Monowar Anjum, Shahrear Iqbal, and Benoit Hamelin. 2021. Analyzing the Usefulness of the DARPA OpTC Dataset in Cyber Threat Detection Research. In Proceedings of the 26th ACM Symposium on Access Control Models and Technologies. 27-32." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 48, + 499, + 440, + 518 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 499, + 440, + 518 + ], + "spans": [ + { + "bbox": [ + 48, + 499, + 440, + 518 + ], + "type": "text", + "content": "[14] Alexei Baevski, Wei-Ning Hsu, Alexis Conneau, and Michael Auli. 2021. Unsupervised Speech Recognition. 
Advances in Neural Information Processing Systems 34 (2021), 27826-27839." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 48, + 519, + 440, + 548 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 519, + 440, + 548 + ], + "spans": [ + { + "bbox": [ + 48, + 519, + 440, + 548 + ], + "type": "text", + "content": "[15] Elizabeth Bautista, Nitin Sukhija, and Siqi Deng. 2022. Shasta Log Aggregation, Monitoring and Alerting in HPC Environments with Grafana Loki and ServiceNow. In Proceedings of the 2022 IEEE International Conference on Cluster Computing. IEEE, 602-610." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 48, + 549, + 440, + 578 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 549, + 440, + 578 + ], + "spans": [ + { + "bbox": [ + 48, + 549, + 440, + 578 + ], + "type": "text", + "content": "[16] Jack Beerman, David Berent, Zach Falter, and Suman Bhunia. 2023. A Review of Colonial Pipeline Ransomware Attack. In Proceedings of the 2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops. IEEE, 8-15." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 48, + 578, + 440, + 598 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 578, + 440, + 598 + ], + "spans": [ + { + "bbox": [ + 48, + 578, + 440, + 598 + ], + "type": "text", + "content": "[17] Tristan Bilot, Nour El Madhoun, Khaldoun Al Agha, and Anis Zouaoui. 2023. Graph Neural Networks for Intrusion Detection: A Survey. IEEE Access 11 (2023), 49114-49139." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 48, + 599, + 440, + 628 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 599, + 440, + 628 + ], + "spans": [ + { + "bbox": [ + 48, + 599, + 440, + 628 + ], + "type": "text", + "content": "[18] Tristan Bilot, Baoxiang Jiang, Zefeng Li, Nour El Madhoun, Khaldoun Al Agha, Anis Zouaoui, and Thomas Pasquier. 2025. 
Sometimes Simpler is Better: A Comprehensive Analysis of State-of-the-Art Provenance-Based Intrusion Detection Systems. In 34th USENIX Security Symposium (USENIX Security 25). 7193-7212." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 48, + 629, + 440, + 658 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 629, + 440, + 658 + ], + "spans": [ + { + "bbox": [ + 48, + 629, + 440, + 658 + ], + "type": "text", + "content": "[19] Peter Bodik, Moises Goldszmidt, Armando Fox, Dawn B Woodard, and Hans Andersen. 2010. Fingerprinting the Datacenter: Automated Classification of Performance Crises. In Proceedings of the 5th European Conference on Computer Systems. 111-124." + } + ] + } + ], + "index": 24 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 43, + 60, + 245, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 60, + 245, + 69 + ], + "spans": [ + { + "bbox": [ + 43, + 60, + 245, + 69 + ], + "type": "text", + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "spans": [ + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "type": "text", + "content": "1:27" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 233, + 673, + 440, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 233, + 673, + 440, + 682 + ], + "spans": [ + { + "bbox": [ + 233, + 673, + 440, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ] + } + ], + "index": 26 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 26 + }, + { + "para_blocks": [ + { + "bbox": [ + 48, + 87, + 441, + 645 + ], + "type": "list", + "angle": 0, + "index": 26, + "blocks": [ + { + "bbox": [ + 48, + 87, + 441, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 87, + 441, + 106 + ], + "spans": [ + { + "bbox": [ + 48, + 87, + 441, + 106 + ], + "type": "text", + "content": "[20] Carolin E Brandt, Annibale Panichella, Andy Zaidman, and Moritz Beller. 2020. LogChunks: A Data Set for Build Log Analysis. In Proceedings of the 17th International Conference on Mining Software Repositories. 583-587." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 48, + 107, + 440, + 125 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 107, + 440, + 125 + ], + "spans": [ + { + "bbox": [ + 48, + 107, + 440, + 125 + ], + "type": "text", + "content": "[21] Robert A Bridges, Tarrah R Glass-Vanderlan, Michael D Iannacone, Maria S Vincent, and Qian Chen. 2019. A Survey of Intrusion Detection Systems Leveraging Host Data. ACM Computing Surveys 52, 6 (2019), 1-35." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 48, + 127, + 441, + 156 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 127, + 441, + 156 + ], + "spans": [ + { + "bbox": [ + 48, + 127, + 441, + 156 + ], + "type": "text", + "content": "[22] Dainius Čeponis and Nikolaj Goranin. 2018. Towards A Robust Method of Dataset Generation of Malicious Activity for Anomaly-Based HIDS Training and Presentation of AWSCTD Dataset. Baltic Journal of Modern Computing 6, 3 (2018), 217-234."
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 48, + 156, + 440, + 177 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 156, + 440, + 177 + ], + "spans": [ + { + "bbox": [ + 48, + 156, + 440, + 177 + ], + "type": "text", + "content": "[23] Xiaolin Chai, Hang Zhang, Jue Zhang, Yan Sun, and Sajal K Das. 2024. Log Sequence Anomaly Detection based on Template and Parameter Parsing via BERT. IEEE Transactions on Dependable and Secure Computing (2024)." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 48, + 177, + 441, + 196 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 177, + 441, + 196 + ], + "spans": [ + { + "bbox": [ + 48, + 177, + 441, + 196 + ], + "type": "text", + "content": "[24] Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. 2018. Adversarial Attacks and Defences: A Survey. arXiv preprint arXiv:1810.00069 (2018)." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 48, + 196, + 441, + 226 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 196, + 441, + 226 + ], + "spans": [ + { + "bbox": [ + 48, + 196, + 441, + 226 + ], + "type": "text", + "content": "[25] Changhua Chen, Tingzhen Yan, Chenxuan Shi, Hao Xi, Zhirui Fan, Hai Wan, and Xibin Zhao. 2024. The Last Mile of Attack Investigation: Audit Log Analysis towards Software Vulnerability Location. IEEE Transactions on Information Forensics and Security (2024)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 48, + 226, + 441, + 246 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 226, + 441, + 246 + ], + "spans": [ + { + "bbox": [ + 48, + 226, + 441, + 246 + ], + "type": "text", + "content": "[26] Haoyu Chen, Deqing Zou, Hai Jin, Shouhuai Xu, and Bin Yuan. 2022. SAND: Semi-Automated Adaptive Network Defense via Programmable Rule Generation and Deployment. Science China Information Sciences 65, 7 (2022), 172102." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 48, + 246, + 440, + 266 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 246, + 440, + 266 + ], + "spans": [ + { + "bbox": [ + 48, + 246, + 440, + 266 + ], + "type": "text", + "content": "[27] Tao Chen, Haiyan Suo, and Wenqian Xu. 2023. Design of Log Collection Architecture Based on Cloud Native Technology. In Proceedings of the 2023 4th Information Communication Technologies Conference. IEEE, 311-315." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 48, + 267, + 441, + 295 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 267, + 441, + 295 + ], + "spans": [ + { + "bbox": [ + 48, + 267, + 441, + 295 + ], + "type": "text", + "content": "[28] Wenrui Cheng, Qixuan Yuan, Tiantian Zhu, Tieming Chen, Jie Ying, Aohan Zheng, Mingjun Ma, Chunlin Xiong, Mingqi Lv, and Yan Chen. 2025. TAGAPT: Towards Automatic Generation of APT Samples with Provenance-level Granularity. IEEE Transactions on Information Forensics and Security (2025)." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 48, + 296, + 441, + 325 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 296, + 441, + 325 + ], + "spans": [ + { + "bbox": [ + 48, + 296, + 441, + 325 + ], + "type": "text", + "content": "[29] Zijun Cheng, Qiujian Lv, Jinyuan Liang, Yan Wang, Degang Sun, Thomas Pasquier, and Xueyuan Han. 2024. Kairos: Practical Intrusion Detection and Investigation Using Whole-System Provenance. In Proceedings of the 2024 IEEE Symposium on Security and Privacy. IEEE, 3533–3551." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 48, + 326, + 441, + 354 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 326, + 441, + 354 + ], + "spans": [ + { + "bbox": [ + 48, + 326, + 441, + 354 + ], + "type": "text", + "content": "[30] Guojun Chu, Jingyu Wang, Qi Qi, Haifeng Sun, Shimin Tao, and Jianxin Liao. 2021. 
Prefix-Graph: A Versatile Log Parsing Approach Merging Prefix Tree with Probabilistic Graph. In Proceedings of the 2021 IEEE 37th International Conference on Data Engineering. IEEE, 2411-2422." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 48, + 355, + 429, + 365 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 355, + 429, + 365 + ], + "spans": [ + { + "bbox": [ + 48, + 355, + 429, + 365 + ], + "type": "text", + "content": "[31] The MITRE Corporation. 2025. CVE List. https://github.com/CVEProject/cvelistV5/archive/refs/heads/main.zip" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 48, + 366, + 440, + 385 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 366, + 440, + 385 + ], + "spans": [ + { + "bbox": [ + 48, + 366, + 440, + 385 + ], + "type": "text", + "content": "[32] Oihana Coustie, Josiane Mothe, Olivier Teste, and Xavier Baril. 2020. METING: A Robust Log Parser Based on Frequent n-Gram Mining. In Proceedings of the 2020 IEEE International Conference on Web Services. IEEE, 84-88." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 48, + 386, + 441, + 415 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 386, + 441, + 415 + ], + "spans": [ + { + "bbox": [ + 48, + 386, + 441, + 415 + ], + "type": "text", + "content": "[33] Jian Cui, Hanna Kim, Eugene Jang, Dayeon Yim, Kicheol Kim, Yongjae Lee, Jin-Woo Chung, Seungwon Shin, and Xiaojing Liao. 2024. Tweezers: A Framework for Security Event Detection via Event Attribution-centric Tweet Embedding. In Proceedings of the Network and Distributed System Security Symposium." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 48, + 416, + 441, + 444 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 416, + 441, + 444 + ], + "spans": [ + { + "bbox": [ + 48, + 416, + 441, + 444 + ], + "type": "text", + "content": "[34] Chris Cummins, Volker Seeker, Dejan Grubisic, Baptiste Roziere, Jonas Gehring, Gabriel Synnaeve, and Hugh Leather. 2025. LLM Compiler: Foundation Language Models for Compiler Optimization. In Proceedings of the 34th ACM SIGPLAN International Conference on Compiler Construction. 141-153." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 48, + 445, + 441, + 465 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 445, + 441, + 465 + ], + "spans": [ + { + "bbox": [ + 48, + 445, + 441, + 465 + ], + "type": "text", + "content": "[35] Hetong Dai, Heng Li, Che-Shao Chen, Weiyi Shang, and Tse-Hsun Chen. 2020. Logram: Efficient Log Parsing Using " + }, + { + "bbox": [ + 48, + 445, + 441, + 465 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 48, + 445, + 441, + 465 + ], + "type": "text", + "content": " -Gram Dictionaries. IEEE Transactions on Software Engineering 48, 3 (2020), 879-892." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 48, + 465, + 441, + 495 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 465, + 441, + 495 + ], + "spans": [ + { + "bbox": [ + 48, + 465, + 441, + 495 + ], + "type": "text", + "content": "[36] Hetong Dai, Yiming Tang, Heng Li, and Weiyi Shang. 2023. PILAR: Studying and Mitigating the Influence of Configurations on Log Parsing. In Proceedings of the 2023 IEEE/ACM 45th International Conference on Software Engineering. IEEE, 818-829." 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 48, + 496, + 400, + 505 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 496, + 400, + 505 + ], + "spans": [ + { + "bbox": [ + 48, + 496, + 400, + 505 + ], + "type": "text", + "content": "[37] DARPA. 2019. Operationally Transparent Cyber Dataset. https://github.com/FiveDirections/OpTC-data" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 48, + 505, + 441, + 525 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 505, + 441, + 525 + ], + "spans": [ + { + "bbox": [ + 48, + 505, + 441, + 525 + ], + "type": "text", + "content": "[38] DARPA. 2022. The DARPA Transparent Computing (TC) program Data Release. https://github.com/darpa-i2o/Transparent-Computing" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 48, + 525, + 441, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 525, + 441, + 555 + ], + "spans": [ + { + "bbox": [ + 48, + 525, + 441, + 555 + ], + "type": "text", + "content": "[39] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 4171–4186." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 48, + 555, + 441, + 575 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 555, + 441, + 575 + ], + "spans": [ + { + "bbox": [ + 48, + 555, + 441, + 575 + ], + "type": "text", + "content": "[40] Hailun Ding, Juan Zhai, Dong Deng, and Shiqing Ma. 2023. The Case for Learned Provenance Graph Storage Systems. In Proceedings of the 32nd USENIX Security Symposium. 3277-3294." 
+ } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 48, + 575, + 441, + 595 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 575, + 441, + 595 + ], + "spans": [ + { + "bbox": [ + 48, + 575, + 441, + 595 + ], + "type": "text", + "content": "[41] Hailun Ding, Juan Zhai, Yuhong Nan, and Shiqing Ma. 2023. AirTag: Towards Automated Attack Investigation by Unsupervised Learning with Log Texts. In Proceedings of the 32nd USENIX Security Symposium. 373-390." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 48, + 596, + 441, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 596, + 441, + 624 + ], + "spans": [ + { + "bbox": [ + 48, + 596, + 441, + 624 + ], + "type": "text", + "content": "[42] Feng Dong, Liu Wang, Xu Nie, Fei Shao, Haoyu Wang, Ding Li, Xiapu Luo, and Xusheng Xiao. 2023. DistDet: A Cost-Effective Distributed Cyber Threat Detection System. In Proceedings of the 32nd USENIX Security Symposium. 6575–6592." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 48, + 624, + 440, + 645 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 624, + 440, + 645 + ], + "spans": [ + { + "bbox": [ + 48, + 624, + 440, + 645 + ], + "type": "text", + "content": "[43] Ying Dong, Yuqing Zhang, Hua Ma, Qianru Wu, Qixu Liu, Kai Wang, and Wenjie Wang. 2018. An Adaptive System for Detecting Malicious Queries in Web Attacks. Science China Information Sciences 61, 3 (2018), 032114." 
+ } + ] + } + ], + "index": 25 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "spans": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "type": "text", + "content": "1:28" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 43, + 672, + 249, + 681 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 672, + 249, + 681 + ], + "spans": [ + { + "bbox": [ + 43, + 672, + 249, + 681 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 27 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 27 + }, + { + "para_blocks": [ + { + "bbox": [ + 48, + 86, + 441, + 655 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 48, + 86, + 441, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 86, + 441, + 106 + ], + "spans": [ + { + "bbox": [ + 48, + 86, + 441, + 106 + ], + "type": "text", + "content": "[44] Min Du and Feifei Li. 2016. Spell: Streaming Parsing of System Event Logs. In Proceedings of the 2016 IEEE 16th International Conference on Data Mining. IEEE, 859-864." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 48, + 107, + 441, + 136 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 107, + 441, + 136 + ], + "spans": [ + { + "bbox": [ + 48, + 107, + 441, + 136 + ], + "type": "text", + "content": "[45] Min Du, Feifei Li, Guineng Zheng, and Vivek Srikumar. 2017. 
DeepLog: Anomaly Detection and Diagnosis from System Logs through Deep Learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 1285-1298." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 48, + 136, + 441, + 167 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 136, + 441, + 167 + ], + "spans": [ + { + "bbox": [ + 48, + 136, + 441, + 167 + ], + "type": "text", + "content": "[46] Asbat El Khairi, Marco Caselli, Andreas Peter, and Andrea Continella. 2024. REPLICAWATCHER: Training-less Anomaly Detection in Containerized Microservices. In Proceedings of the Network and Distributed System Security Symposium." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 48, + 167, + 359, + 176 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 167, + 359, + 176 + ], + "spans": [ + { + "bbox": [ + 48, + 167, + 359, + 176 + ], + "type": "text", + "content": "[47] Elastic. 2009. Logstash: Collect, parse, and transform logs. https://www.elastic.co/logstash/" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 48, + 177, + 435, + 186 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 177, + 435, + 186 + ], + "spans": [ + { + "bbox": [ + 48, + 177, + 435, + 186 + ], + "type": "text", + "content": "[48] Elastic. 2010. Elasticsearch: The official distributed search & analytics engine. https://www.elastic.co/elasticsearch/" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 48, + 187, + 359, + 196 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 187, + 359, + 196 + ], + "spans": [ + { + "bbox": [ + 48, + 187, + 359, + 196 + ], + "type": "text", + "content": "[49] Elastic. 2013. Kibana: Explore, visualize, and discover data. 
https://www.elastic.co/kibana/" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 48, + 197, + 349, + 206 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 197, + 349, + 206 + ], + "spans": [ + { + "bbox": [ + 48, + 197, + 349, + 206 + ], + "type": "text", + "content": "[50] Elsevier. 2021. Scopus. https://www.scopus.com/search/form.uri?display=basic#basic" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 48, + 207, + 441, + 226 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 207, + 441, + 226 + ], + "spans": [ + { + "bbox": [ + 48, + 207, + 441, + 226 + ], + "type": "text", + "content": "[51] Dave Evans. 2012. The Internet of Everything: How More Relevant and Valuable Connections will Change the World. Cisco IBSG 2012 (2012), 1-9." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 48, + 226, + 441, + 255 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 226, + 441, + 255 + ], + "spans": [ + { + "bbox": [ + 48, + 226, + 441, + 255 + ], + "type": "text", + "content": "[52] Pengcheng Fang, Peng Gao, Changlin Liu, Erman Ayday, Kangkook Jee, Ting Wang, Yanfang Fanny Ye, Zhuotao Liu, and Xusheng Xiao. 2022. Back-Propagating System Dependency Impact for Attack Investigation. In Proceedings of the 31st USENIX Security Symposium. 2461–2478." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 48, + 257, + 440, + 276 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 257, + 440, + 276 + ], + "spans": [ + { + "bbox": [ + 48, + 257, + 440, + 276 + ], + "type": "text", + "content": "[53] Peng Fei, Zhou Li, Zhiying Wang, Xiao Yu, Ding Li, and Kangkook Jee. 2021. SEAL: Storage-Efficient Causality Analysis on Enterprise Logs with Query-Friendly Compression. In Proceedings of the 30th USENIX Security Symposium."
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 48, + 277, + 441, + 306 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 277, + 441, + 306 + ], + "spans": [ + { + "bbox": [ + 48, + 277, + 441, + 306 + ], + "type": "text", + "content": "[54] Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. 2020. CodeBERT: A Pre-Trained Model for Programming and Natural Languages. In Findings of the Association for Computational Linguistics: EMNLP 2020. 1536-1547." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 48, + 306, + 405, + 316 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 306, + 405, + 316 + ], + "spans": [ + { + "bbox": [ + 48, + 306, + 405, + 316 + ], + "type": "text", + "content": "[55] Free Software Foundation. 1992. gzip: GNU zip compression utility. https://www.gnu.org/software/gzip/" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 48, + 316, + 440, + 335 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 316, + 440, + 335 + ], + "spans": [ + { + "bbox": [ + 48, + 316, + 440, + 335 + ], + "type": "text", + "content": "[56] Chuanpu Fu, Qi Li, Meng Shen, and Ke Xu. 2021. Realtime Robust Malicious Traffic Detection via Frequency Domain Analysis. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security. 3431-3446." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 48, + 336, + 441, + 365 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 336, + 441, + 365 + ], + "spans": [ + { + "bbox": [ + 48, + 336, + 441, + 365 + ], + "type": "text", + "content": "[57] Chuanpu Fu, Qi Li, Meng Shen, and Ke Xu. 2024. Detecting Tunnelled Flooding Traffic via Deep Semantic Analysis of Packet Length Patterns. In Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security. 3659-3673." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 48, + 366, + 441, + 396 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 366, + 441, + 396 + ], + "spans": [ + { + "bbox": [ + 48, + 366, + 441, + 396 + ], + "type": "text", + "content": "[58] Chuanpu Fu, Qi Li, Ke Xu, and Jianping Wu. 2023. Point Cloud Analysis for ML-based Malicious Traffic Detection: Reducing Majorities of False Positive Alarms. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security. 1005-1019." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 48, + 396, + 441, + 426 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 396, + 441, + 426 + ], + "spans": [ + { + "bbox": [ + 48, + 396, + 441, + 426 + ], + "type": "text", + "content": "[59] Oscar G. Lira, Alberto Marroquin, and Marco Antonio To. 2024. Harnessing the Advanced Capabilities of LLM for Adaptive Intrusion Detection Systems. In Proceedings of the International Conference on Advanced Information Networking and Applications. Springer, 453-464." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 48, + 426, + 441, + 455 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 426, + 441, + 455 + ], + "spans": [ + { + "bbox": [ + 48, + 426, + 441, + 455 + ], + "type": "text", + "content": "[60] Peng Gao, Xusheng Xiao, Zhichun Li, Fengyuan Xu, Sanjeev R Kulkarni, and Prateek Mittal. 2018. AIQL: Enabling Efficient Attack Investigation from System Monitoring Data. In Proceedings of the 2018 USENIX Annual Technical Conference. 113-126." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 48, + 455, + 441, + 485 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 455, + 441, + 485 + ], + "spans": [ + { + "bbox": [ + 48, + 455, + 441, + 485 + ], + "type": "text", + "content": "[61] Ashish Gehani and Dawood Tariq. 2012. SPADE: Support for Provenance Auditing in Distributed Environments. 
In Proceedings of the ACM/IFIP/USENIX International Conference on Distributed Systems Platforms and Open Distributed Processing. Springer, 101-120." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 48, + 486, + 441, + 515 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 486, + 441, + 515 + ], + "spans": [ + { + "bbox": [ + 48, + 486, + 441, + 515 + ], + "type": "text", + "content": "[62] Jalal Ghadermazi, Soumyadeep Hore, Ankit Shah, and Nathaniel D Bastian. 2025. GTAE-IDS: Graph Transformer-Based Autoencoder Framework for Real-Time Network Intrusion Detection. IEEE Transactions on Information Forensics and Security (2025)." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 48, + 516, + 441, + 535 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 516, + 441, + 535 + ], + "spans": [ + { + "bbox": [ + 48, + 516, + 441, + 535 + ], + "type": "text", + "content": "[63] Joshua Glasser and Brian Lindauer. 2013. Bridging the gap: A Pragmatic Approach to Generating Insider Threat Data. In Proceedings of the IEEE Symposium on Security and Privacy Workshops. IEEE, 98-104." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 48, + 536, + 441, + 565 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 536, + 441, + 565 + ], + "spans": [ + { + "bbox": [ + 48, + 536, + 441, + 565 + ], + "type": "text", + "content": "[64] Akul Goyal, Xueyuan Han, Gang Wang, and Adam Bates. 2023. Sometimes, You Aren't What You Do: Mimicry Attacks Against Provenance Graph Host Intrusion Detection Systems. In Proceedings of the Network and Distributed System Security Symposium." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 48, + 565, + 440, + 585 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 565, + 440, + 585 + ], + "spans": [ + { + "bbox": [ + 48, + 565, + 440, + 585 + ], + "type": "text", + "content": "[65] Akul Goyal, Gang Wang, and Adam Bates. 2024. 
R-CAID: Embedding Root Cause Analysis within Provenance-Based Intrusion Detection. In Proceedings of the 2024 IEEE Symposium on Security and Privacy. IEEE, 3515-3532." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 48, + 586, + 440, + 604 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 586, + 440, + 604 + ], + "spans": [ + { + "bbox": [ + 48, + 586, + 440, + 604 + ], + "type": "text", + "content": "[66] Brendan Gregg and Jim Mauro. 2011. DTrace: Dynamic Tracing in Oracle Solaris, Mac OS X, and FreeBSD. Prentice Hall Professional." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 48, + 604, + 441, + 644 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 604, + 441, + 644 + ], + "spans": [ + { + "bbox": [ + 48, + 604, + 441, + 644 + ], + "type": "text", + "content": "[67] John Griffith, Derrick Kong, Armando Caro, Brett Benyo, Joud Khoury, Timothy Upthegrove, Timothy Christovich, Stanislav Ponomorov, Ali Sydney, Arjun Saini, et al. 2020. Scalable Transparency Architecture for Research Collaboration (STARC)-DARPA Transparent Computing (TC) Program. Raytheon BBN Technologies Corporation Cambridge United States (2020)." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 48, + 645, + 303, + 655 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 645, + 303, + 655 + ], + "spans": [ + { + "bbox": [ + 48, + 645, + 303, + 655 + ], + "type": "text", + "content": "[68] Steve Grubb. 2008. Linux audit. 
https://people.redhat.com/sgrubb/audit/" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 60, + 244, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 60, + 244, + 69 + ], + "spans": [ + { + "bbox": [ + 44, + 60, + 244, + 69 + ], + "type": "text", + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "spans": [ + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "type": "text", + "content": "1:29" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 233, + 672, + 440, + 681 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 233, + 672, + 440, + 681 + ], + "spans": [ + { + "bbox": [ + 233, + 672, + 440, + 681 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 28 + }, + { + "para_blocks": [ + { + "bbox": [ + 48, + 86, + 442, + 645 + ], + "type": "list", + "angle": 0, + "index": 23, + "blocks": [ + { + "bbox": [ + 48, + 86, + 442, + 115 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 86, + 442, + 115 + ], + "spans": [ + { + "bbox": [ + 48, + 86, + 442, + 115 + ], + "type": "text", + "content": "[69] Qiuhan Gu. 2023. LLM-Based Code Generation Method for Golang Compiler Testing. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 2201-2203." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 48, + 115, + 441, + 146 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 115, + 441, + 146 + ], + "spans": [ + { + "bbox": [ + 48, + 115, + 441, + 146 + ], + "type": "text", + "content": "[70] Xiaodong Gu, Meng Chen, Yalan Lin, Yuhan Hu, Hongyu Zhang, Chengcheng Wan, Zhao Wei, Yong Xu, and Juhong Wang. 2025. On the Effectiveness of Large Language Models in Domain-Specific Code Generation. ACM Transactions on Software Engineering and Methodology 34, 3 (2025), 1-22." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 48, + 147, + 441, + 176 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 147, + 441, + 176 + ], + "spans": [ + { + "bbox": [ + 48, + 147, + 441, + 176 + ], + "type": "text", + "content": "[71] Yiwei Guo, Chenpeng Du, Ziyang Ma, Xie Chen, and Kai Yu. 2024. Voiceflow: Efficient Text-to-Speech with Rectified Flow Matching. In Proceedings of the ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 11121-11125." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 48, + 177, + 440, + 196 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 177, + 440, + 196 + ], + "spans": [ + { + "bbox": [ + 48, + 177, + 440, + 196 + ], + "type": "text", + "content": "[72] Yi Guo, Fu Miao, Liancheng Zhang, and Yu Wang. 2019. CATH: An Effective Method for Detecting Denial-of-Service Attacks in Software Defined Networks. Science China Information Sciences 62, 3 (2019), 32106." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 48, + 196, + 440, + 216 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 196, + 440, + 216 + ], + "spans": [ + { + "bbox": [ + 48, + 196, + 440, + 216 + ], + "type": "text", + "content": "[73] Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive Representation Learning on Large Graphs. 
Advances in Neural Information Processing Systems 30 (2017)." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 48, + 216, + 441, + 246 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 216, + 441, + 246 + ], + "spans": [ + { + "bbox": [ + 48, + 216, + 441, + 246 + ], + "type": "text", + "content": "[74] Hossein Hamooni, Biplob Debnath, Jianwu Xu, Hui Zhang, Guofei Jiang, and Abdullah Mueen. 2016. LogMine: Fast Pattern Recognition for Log Analytics. In Proceedings of the ACM International Conference on Information and Knowledge Management. 1573-1582." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 48, + 246, + 440, + 276 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 246, + 440, + 276 + ], + "spans": [ + { + "bbox": [ + 48, + 246, + 440, + 276 + ], + "type": "text", + "content": "[75] Dongqi Han, Zhiliang Wang, Wenqi Chen, Kai Wang, Rui Yu, Su Wang, Han Zhang, Zhihua Wang, Minghui Jin, Jiahai Yang, et al. 2023. Anomaly Detection in the Open World: Normality Shift Detection, Explanation, and Adaptation. In Proceedings of the Network and Distributed Systems Security Symposium." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 48, + 276, + 441, + 306 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 276, + 441, + 306 + ], + "spans": [ + { + "bbox": [ + 48, + 276, + 441, + 306 + ], + "type": "text", + "content": "[76] Xueyuan Han, Thomas Pasquier, Adam Bates, James Mickens, and Margo Seltzer. 2020. Unicorn: Runtime Provenance-Based Detector for Advanced Persistent Threats. In Proceedings of the Network and Distributed Systems Security Symposium."
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 48, + 306, + 441, + 336 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 306, + 441, + 336 + ], + "spans": [ + { + "bbox": [ + 48, + 306, + 441, + 336 + ], + "type": "text", + "content": "[77] Wajih Ul Hassan, Mark Lemay, Nuraini Aguse, Adam Bates, and Thomas Moyer. 2018. Towards Scalable Cluster Auditing through Grammatical Inference over Provenance Graphs. In Proceedings of the Network and Distributed Systems Security Symposium." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 48, + 336, + 440, + 355 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 336, + 440, + 355 + ], + "spans": [ + { + "bbox": [ + 48, + 336, + 440, + 355 + ], + "type": "text", + "content": "[78] Wajih Ul Hassan, Adam Bates, and Daniel Marino. 2020. Tactical Provenance Analysis for Endpoint Detection and Response Systems. In Proceedings of the 2020 IEEE Symposium on Security and Privacy. IEEE, 1172-1189." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 48, + 356, + 441, + 385 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 356, + 441, + 385 + ], + "spans": [ + { + "bbox": [ + 48, + 356, + 441, + 385 + ], + "type": "text", + "content": "[79] Wajih Ul Hassan, Shengjian Guo, Ding Li, Zhengzhang Chen, Kangkook Jee, Zhichun Li, and Adam Bates. 2019. NoDoze: Combatting Threat Alert Fatigue with Automated Provenance Triage. In Proceedings of the Network and Distributed System Security Symposium." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 48, + 385, + 441, + 415 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 385, + 441, + 415 + ], + "spans": [ + { + "bbox": [ + 48, + 385, + 441, + 415 + ], + "type": "text", + "content": "[80] Wajih Ul Hassan, Mohammad Ali Noureddine, Pubali Datta, and Adam Bates. 2020. OmegaLog: High-Fidelity Attack Investigation via Transparent Multi-Layer Log Analysis. 
In Proceedings of the Network and Distributed System Security Symposium." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 48, + 416, + 441, + 444 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 416, + 441, + 444 + ], + "spans": [ + { + "bbox": [ + 48, + 416, + 441, + 444 + ], + "type": "text", + "content": "[81] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. 2022. Masked Autoencoders are Scalable Vision Learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 16000-16009." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 48, + 444, + 441, + 465 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 444, + 441, + 465 + ], + "spans": [ + { + "bbox": [ + 48, + 444, + 441, + 465 + ], + "type": "text", + "content": "[82] Pinjia He, Jieming Zhu, Zibin Zheng, and Michael R Lyu. 2017. Drain: An Online Log Parsing Approach with Fixed Depth Tree. In Proceedings of the 2017 IEEE International Conference on Web Services. IEEE, 33-40." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 48, + 465, + 441, + 495 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 465, + 441, + 495 + ], + "spans": [ + { + "bbox": [ + 48, + 465, + 441, + 495 + ], + "type": "text", + "content": "[83] Shilin He, Pinjia He, Zhuangbin Chen, Tianyi Yang, Yuxin Su, and Michael R. Lyu. 2020. A Survey on Automated Log Analysis for Reliability Engineering. ACM Computing Surveys 54 (2020), 1-37. https://api.semanticscholar.org/CorpusID:221703032" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 48, + 496, + 441, + 524 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 496, + 441, + 524 + ], + "spans": [ + { + "bbox": [ + 48, + 496, + 441, + 524 + ], + "type": "text", + "content": "[84] Xinlei He, Guowen Xu, Xingshuo Han, Qian Wang, Lingchen Zhao, Chao Shen, Chenhao Lin, Zhengyu Zhao, Qian Li, Le Yang, et al. 
2025. Artificial intelligence security and privacy: a survey. Science China Information Sciences 68, 8 (2025), 1-90." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 48, + 524, + 441, + 545 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 524, + 441, + 545 + ], + "spans": [ + { + "bbox": [ + 48, + 524, + 441, + 545 + ], + "type": "text", + "content": "[85] Abram Hindle, Earl T Barr, Mark Gabel, Zhendong Su, and Premkumar Devanbu. 2016. On the Naturalness of Software. Commun. ACM 59, 5 (2016), 122-131." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 48, + 545, + 440, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 545, + 440, + 555 + ], + "spans": [ + { + "bbox": [ + 48, + 545, + 440, + 555 + ], + "type": "text", + "content": "[86] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation 9, 8 (1997), 1735-1780." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 48, + 555, + 441, + 584 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 555, + 441, + 584 + ], + "spans": [ + { + "bbox": [ + 48, + 555, + 441, + 584 + ], + "type": "text", + "content": "[87] Josef Horalek, Patrik Urbanik, Vladimir Sobeslav, and Tomas Svoboda. 2022. Proposed Solution for Log Collection and Analysis in Kubernetes Environment. In Proceedings of the International Conference on Nature of Computation and Communication. Springer, 9-22." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 48, + 585, + 441, + 614 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 585, + 441, + 614 + ], + "spans": [ + { + "bbox": [ + 48, + 585, + 441, + 614 + ], + "type": "text", + "content": "[88] Md Nahid Hossain, Sadegh M Milajerdi, Junao Wang, Birhanu Eshete, Rigel Gjomemo, R Sekar, Scott Stoller, and VN Venkatakrishnan. 2017. Sleuth: Real-time Attack Scenario Reconstruction from COTS Audit Data. In Proceedings of the USENIX Security Symposium. 
487-504." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 48, + 615, + 441, + 645 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 615, + 441, + 645 + ], + "spans": [ + { + "bbox": [ + 48, + 615, + 441, + 645 + ], + "type": "text", + "content": "[89] Md Nahid Hossain, Junao Wang, Ofir Weisse, R Sekar, Daniel Genkin, Boyuan He, Scott D Stoller, Gan Fang, Frank Piessens, Evan Downing, et al. 2018. Dependence-Preserving Data Compaction for Scalable Forensic Analysis. In Proceedings of the 27th USENIX Security Symposium. 1723-1740." + } + ] + } + ], + "index": 22 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "spans": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "type": "text", + "content": "1:30" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 43, + 672, + 249, + 681 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 43, + 672, + 249, + 681 + ], + "spans": [ + { + "bbox": [ + 43, + 672, + 249, + 681 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ] + } + ], + "index": 24 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 29 + }, + { + "para_blocks": [ + { + "bbox": [ + 44, + 86, + 442, + 655 + ], + "type": "list", + "angle": 0, + "index": 26, + "blocks": [ + { + "bbox": [ + 48, + 86, + 441, + 115 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 86, + 441, + 115 + ], + "spans": [ + { + "bbox": [ + 48, + 86, + 441, + 115 + ], + "type": "text", + "content": "[90] Zhenyu Hou, Xiao Liu, Yukuo Cen, Yuxiao Dong, Hongxia Yang, Chunjie Wang, and Jie Tang. 2022. GraphMAE: Self-Supervised Masked Graph Autoencoders. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 594-604." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 48, + 116, + 442, + 154 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 116, + 442, + 154 + ], + "spans": [ + { + "bbox": [ + 48, + 116, + 442, + 154 + ], + "type": "text", + "content": "[91] Kevin Hsieh, Mike Wong, Santiago Segarra, Sathiya Kumaran Mani, Trevor Eberl, Anatoliy Panasyuk, Ravi Netravali, Ranveer Chandra, and Srikanth Kandula. 2024. NetVigil: Robust and Low-Cost Anomaly Detection for East-West Data Center Security. In Proceedings of the 21st USENIX Symposium on Networked Systems Design and Implementation. 1771-1789." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 48, + 156, + 442, + 176 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 156, + 442, + 176 + ], + "spans": [ + { + "bbox": [ + 48, + 156, + 442, + 176 + ], + "type": "text", + "content": "[92] Peiwei Hu, Ruigang Liang, and Kai Chen. 2024. DeGPT: Optimizing Decompile Output with LLM. In Proceedings of the Network and Distributed System Security Symposium." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 48, + 177, + 441, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 177, + 441, + 205 + ], + "spans": [ + { + "bbox": [ + 48, + 177, + 441, + 205 + ], + "type": "text", + "content": "[93] Bo Hui, Haolin Yuan, Neil Gong, Philippe Burlina, and Yinzhi Cao. 2024. Pleak: Prompt Leaking Attacks Against Large Language Model Applications. In Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security. 3600-3614." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 48, + 206, + 441, + 235 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 206, + 441, + 235 + ], + "spans": [ + { + "bbox": [ + 48, + 206, + 441, + 235 + ], + "type": "text", + "content": "[94] Yintong Huo, Yichen Li, Yuxin Su, Pinjia He, Zifan Xie, and Michael R Lyu. 2023. AutoLog: A Log Sequence Synthesis Framework for Anomaly Detection. In Proceedings of the 2023 38th IEEE/ACM International Conference on Automated Software Engineering. IEEE, 497-509." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 48, + 236, + 284, + 246 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 236, + 284, + 246 + ], + "spans": [ + { + "bbox": [ + 48, + 236, + 284, + 246 + ], + "type": "text", + "content": "[95] IEEE. 2000. IEEE Xplore Digital Library. https://ieeexplore.ieee.org" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 48, + 246, + 442, + 284 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 246, + 442, + 284 + ], + "spans": [ + { + "bbox": [ + 48, + 246, + 442, + 284 + ], + "type": "text", + "content": "[96] Muhammad Adil Inam, Yinfang Chen, Akul Goyal, Jason Liu, Jaron Mink, Noor Michael, Sneha Gaur, Adam Bates, and Wajih Ul Hassan. 2023. SoK: History is a Vast Early Warning System: Auditing the Provenance of System Intrusions. In Proceedings of the 2023 IEEE Symposium on Security and Privacy. 2620-2638. 
https://doi.org/10.1109/SP46215.2023.10179405" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 48, + 285, + 441, + 315 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 285, + 441, + 315 + ], + "spans": [ + { + "bbox": [ + 48, + 285, + 441, + 315 + ], + "type": "text", + "content": "[97] Muhammad Adil Inam, Akul Goyal, Jason Liu, Jaron Mink, Noor Michael, Sneha Gaur, Adam Bates, and Wajih UI Hassan. 2022. FAuST: Striking A Bargain between Forensic Auditing's Security and Throughput. In Proceedings of the 38th Annual Computer Security Applications Conference. 813-826." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 48, + 316, + 441, + 346 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 316, + 441, + 346 + ], + "spans": [ + { + "bbox": [ + 48, + 316, + 441, + 346 + ], + "type": "text", + "content": "[98] Yang Ji, Sangho Lee, Evan Downing, Weiren Wang, Mattia Fazzini, Taesoo Kim, Alessandro Orso, and Wenke Lee. 2017. Rain: Refinable Attack Investigation with On-demand Inter-Process Information Flow Tracking. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 377–390." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 48, + 346, + 441, + 365 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 346, + 441, + 365 + ], + "spans": [ + { + "bbox": [ + 48, + 346, + 441, + 365 + ], + "type": "text", + "content": "[99] Zian Jia, Yun Xiong, Yuhong Nan, Yao Zhang, Jinjing Zhao, and Mi Wen. 2024. MAGIC: Detecting Advanced Persistent Threats via Masked Graph Representation Learning. In Proceedings of the 33rd USENIX Security Symposium. 5197-5214." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 44, + 366, + 440, + 396 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 366, + 440, + 396 + ], + "spans": [ + { + "bbox": [ + 44, + 366, + 440, + 396 + ], + "type": "text", + "content": "[100] Baoxiang Jiang, T Bilot, Nour El Madhoun, Khaldoun Al Agha, Anis Zouaoui, Shahrear Iqbal, Xueyuan Han, and Thomas Pasquier. 2025. Orthrus: Achieving High Quality of Attribution in Provenance-based Intrusion Detection Systems. In Proceedings of the USENIX Security Symposium." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 44, + 396, + 440, + 415 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 396, + 440, + 415 + ], + "spans": [ + { + "bbox": [ + 44, + 396, + 440, + 415 + ], + "type": "text", + "content": "[101] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the Limits of Language Modeling. arXiv preprint arXiv:1602.02410 (2016)." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 44, + 416, + 440, + 444 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 416, + 440, + 444 + ], + "spans": [ + { + "bbox": [ + 44, + 416, + 440, + 444 + ], + "type": "text", + "content": "[102] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling Laws for Neural Language Models. arXiv preprint arXiv:2001.08361 (2020)." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 44, + 445, + 440, + 465 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 445, + 440, + 465 + ], + "spans": [ + { + "bbox": [ + 44, + 445, + 440, + 465 + ], + "type": "text", + "content": "[103] Alexander D. Kent. 2015. Comprehensive, Multi-Source Cyber-Security Events. Los Alamos National Laboratory. 
https://doi.org/10.17021/1179829" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 44, + 465, + 440, + 485 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 465, + 440, + 485 + ], + "spans": [ + { + "bbox": [ + 44, + 465, + 440, + 485 + ], + "type": "text", + "content": "[104] LG Kersta, PD Bricker, and EE David Jr. 1960. Human or Machine?—A Study of Voice Naturalness. The Journal of the Acoustical Society of America 32, 11_Supplement (1960), 1502-1502." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 44, + 486, + 440, + 505 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 486, + 440, + 505 + ], + "spans": [ + { + "bbox": [ + 44, + 486, + 440, + 505 + ], + "type": "text", + "content": "[105] Ansam Khraisat, Iqbal Gondal, Peter Vamplew, and Joarder Kamruzzaman. 2019. Survey of Intrusion Detection Systems: Techniques, Datasets and Challenges. Cybersecurity 2, 1 (2019), 1-22." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 44, + 505, + 375, + 514 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 505, + 375, + 514 + ], + "spans": [ + { + "bbox": [ + 44, + 505, + 375, + 514 + ], + "type": "text", + "content": "[106] Aaron Kili. [n.d.]. Sysdig-A Powerful System Monitoring and Troubleshooting Tool for Linux." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 44, + 515, + 440, + 545 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 515, + 440, + 545 + ], + "spans": [ + { + "bbox": [ + 44, + 515, + 440, + 545 + ], + "type": "text", + "content": "[107] Sunnie SY Kim, Q Vera Liao, Mihaela Vorvoreanu, Stephanie Ballard, and Jennifer Wortman Vaughan. 2024. \"I'm Not Sure, But...\": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. 822-835." 
+ } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 44, + 545, + 440, + 565 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 545, + 440, + 565 + ], + "spans": [ + { + "bbox": [ + 44, + 545, + 440, + 565 + ], + "type": "text", + "content": "[108] Isaiah J King and H Howie Huang. 2023. Euler: Detecting Network Lateral Movement via Scalable Temporal Link Prediction. ACM Transactions on Privacy and Security 26, 3 (2023), 1-36." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 44, + 565, + 434, + 575 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 565, + 434, + 575 + ], + "spans": [ + { + "bbox": [ + 44, + 565, + 434, + 575 + ], + "type": "text", + "content": "[109] Thomas N Kipf and Max Welling. 2016. Variational Graph Auto-Encoders. arXiv preprint arXiv:1611.07308 (2016)." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 44, + 576, + 440, + 594 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 576, + 440, + 594 + ], + "spans": [ + { + "bbox": [ + 44, + 576, + 440, + 594 + ], + "type": "text", + "content": "[110] Eric D Knapp. 2024. Industrial Network Security: Securing Critical Infrastructure Networks for Smart Grid, SCADA, and other Industrial Control Systems. Elsevier." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 44, + 595, + 440, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 595, + 440, + 624 + ], + "spans": [ + { + "bbox": [ + 44, + 595, + 440, + 624 + ], + "type": "text", + "content": "[111] Yonghwi Kwon, Fei Wang, Weihang Wang, Kyu Hyung Lee, Wen-Chuan Lee, Shiqing Ma, Xiangyu Zhang, Dongyan Xu, Somesh Jha, Gabriela Ciocarlie, et al. 2018. MCI: Modeling-based Causality Inference in Audit Logging for Attack Investigation. In Proceedings of the Network and Distributed Systems Security Symposium." 
+ } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 44, + 625, + 341, + 635 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 625, + 341, + 635 + ], + "spans": [ + { + "bbox": [ + 44, + 625, + 341, + 635 + ], + "type": "text", + "content": "[112] Grafana Labs. 2014. Grafana: The Open Observability Platform. https://grafana.com/" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 44, + 635, + 440, + 655 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 635, + 440, + 655 + ], + "spans": [ + { + "bbox": [ + 44, + 635, + 440, + 655 + ], + "type": "text", + "content": "[113] Van-Hoang Le and Hongyu Zhang. 2021. Log-Based Anomaly Detection without Log Parsing. In Proceedings of the 2021 36th IEEE/ACM International Conference on Automated Software Engineering. IEEE, 492-504." + } + ] + } + ], + "index": 25 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 60, + 244, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 60, + 244, + 69 + ], + "spans": [ + { + "bbox": [ + 44, + 60, + 244, + 69 + ], + "type": "text", + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 426, + 61, + 440, + 69 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 426, + 61, + 440, + 69 + ], + "spans": [ + { + "bbox": [ + 426, + 61, + 440, + 69 + ], + "type": "text", + "content": "1:31" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 233, + 672, + 440, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 233, + 672, + 440, + 682 + ], + "spans": [ + { + "bbox": [ + 233, + 672, + 440, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ] + } + ], + "index": 27 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 30 + }, + { + "para_blocks": [ + { + "bbox": [ + 44, + 86, + 441, + 645 + ], + "type": "list", + "angle": 0, + "index": 25, + "blocks": [ + { + "bbox": [ + 44, + 86, + 441, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 86, + 441, + 106 + ], + "spans": [ + { + "bbox": [ + 44, + 86, + 441, + 106 + ], + "type": "text", + "content": "[114] Van-Hoang Le and Hongyu Zhang. 2023. Log Parsing with Prompt-Based Few-Shot Learning. In Proceedings of the 2023 IEEE/ACM 45th International Conference on Software Engineering. IEEE, 2438-2449." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 44, + 107, + 440, + 125 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 107, + 440, + 125 + ], + "spans": [ + { + "bbox": [ + 44, + 107, + 440, + 125 + ], + "type": "text", + "content": "[115] Kyu Hyung Lee, Xiangyu Zhang, and Dongyan Xu. 2013. High Accuracy Attack Provenance via Binary-based Execution Partition. In Proceedings of the Network and Distributed System Security Symposium, Vol. 16." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 44, + 127, + 440, + 146 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 127, + 440, + 146 + ], + "spans": [ + { + "bbox": [ + 44, + 127, + 440, + 146 + ], + "type": "text", + "content": "[116] Kyu Hyung Lee, Xiangyu Zhang, and Dongyan Xu. 2013. LogGC: Garbage Collecting Audit Log. In Proceedings of the 2013 ACM SIGSAC Conference on Computer and Communications Security. 1005-1016." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 44, + 146, + 441, + 176 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 146, + 441, + 176 + ], + "spans": [ + { + "bbox": [ + 44, + 146, + 441, + 176 + ], + "type": "text", + "content": "[117] Huanruo Li, Yunfei Guo, Shumin Huo, Hongchao Hu, and Penghao Sun. 2022. 
Defensive Deception Framework Against Reconnaissance Attacks in the Cloud with Deep Reinforcement Learning. Science China Information Sciences 65, 7 (2022), 170305." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 44, + 177, + 440, + 196 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 177, + 440, + 196 + ], + "spans": [ + { + "bbox": [ + 44, + 177, + 440, + 196 + ], + "type": "text", + "content": "[118] Jiawei Li, Ru Zhang, and Jianyi Liu. 2023. ConLBS: An Attack Investigation Approach Using Contrastive Learning with Behavior Sequence. Sensors 23, 24 (2023), 9881." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 44, + 197, + 440, + 216 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 197, + 440, + 216 + ], + "spans": [ + { + "bbox": [ + 44, + 197, + 440, + 216 + ], + "type": "text", + "content": "[119] Jiawei Li, Ru Zhang, and Jianyi Liu. 2023. ProvGRP: A Context-Aware Provenance Graph Reduction and Partition Approach for Facilitating Attack Investigation. *Electronics* 13, 1 (2023), 100." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 44, + 216, + 441, + 246 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 216, + 441, + 246 + ], + "spans": [ + { + "bbox": [ + 44, + 216, + 441, + 246 + ], + "type": "text", + "content": "[120] Shaofei Li, Feng Dong, Xusheng Xiao, Haoyu Wang, Fei Shao, Jiedong Chen, Yao Guo, Xiangqun Chen, and Ding Li. 2024. NodLink: An Online System for Fine-Grained APT Attack Detection and Investigation. In Proceedings of the Network and Distributed System Security Symposium." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 44, + 246, + 440, + 266 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 246, + 440, + 266 + ], + "spans": [ + { + "bbox": [ + 44, + 246, + 440, + 266 + ], + "type": "text", + "content": "[121] Teng Li, Jianfeng Ma, and Cong Sun. 2017. 
NetPro: Detecting Attacks in MANET Routing with Provenance and Verification. Science China Information Sciences 60, 11 (2017), 118101." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 44, + 267, + 440, + 286 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 267, + 440, + 286 + ], + "spans": [ + { + "bbox": [ + 44, + 267, + 440, + 286 + ], + "type": "text", + "content": "[122] Xiaoxiang Li, Xinyu Jiang, Hai Wan, and Xinbin Zhao. 2025. TeRed: Normal Behavior-Based Efficient Provenance Graph Reduction for Large-Scale Attack Forensics. IEEE Transactions on Information Forensics and Security (2025)." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 44, + 287, + 441, + 316 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 287, + 441, + 316 + ], + "spans": [ + { + "bbox": [ + 44, + 287, + 441, + 316 + ], + "type": "text", + "content": "[123] Xiaoyun Li, Hongyu Zhang, Van-Hoang Le, and Pengfei Chen. 2024. LogShrink: Effective Log Compression by Leveraging Commonality and Variability of Log Data. In Proceedings of the 46th IEEE/ACM International Conference on Software Engineering. 1-12." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 44, + 316, + 441, + 345 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 316, + 441, + 345 + ], + "spans": [ + { + "bbox": [ + 44, + 316, + 441, + 345 + ], + "type": "text", + "content": "[124] Yujia Li, David Choi, Junyoung Chung, Nate Kushner, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. 2022. Competition-Level Code Generation with Alphacode. Science 378, 6624 (2022), 1092-1097." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 44, + 346, + 440, + 365 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 346, + 440, + 365 + ], + "spans": [ + { + "bbox": [ + 44, + 346, + 440, + 365 + ], + "type": "text", + "content": "[125] Yanjie Li, Zhen Xiang, Nathaniel D Bastian, Dawn Song, and Bo Li. 2024. IDS-Agent: An LLM Agent for Explanable Intrusion Detection in IoT Networks. In Proceedings of the NeurIPS 2024 Workshop on Open-World Agents." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 44, + 366, + 441, + 396 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 366, + 441, + 396 + ], + "spans": [ + { + "bbox": [ + 44, + 366, + 441, + 396 + ], + "type": "text", + "content": "[126] Yuanlin Li, Zhiwei Xu, Min Zhou, Hai Wan, and Xibin Zhao. 2024. Trident: Detecting SQL Injection Attacks via Abstract Syntax Tree-based Neural Network. In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering. 2225-2229." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 44, + 396, + 441, + 425 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 396, + 441, + 425 + ], + "spans": [ + { + "bbox": [ + 44, + 396, + 441, + 425 + ], + "type": "text", + "content": "[127] Zhenyuan Li, Qi Alfred Chen, Runqing Yang, Yan Chen, and Wei Ruan. 2021. Threat Detection and Investigation with System-Level Provenance Graphs: A Survey. Computer and Security 106, C (jul 2021), 16 pages. https://doi.org/10.1016/j.cose.2021.102282" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 44, + 426, + 441, + 445 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 426, + 441, + 445 + ], + "spans": [ + { + "bbox": [ + 44, + 426, + 441, + 445 + ], + "type": "text", + "content": "[128] Hung-Jen Liao, Chun-Hung Richard Lin, Ying-Chih Lin, and Kuang-Yuan Tung. 2013. Intrusion Detection System: A Comprehensive Review. 
Journal of Network and Computer Applications 36, 1 (2013), 16-24." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 44, + 446, + 441, + 465 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 446, + 441, + 465 + ], + "spans": [ + { + "bbox": [ + 44, + 446, + 441, + 465 + ], + "type": "text", + "content": "[129] Soo Yee Lim, Bogdan Stelea, Xueyuan Han, and Thomas Pasquier. 2021. Secure Namespaced Kernel Audit for Containers. In Proceedings of the ACM Symposium on Cloud Computing. 518-532." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 44, + 466, + 441, + 495 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 466, + 441, + 495 + ], + "spans": [ + { + "bbox": [ + 44, + 466, + 441, + 495 + ], + "type": "text", + "content": "[130] Qingwei Lin, Hongyu Zhang, Jian-Guang Lou, Yu Zhang, and Xuewei Chen. 2016. Log Clustering Based Problem Identification for Online Service Systems. In Proceedings of the International Conference on Software Engineering Companion. 102-111." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 44, + 496, + 389, + 505 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 496, + 389, + 505 + ], + "spans": [ + { + "bbox": [ + 44, + 496, + 389, + 505 + ], + "type": "text", + "content": "[131] Brian Lindauer. 2020. Insider Threat Test Dataset. (9 2020). https://doi.org/10.1184/R1/12841247.v1" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 44, + 506, + 441, + 534 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 506, + 441, + 534 + ], + "spans": [ + { + "bbox": [ + 44, + 506, + 441, + 534 + ], + "type": "text", + "content": "[132] Guangrui Liu, Weizhe Zhang, Xinjie Li, Kaisheng Fan, and Shui Yu. 2022. VulnERGAN: A Backdoor Attack through Vulnerability Amplification against Machine Learning-Based Network Intrusion Detection Systems. Science China Information Sciences 65, 7 (2022), 170303." 
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 44, + 535, + 441, + 565 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 535, + 441, + 565 + ], + "spans": [ + { + "bbox": [ + 44, + 535, + 441, + 565 + ], + "type": "text", + "content": "[133] Jason Liu, Muhammad Adil Inam, Akul Goyal, Andy Riddle, Kim Westfall, and Adam Bates. 2025. What We Talk About When We Talk About Logs: Understanding the Effects of Dataset Quality on Endpoint Threat Detection Research. In Proceedings of the 2025 IEEE Symposium on Security and Privacy. IEEE, 112-129." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 44, + 565, + 441, + 594 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 565, + 441, + 594 + ], + "spans": [ + { + "bbox": [ + 44, + 565, + 441, + 594 + ], + "type": "text", + "content": "[134] Jian Liu, Junjie Yan, Zhengwei Jiang, Xuren Wang, and Jun Jiang. 2022. A Graph Learning Approach with Audit Records for Advanced Attack Investigation. In Proceedings of the IEEE Global Communications Conference. IEEE, 897-902." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 44, + 595, + 441, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 595, + 441, + 624 + ], + "spans": [ + { + "bbox": [ + 44, + 595, + 441, + 624 + ], + "type": "text", + "content": "[135] Jinyang Liu, Jieming Zhu, Shilin He, Pinjia He, Zibin Zheng, and Michael R Lyu. 2019. Logzip: Extracting Hidden Structures via Iterative Clustering for Log Compression. In Proceedings of the 2019 34th IEEE/ACM International Conference on Automated Software Engineering. IEEE, 863-873." 
+ } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 44, + 625, + 440, + 645 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 625, + 440, + 645 + ], + "spans": [ + { + "bbox": [ + 44, + 625, + 440, + 645 + ], + "type": "text", + "content": "[136] Shuai Liu, Yiheng Pan, Kun Hong, Ruite Fei, Chenhao Lin, Qian Li, and Chao Shen. 2025. Backdoor Threats in Large Language Models—A Survey. Science China Information Sciences 68, 9 (2025), 1-34." + } + ] + } + ], + "index": 24 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "spans": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "type": "text", + "content": "1:32" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 42, + 672, + 249, + 681 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 672, + 249, + 681 + ], + "spans": [ + { + "bbox": [ + 42, + 672, + 249, + 681 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ] + } + ], + "index": 26 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 31 + }, + { + "para_blocks": [ + { + "bbox": [ + 44, + 86, + 441, + 655 + ], + "type": "list", + "angle": 0, + "index": 24, + "blocks": [ + { + "bbox": [ + 44, + 86, + 441, + 115 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 86, + 441, + 115 + ], + "spans": [ + { + "bbox": [ + 44, + 86, + 441, + 115 + ], + "type": "text", + "content": "[137] Yudong Liu, Xu Zhang, Shilin He, Hongyu Zhang, Liquin Li, Yu Kang, Yong Xu, Minghua Ma, Qingwei Lin, Yingnong Dang, et al. 2022. UniParser: A Unified Log Parser for Heterogeneous Log Data. In Proceedings of the ACM Web Conference. 1893-1901." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 44, + 116, + 441, + 146 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 116, + 441, + 146 + ], + "spans": [ + { + "bbox": [ + 44, + 116, + 441, + 146 + ], + "type": "text", + "content": "[138] Scott Lupton, Hironori Washizaki, Nobukazu Yoshioka, and Yoshiaki Fukazawa. 2021. Literature Review on Log Anomaly Detection Approaches Utilizing Online Parsing Methodology. In Proceedings of the 2021 28th Asia-Pacific Software Engineering Conference. 559-563. https://doi.org/10.1109/APSEC53868.2021.00068" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 44, + 147, + 441, + 176 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 147, + 441, + 176 + ], + "spans": [ + { + "bbox": [ + 44, + 147, + 441, + 176 + ], + "type": "text", + "content": "[139] Mingqi Lv, HongZhe Gao, Xuebo Qiu, Tieming Chen, Tiantian Zhu, Jinyin Chen, and Shouling Ji. 2024. TREC: APT Tactic/Technique Recognition via Few-Shot Provenance Subgraph Learning. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security. 139-152." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 44, + 177, + 441, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 177, + 441, + 205 + ], + "spans": [ + { + "bbox": [ + 44, + 177, + 441, + 205 + ], + "type": "text", + "content": "[140] Yang Lv, Shaona Qin, Zifeng Zhu, Zhuocheng Yu, Shudong Li, and Weihong Han. 2022. A Review of Provenance Graph based APT Attack Detection: Applications and Developments. In Proceedings of the 2022 7th IEEE International Conference on Data Science in Cyberspace. 498-505. https://doi.org/10.1109/DSC55868.2022.00075" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 44, + 206, + 440, + 236 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 206, + 440, + 236 + ], + "spans": [ + { + "bbox": [ + 44, + 206, + 440, + 236 + ], + "type": "text", + "content": "[141] Shiqing Ma, Juan Zhai, Yonghwi Kwon, Kyu Hyung Lee, Xiangyu Zhang, Gabriela Ciocarlie, Ashish Gehani, Vinod Yegneswaran, Dongyan Xu, and Somesh Jha. 2018. Kernel-Supported Cost-Effective Audit Logging for Causality Tracking. In Proceedings of the 2018 USENIX Annual Technical Conference. 241-254." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 44, + 236, + 441, + 266 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 236, + 441, + 266 + ], + "spans": [ + { + "bbox": [ + 44, + 236, + 441, + 266 + ], + "type": "text", + "content": "[142] Shiqing Ma, Juan Zhai, Fei Wang, Kyu Hyung Lee, Xiangyu Zhang, and Dongyan Xu. 2017. MPI: Multiple Perspective Attack Investigation with Semantic Aware Execution Partitioning. In Proceedings of the 26th USENIX Security Symposium. 1111-1128." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 44, + 267, + 440, + 286 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 267, + 440, + 286 + ], + "spans": [ + { + "bbox": [ + 44, + 267, + 440, + 286 + ], + "type": "text", + "content": "[143] Shiqing Ma, Xiangyu Zhang, and Dongyan Xu. 2016. ProTracer: Towards Practical Provenance Tracing by Alternating between Logging and Tainting. In Proceedings of the 23rd Annual Network and Distributed System Security Symposium." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 44, + 287, + 440, + 306 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 287, + 440, + 306 + ], + "spans": [ + { + "bbox": [ + 44, + 287, + 440, + 306 + ], + "type": "text", + "content": "[144] Pedro Manso, José Moura, and Carlos Serrão. 2019. SDN-Based Intrusion Detection System for Early Detection and Mitigation of DDoS Attacks. Information 10, 3 (2019), 106." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 44, + 306, + 441, + 336 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 306, + 441, + 336 + ], + "spans": [ + { + "bbox": [ + 44, + 306, + 441, + 336 + ], + "type": "text", + "content": "[145] Emaad Manzoor, Sadegh M Milajerdi, and Leman Akoglu. 2016. Fast Memory-Efficient Anomaly Detection in Streaming Heterogeneous Graphs. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1035-1044." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 44, + 336, + 440, + 366 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 336, + 440, + 366 + ], + "spans": [ + { + "bbox": [ + 44, + 336, + 440, + 366 + ], + "type": "text", + "content": "[146] Qinghua Mao, Xi Lin, Wenchao Xu, Yuxin Qi, Xiu Su, Gaolei Li, and Jianhua Li. 2025. FeCoGraph: Label-Aware Federated Graph Contrastive Learning for Few-Shot Network Intrusion Detection. 
IEEE Transactions on Information Forensics and Security (2025)." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 44, + 366, + 440, + 385 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 366, + 440, + 385 + ], + "spans": [ + { + "bbox": [ + 44, + 366, + 440, + 385 + ], + "type": "text", + "content": "[147] Yuyi Mao, Changsheng You, Jun Zhang, Kaibin Huang, and Khaled B Letaief. 2017. A Survey on Mobile Edge Computing: The Communication Perspective. IEEE Communications Surveys and Tutorials 19, 4 (2017), 2322-2358." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 44, + 386, + 441, + 406 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 386, + 441, + 406 + ], + "spans": [ + { + "bbox": [ + 44, + 386, + 441, + 406 + ], + "type": "text", + "content": "[148] Mitch Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building A Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics 19, 2 (1993), 313-330." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 44, + 406, + 440, + 426 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 406, + 440, + 426 + ], + "spans": [ + { + "bbox": [ + 44, + 406, + 440, + 426 + ], + "type": "text", + "content": "[149] Ariana Martino, Michael Iannelli, and Coleen Truong. 2023. Knowledge Injection to Counter Large Language Model (LLM) Hallucination. In European Semantic Web Conference. Springer, 182-185." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 44, + 426, + 441, + 455 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 426, + 441, + 455 + ], + "spans": [ + { + "bbox": [ + 44, + 426, + 441, + 455 + ], + "type": "text", + "content": "[150] Ines Martins, Joao S Resende, Patricia R Sousa, Simao Silva, Luis Antunes, and Joao Gama. 2022. Host-based IDS: A Review and Open Issues of An Anomaly Detection System in IoT. Future Generation Computer Systems 133 (2022), 95-113." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 44, + 455, + 441, + 486 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 455, + 441, + 486 + ], + "spans": [ + { + "bbox": [ + 44, + 455, + 441, + 486 + ], + "type": "text", + "content": "[151] Weibin Meng, Ying Liu, Yichen Zhu, Shenglin Zhang, Dan Pei, Yuqing Liu, Yihao Chen, Ruizhi Zhang, Shimin Tao, Pei Sun, et al. 2019. LogAnomaly: Unsupervised Detection of Sequential and Quantitative Anomalies in Unstructured Logs. In Proceedings of the International Joint Conference on Artificial Intelligence, Vol. 19. 4739-4745." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 44, + 486, + 441, + 505 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 486, + 441, + 505 + ], + "spans": [ + { + "bbox": [ + 44, + 486, + 441, + 505 + ], + "type": "text", + "content": "[152] Noor Michael, Jaron Mink, Jason Liu, Sneha Gaur, Wajih Ul Hassan, and Adam Bates. 2020. On the Forensic Validity of Approximated Audit Logs. In Proceedings of the 36th Annual Computer Security Applications Conference. 189-202." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 44, + 505, + 441, + 525 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 505, + 441, + 525 + ], + "spans": [ + { + "bbox": [ + 44, + 505, + 441, + 525 + ], + "type": "text", + "content": "[153] Microsoft. [n.d]. Event Tracing - Win32 apps. https://learn.microsoft.com/en-us/windows/win32/etw/event-tracing-portal. 2020." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 44, + 525, + 441, + 545 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 525, + 441, + 545 + ], + "spans": [ + { + "bbox": [ + 44, + 525, + 441, + 545 + ], + "type": "text", + "content": "[154] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. arXiv preprint arXiv:1301.3781 (2013)." 
+ } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 44, + 545, + 441, + 565 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 545, + 441, + 565 + ], + "spans": [ + { + "bbox": [ + 44, + 545, + 441, + 565 + ], + "type": "text", + "content": "[155] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed Representations of Words and Phrases and Their Compositionality. Advances in Neural Information Processing Systems 26 (2013)." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 44, + 565, + 441, + 595 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 565, + 441, + 595 + ], + "spans": [ + { + "bbox": [ + 44, + 565, + 441, + 595 + ], + "type": "text", + "content": "[156] Sadegh M Milajerdi, Birhanu Eshete, Rigel Gjomemo, and VN Venkatakrishnan. 2019. Poirot: Aligning Attack Behavior with Kernel Audit Records for Cyber Threat Hunting. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. 1795-1812." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 44, + 596, + 441, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 596, + 441, + 624 + ], + "spans": [ + { + "bbox": [ + 44, + 596, + 441, + 624 + ], + "type": "text", + "content": "[157] Sadegh M Milajerdi, Rigel Gjomemo, Birhanu Eshete, Ramachandran Sekar, and VN Venkatakrishnan. 2019. Holmes: Real-time APT Detection through Correlation of Suspicious Information Flows. In Proceedings of the 2019 IEEE Symposium on Security and Privacy. IEEE, 1137-1152." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 44, + 625, + 441, + 655 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 625, + 441, + 655 + ], + "spans": [ + { + "bbox": [ + 44, + 625, + 441, + 655 + ], + "type": "text", + "content": "[158] Seyed Mohammad Mehdi Mirnajafizadeh, Ashwin Raam Sethuram, David Mohaisen, DaeHun Nyang, and Rhongho Jang. 2024. 
Enhancing Network Attack Detection with Distributed and In-Network Data Collection System. In Proceedings of the 33rd USENIX Security Symposium. 5161-5178." + } + ] + } + ], + "index": 23 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 60, + 244, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 60, + 244, + 69 + ], + "spans": [ + { + "bbox": [ + 44, + 60, + 244, + 69 + ], + "type": "text", + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "spans": [ + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "type": "text", + "content": "1:33" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 233, + 672, + 440, + 681 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 233, + 672, + 440, + 681 + ], + "spans": [ + { + "bbox": [ + 233, + 672, + 440, + 681 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 25 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 32 + }, + { + "para_blocks": [ + { + "bbox": [ + 44, + 86, + 441, + 655 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 44, + 86, + 441, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 86, + 441, + 106 + ], + "spans": [ + { + "bbox": [ + 44, + 86, + 441, + 106 + ], + "type": "text", + "content": "[159] Yisroel Mirsky, Tomer Doitshman, Yuval Elovici, and Asaf Shabtai. 2018. Kitsune: An Ensemble of Autoencoders for Online Network Intrusion Detection. Proceedings of the Network and Distributed Systems Security Symposium (2018)." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 44, + 107, + 440, + 125 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 107, + 440, + 125 + ], + "spans": [ + { + "bbox": [ + 44, + 107, + 440, + 125 + ], + "type": "text", + "content": "[160] Kunal Mukherjee and Murat Kantarcioglu. 2025. LLM-driven Provenance Forensics for Threat Investigation and Detection. arXiv preprint arXiv:2508.21323 (2025)." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 44, + 127, + 440, + 156 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 127, + 440, + 156 + ], + "spans": [ + { + "bbox": [ + 44, + 127, + 440, + 156 + ], + "type": "text", + "content": "[161] Kunal Mukherjee, Joshua Wiedemeier, Tianhao Wang, James Wei, Feng Chen, Muhyun Kim, Murat Kantarcioglu, and Kangkook Jee. 2023. Evading Provenance-Based ML Detectors with Adversarial System Actions. In Proceedings of the 32nd USENIX Security Symposium. 1199-1216." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 44, + 156, + 441, + 185 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 156, + 441, + 185 + ], + "spans": [ + { + "bbox": [ + 44, + 156, + 441, + 185 + ], + "type": "text", + "content": "[162] Muhammad Hassan Nasir, Salman A Khan, Muhammad Mubashir Khan, and Mahawish Fatima. 2022. Swarm Intelligence Inspired Intrusion Detection Systems—A Systematic Literature Review. Computer Networks 205 (2022), 108708." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 44, + 187, + 441, + 216 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 187, + 441, + 216 + ], + "spans": [ + { + "bbox": [ + 44, + 187, + 441, + 216 + ], + "type": "text", + "content": "[163] Mostafa Nassar, Nirmeen A El-Bahnasawy, HossamEl-Din H Ahmed, Adel A Saleeb, and Fathi E Abd El-Samie. 2019. Network Intrusion Detection, Literature Review and Some Techniques Comparison. 
In Proceedings of the 2019 15th International Computer Engineering Conference. IEEE, 62-71." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 44, + 217, + 440, + 236 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 217, + 440, + 236 + ], + "spans": [ + { + "bbox": [ + 44, + 217, + 440, + 236 + ], + "type": "text", + "content": "[164] Alexander Tobias Neumann, Yue Yin, Sulayman Sowe, Stefan Decker, and Matthias Jarke. 2024. An LLM-Driven Chatbot in Higher Education for Databases and Information Systems. IEEE Transactions on Education (2024)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 44, + 236, + 440, + 256 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 236, + 440, + 256 + ], + "spans": [ + { + "bbox": [ + 44, + 236, + 440, + 256 + ], + "type": "text", + "content": "[165] Zhibin Ni, Pan Fan, Shengzhuo Dai, Bo Zhang, Hai Wan, and Xibin Zhao. 2025. FG-CIBGC: A Unified Framework for Fine-Grained and Class-Incremental Behavior Graph Classification. In Proceedings of the Web Conference." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 44, + 257, + 440, + 285 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 257, + 440, + 285 + ], + "spans": [ + { + "bbox": [ + 44, + 257, + 440, + 285 + ], + "type": "text", + "content": "[166] Weina Niu, Zhenqi Yu, Zimu Li, Beibei Li, Runzi Zhang, and Xiaosong Zhang. 2022. LogTracer: Efficient Anomaly Tracing Combining System Log Detection and Provenance Graph. In Proceedings of the IEEE Global Communications Conference. IEEE, 3356-3361." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 44, + 286, + 440, + 306 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 286, + 440, + 306 + ], + "spans": [ + { + "bbox": [ + 44, + 286, + 440, + 306 + ], + "type": "text", + "content": "[167] Christine Nussbaum, Sascha Frühholz, and Stefan R Schweinberger. 2025. Understanding Voice Naturalness. 
Trends in Cognitive Sciences (2025)." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 44, + 306, + 419, + 316 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 306, + 419, + 316 + ], + "spans": [ + { + "bbox": [ + 44, + 306, + 419, + 316 + ], + "type": "text", + "content": "[168] Connected Papers. 2020. Connected Papers: A Visual Tool for Researchers. https://www.connectedpapers.com" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 44, + 316, + 441, + 346 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 316, + 441, + 346 + ], + "spans": [ + { + "bbox": [ + 44, + 316, + 441, + 346 + ], + "type": "text", + "content": "[169] Nohil Park, Heeseung Kim, Che Hyun Lee, Jooyoung Choi, Jiheum Yeom, and Sungroh Yoon. 2025. NanoVoice: Efficient Speaker-Adaptive Text-to-Speech for Multiple Speakers. In Proceedings of the ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 1-5." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 44, + 347, + 441, + 365 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 347, + 441, + 365 + ], + "spans": [ + { + "bbox": [ + 44, + 347, + 441, + 365 + ], + "type": "text", + "content": "[170] Thomas Pasquier, Xueyuan Han, Mark Goldstein, Thomas Moyer, David Eyers, Margo Seltzer, and Jean Bacon. 2017. Practical Whole-System Provenance Capture. In Proceedings of the 2017 Symposium on Cloud Computing. 405-418." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 44, + 366, + 333, + 376 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 366, + 333, + 376 + ], + "spans": [ + { + "bbox": [ + 44, + 366, + 333, + 376 + ], + "type": "text", + "content": "[171] Igor Pavlov. 2001. LZMA SDK (Software Development Kit). 
https://www.7-zip.org/" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 44, + 376, + 441, + 405 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 376, + 441, + 405 + ], + "spans": [ + { + "bbox": [ + 44, + 376, + 441, + 405 + ], + "type": "text", + "content": "[172] Cheng Peng, Xi Yang, Aokun Chen, Kaleb E Smith, Nima PourNejatian, Anthony B Costa, Cheryl Martin, Mona G Flores, Ying Zhang, Tanja Magoc, et al. 2023. A Study of Generative Large Language Model For Medical Research and Healthcare. NPJ Digital Medicine 6, 1 (2023), 210." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 44, + 406, + 441, + 435 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 406, + 441, + 435 + ], + "spans": [ + { + "bbox": [ + 44, + 406, + 441, + 435 + ], + "type": "text", + "content": "[173] Yihao Peng, Tongxin Zhang, Jieshao Lai, Yuxuan Zhang, Yiming Wu, Hai Wan, and Xibin Zhao. 2025. AutoLabel: Automated Fine-Grained Log Labeling for Cyber Attack Dataset Generation. In 34th USENIX Security Symposium (USENIX Security 25). 547-566." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 44, + 436, + 387, + 445 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 436, + 387, + 445 + ], + "spans": [ + { + "bbox": [ + 44, + 436, + 387, + 445 + ], + "type": "text", + "content": "[174] Prometheus. 2014. Prometheus - Monitoring System & Time Series Database. https://prometheus.io/" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 44, + 446, + 441, + 475 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 446, + 441, + 475 + ], + "spans": [ + { + "bbox": [ + 44, + 446, + 441, + 475 + ], + "type": "text", + "content": "[175] Jiaxing Qi, Zhongzhi Luan, Shaohan Huang, Carol Fung, Hailong Yang, and Depei Qian. 2023. SpikeLog: Log-based Anomaly Detection via Potential-Assisted Spiking Neuron Network. 
IEEE Transactions on Knowledge and Data Engineering 36, 12 (2023), 9322-9335." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 44, + 475, + 440, + 505 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 475, + 440, + 505 + ], + "spans": [ + { + "bbox": [ + 44, + 475, + 440, + 505 + ], + "type": "text", + "content": "[176] Wei Qiao, Yebo Feng, Teng Li, Zhuo Ma, Yulong Shen, JianFeng Ma, and Yang Liu. 2025. Slot: Provenance-Driven APT Detection through Graph Reinforcement Learning. In Proceedings of the 2025 on ACM SIGSAC Conference on Computer and Communications Security." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 44, + 506, + 327, + 515 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 506, + 327, + 515 + ], + "spans": [ + { + "bbox": [ + 44, + 506, + 327, + 515 + ], + "type": "text", + "content": "[177] QuickLZ. 2006. QuickLZ: Fastest Compression Library. http://www.quicklz.com/" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 44, + 516, + 367, + 525 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 516, + 367, + 525 + ], + "spans": [ + { + "bbox": [ + 44, + 516, + 367, + 525 + ], + "type": "text", + "content": "[178] Alec Radford. 2018. Improving Language Understanding by Generative Pre-Training. (2018)." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 44, + 526, + 441, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 526, + 441, + 555 + ], + "spans": [ + { + "bbox": [ + 44, + 526, + 441, + 555 + ], + "type": "text", + "content": "[179] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the Limits of Transfer Learning with A Unified Text-to-Text Transformer. Journal of Machine Learning Research 21, 140 (2020), 1-67." 
+ } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 44, + 555, + 441, + 583 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 555, + 441, + 583 + ], + "spans": [ + { + "bbox": [ + 44, + 555, + 441, + 583 + ], + "type": "text", + "content": "[180] Baishakhi Ray, Vincent Hellendoorn, Saheel Godhane, Zhaopeng Tu, Alberto Bacchelli, and Premkumar Devanbu. 2016. On the \"Naturalness\" of Buggy Code. In Proceedings of the 38th International Conference on Software Engineering. 428-439." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 44, + 584, + 441, + 604 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 584, + 441, + 604 + ], + "spans": [ + { + "bbox": [ + 44, + 584, + 441, + 604 + ], + "type": "text", + "content": "[181] Bace Rebecca and Peter Mell. 2001. Intrusion Detection Systems. National Institute of Standards and Technology, Special Publication (2001)." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 44, + 605, + 441, + 634 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 605, + 441, + 634 + ], + "spans": [ + { + "bbox": [ + 44, + 605, + 441, + 634 + ], + "type": "text", + "content": "[182] Mati Ur Rehman, Hadi Ahmadi, and Wajih Ul Hassan. 2024. FLASH: A Comprehensive Approach to Intrusion Detection via Provenance Graph Representation Learning. In Proceedings of the 2024 IEEE Symposium on Security and Privacy. IEEE Computer Society, 139-139." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 44, + 635, + 440, + 655 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 635, + 440, + 655 + ], + "spans": [ + { + "bbox": [ + 44, + 635, + 440, + 655 + ], + "type": "text", + "content": "[183] Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019. FastSpeech: Fast, Robust and Controllable Text to Speech. Advances in Neural Information Processing Systems 32 (2019)." 
+ } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "spans": [ + { + "bbox": [ + 45, + 61, + 58, + 68 + ], + "type": "text", + "content": "1:34" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 42, + 672, + 249, + 681 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 672, + 249, + 681 + ], + "spans": [ + { + "bbox": [ + 42, + 672, + 249, + 681 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 33 + }, + { + "para_blocks": [ + { + "bbox": [ + 44, + 86, + 441, + 655 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 44, + 86, + 441, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 86, + 441, + 106 + ], + "spans": [ + { + "bbox": [ + 44, + 86, + 441, + 106 + ], + "type": "text", + "content": "[184] Andy Riddle, Kim Westfall, and Adam Bates. 2023. Atlasv2: Atlas attack engagements, version 2. arXiv preprint arXiv:2401.01341 (2023)." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 44, + 106, + 440, + 125 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 106, + 440, + 125 + ], + "spans": [ + { + "bbox": [ + 44, + 106, + 440, + 125 + ], + "type": "text", + "content": "[185] Malajah Roberts, Jonathan Anderson, William Delgado, Richard Johnson, and Lawrence Spencer. 
2024. Extending Contextual Length and World Knowledge Generalization in Large Language Models. (2024)." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 44, + 127, + 440, + 146 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 127, + 440, + 146 + ], + "spans": [ + { + "bbox": [ + 44, + 127, + 440, + 146 + ], + "type": "text", + "content": "[186] Kirk Rodrigues, Yu Luo, and Ding Yuan. 2021. CLP: Efficient and Scalable Search on Compressed Text Logs. In Proceedings of the 15th USENIX Symposium on Operating Systems Design and Implementation. 183-198." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 44, + 147, + 441, + 166 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 147, + 441, + 166 + ], + "spans": [ + { + "bbox": [ + 44, + 147, + 441, + 166 + ], + "type": "text", + "content": "[187] Ronald Rosenfeld. 2000. Two Decades of Statistical Language Modeling: Where Do We Go from Here? Proceedings of the IEEE 88, 8 (2000), 1270-1278." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 44, + 167, + 441, + 186 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 167, + 441, + 186 + ], + "spans": [ + { + "bbox": [ + 44, + 167, + 441, + 186 + ], + "type": "text", + "content": "[188] Tejaswini S and Azra Nasreen. 2021. Survey on Online Log Parsing. Regular issue (2021). https://api.semanticscholar.org/CorpusID:236861650" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 44, + 187, + 365, + 197 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 187, + 365, + 197 + ], + "spans": [ + { + "bbox": [ + 44, + 187, + 365, + 197 + ], + "type": "text", + "content": "[189] Vijay Samuel. 2018. Monitoring Anything and Everything with Beats at eBay. (2018)." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 44, + 197, + 339, + 206 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 197, + 339, + 206 + ], + "spans": [ + { + "bbox": [ + 44, + 197, + 339, + 206 + ], + "type": "text", + "content": "[190] Michael Schindler. 1999. SZIP Compression. http://www.compressconsult.com/szip/" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 44, + 207, + 441, + 226 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 207, + 441, + 226 + ], + "spans": [ + { + "bbox": [ + 44, + 207, + 441, + 226 + ], + "type": "text", + "content": "[191] Frank Schwellinger. 2008. Ocamyd: A File (De-)Compressor Based on the DMC Algorithm. https://www.geocities.ws/ocamyd/" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 44, + 226, + 441, + 255 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 226, + 441, + 255 + ], + "spans": [ + { + "bbox": [ + 44, + 226, + 441, + 255 + ], + "type": "text", + "content": "[192] Issam Sedki, Abdelwahab Hamou-Lhadj, Otmane Ait-Mohamed, and Mohammed A Shehab. 2022. An Effective Approach for Parsing Large Log Files. In Proceedings of the 2022 IEEE International Conference on Software Maintenance and Evolution. IEEE, 1-12." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 44, + 257, + 441, + 276 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 257, + 441, + 276 + ], + "spans": [ + { + "bbox": [ + 44, + 257, + 441, + 276 + ], + "type": "text", + "content": "[193] R Sekar, Hanke Kimm, and Rohit Aich. 2024. eAudit: A Fast, Scalable and Deployable Audit Data Collection System. In Proceedings of the IEEE Symposium on Security and Privacy. IEEE, 3571-3589." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 44, + 277, + 336, + 286 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 277, + 336, + 286 + ], + "spans": [ + { + "bbox": [ + 44, + 277, + 336, + 286 + ], + "type": "text", + "content": "[194] Julian Seward. 1996. bzip2: A High-Quality Data Compressor. http://www.bzip.org/" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 44, + 286, + 441, + 305 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 286, + 441, + 305 + ], + "spans": [ + { + "bbox": [ + 44, + 286, + 441, + 305 + ], + "type": "text", + "content": "[195] Claude E Shannon. 1948. A Mathematical Theory of Communication. The Bell System Technical Journal 27, 3 (1948), 379-423." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 44, + 306, + 441, + 325 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 306, + 441, + 325 + ], + "spans": [ + { + "bbox": [ + 44, + 306, + 441, + 325 + ], + "type": "text", + "content": "[196] Claude E Shannon. 1951. The Redundancy of English. In Cybernetics; Transactions of the 7th Conference, New York: Josiah Macy, Jr. Foundation. 248-272." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 44, + 326, + 441, + 354 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 326, + 441, + 354 + ], + "spans": [ + { + "bbox": [ + 44, + 326, + 441, + 354 + ], + "type": "text", + "content": "[197] Madhukar Shrestha, Yonghyun Kim, Jeehyun Oh, Junghwan Rhee, Yung Ryn Choe, Fei Zuo, Myungah Park, and Gang Qian. 2023. ProvSec: Open Cybersecurity System Provenance Analysis Benchmark Dataset with Labels. International Journal of Networked and Distributed Computing 11, 2 (2023), 112-123." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 44, + 356, + 441, + 375 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 356, + 441, + 375 + ], + "spans": [ + { + "bbox": [ + 44, + 356, + 441, + 375 + ], + "type": "text", + "content": "[198] Rakesh Shrestha, Atefeh Omidkar, Sajjad Ahmadi Roudi, Robert Abbas, and Shiho Kim. 2021. Machine-Learning-Enabled Intrusion Detection System for Cellular Connected UAV Networks. Electronics 10, 13 (2021), 1549." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 44, + 376, + 441, + 406 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 376, + 441, + 406 + ], + "spans": [ + { + "bbox": [ + 44, + 376, + 441, + 406 + ], + "type": "text", + "content": "[199] Zhuoxue Song, Ziming Zhao, Fan Zhang, Gang Xiong, Guang Cheng, Xinjie Zhao, Shize Guo, and Binbin Chen. 2022. I²RNN: An Incremental and Interpretable Recurrent Neural Network for Encrypted Traffic Classification. IEEE Transactions on Dependable and Secure Computing (2022)." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 44, + 406, + 441, + 444 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 406, + 441, + 444 + ], + "spans": [ + { + "bbox": [ + 44, + 406, + 441, + 444 + ], + "type": "text", + "content": "[200] Manolis Stamatogiannakis, Paul Groth, and Herbert Bos. 2015. Looking Inside the Black-Box: Capturing Data Provenance Using Dynamic Instrumentation. In Provenance and Annotation of Data and Processes: 5th International Provenance and Annotation Workshop, IPAW 2014, Cologne, Germany, June 9-13, 2014. Revised Selected Papers 5. Springer, 155-167." 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 44, + 445, + 441, + 473 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 445, + 441, + 473 + ], + "spans": [ + { + "bbox": [ + 44, + 445, + 441, + 473 + ], + "type": "text", + "content": "[201] Branka Stojanovic, Katharina Hofer-Schmitz, and Ulrike Kleb. 2020. APT Datasets and Attack Modeling for Automated Detection Methods: A Review. Computers & Security 92 (2020), 101734. https://api.semanticscholar.org/CorpusID:213320542" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 44, + 475, + 441, + 505 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 475, + 441, + 505 + ], + "spans": [ + { + "bbox": [ + 44, + 475, + 441, + 505 + ], + "type": "text", + "content": "[202] Hongbin Sun, Su Wang, Zhiliang Wang, Zheyu Jiang, Dongqi Han, and Jiahai Yang. 2024. AudiTrim: A Real-time, General, Efficient, and Low-overhead Data Compaction System for Intrusion Detection. In Proceedings of the 27th International Symposium on Research in Attacks, Intrusions and Defenses. 263-277." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 44, + 506, + 441, + 534 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 506, + 441, + 534 + ], + "spans": [ + { + "bbox": [ + 44, + 506, + 441, + 534 + ], + "type": "text", + "content": "[203] Alexey Svyatkovskiy, Shao Kun Deng, Shengyu Fu, and Neel Sundaresan. 2020. IntellicodeCompose: Code Generation Using Transformer. In Proceedings of the 28th ACM joint meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 1433-1443." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 44, + 535, + 441, + 565 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 535, + 441, + 565 + ], + "spans": [ + { + "bbox": [ + 44, + 535, + 441, + 565 + ], + "type": "text", + "content": "[204] Dan Tang, Yudong Yan, Chenjun Gao, Wei Liang, and Wenqiang Jin. 
2023. LtRFT: Mitigate the Low-Rate Data Plane DDoS Attack with Learning-to-Rank Enabled Flow Tables. IEEE Transactions on Information Forensics and Security 18 (2023), 3143-3157." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 44, + 565, + 441, + 595 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 565, + 441, + 595 + ], + "spans": [ + { + "bbox": [ + 44, + 565, + 441, + 595 + ], + "type": "text", + "content": "[205] Yutao Tang, Ding Li, Zhichun Li, Mu Zhang, Kangkook Jee, Xusheng Xiao, Zhenyu Wu, Junghwan Rhee, Fengyuan Xu, and Qun Li. 2018. NodeMerge: Template Based Efficient Data Reduction for Big-Data Causality Analysis. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. 1324–1337." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 44, + 596, + 440, + 615 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 596, + 440, + 615 + ], + "spans": [ + { + "bbox": [ + 44, + 596, + 440, + 615 + ], + "type": "text", + "content": "[206] Joerg Thalheim, Pramod Bhatotia, and Christof Fetzer. 2016. Inspector: Data Provenance Using Intel Processor Trace (PT). In Proceedings of the 2016 IEEE 36th International Conference on Distributed Computing Systems. IEEE, 25-34." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 44, + 616, + 441, + 644 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 616, + 441, + 644 + ], + "spans": [ + { + "bbox": [ + 44, + 616, + 441, + 644 + ], + "type": "text", + "content": "[207] Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. Lamda: Language Models for Dialog Applications. arXiv preprint arXiv:2201.08239 (2022)." 
+ } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 44, + 645, + 359, + 655 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 645, + 359, + 655 + ], + "spans": [ + { + "bbox": [ + 44, + 645, + 359, + 655 + ], + "type": "text", + "content": "[208] ThoughtWorks. 2004. Selenium RC. http://www.seleniumhq.org/projects/remote-control/" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 60, + 244, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 60, + 244, + 69 + ], + "spans": [ + { + "bbox": [ + 44, + 60, + 244, + 69 + ], + "type": "text", + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "spans": [ + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "type": "text", + "content": "1:35" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 233, + 672, + 440, + 681 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 233, + 672, + 440, + 681 + ], + "spans": [ + { + "bbox": [ + 233, + 672, + 440, + 681 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 34 + }, + { + "para_blocks": [ + { + "bbox": [ + 44, + 86, + 441, + 654 + ], + "type": "list", + "angle": 0, + "index": 24, + "blocks": [ + { + "bbox": [ + 44, + 86, + 441, + 115 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 86, + 441, + 115 + ], + "spans": [ + { + "bbox": [ + 44, + 86, + 441, + 115 + ], + "type": "text", + "content": "[209] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. 
Llama: Open and Efficient Foundation Language Models. arXiv preprint arXiv:2302.13971 (2023)." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 44, + 117, + 261, + 126 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 117, + 261, + 126 + ], + "spans": [ + { + "bbox": [ + 44, + 117, + 261, + 126 + ], + "type": "text", + "content": "[210] Aqua Tracee. 2022. Runtime eBPF Threat Detection Engine." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 44, + 127, + 440, + 146 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 127, + 440, + 146 + ], + "spans": [ + { + "bbox": [ + 44, + 127, + 440, + 146 + ], + "type": "text", + "content": "[211] Devharsh Trivedi, Aymen Boudguiga, Nesrine Kaaniche, and Nikos Triandopoulos. 2023. SigML++: Supervised Log Anomaly with Probabilistic Polynomial Approximation. Cryptography 7, 4 (2023), 52." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 44, + 147, + 440, + 167 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 147, + 440, + 167 + ], + "spans": [ + { + "bbox": [ + 44, + 147, + 440, + 167 + ], + "type": "text", + "content": "[212] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. Advances in Neural Information Processing Systems 30 (2017)." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 44, + 167, + 440, + 186 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 167, + 440, + 186 + ], + "spans": [ + { + "bbox": [ + 44, + 167, + 440, + 186 + ], + "type": "text", + "content": "[213] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, Yoshua Bengio, et al. 2017. Graph Attention Networks. stat 1050, 20 (2017), 10-48550." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 44, + 187, + 440, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 187, + 440, + 205 + ], + "spans": [ + { + "bbox": [ + 44, + 187, + 440, + 205 + ], + "type": "text", + "content": "[214] Arthur Vervaet, Raja Chiky, and Mar Callau-Zori. 2021. USTEP: Unfixed Search Tree for Efficient Log Parsing. In Proceedings of the 2021 IEEE International Conference on Data Mining. IEEE, 659-668." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 44, + 207, + 441, + 226 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 207, + 441, + 226 + ], + "spans": [ + { + "bbox": [ + 44, + 207, + 441, + 226 + ], + "type": "text", + "content": "[215] David Wagner and Paolo Soto. 2002. Mimicry Attacks on Host-Based Intrusion Detection Systems. In Proceedings of the 9th ACM Conference on Computer and Communications Security. 255-264." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 44, + 226, + 441, + 257 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 226, + 441, + 257 + ], + "spans": [ + { + "bbox": [ + 44, + 226, + 441, + 257 + ], + "type": "text", + "content": "[216] Qi Wang, Wajih Ul Hassan, Ding Li, Kangkook Jee, Xiao Yu, Kexuan Zou, Junghwan Rhee, Zhengzhang Chen, Wei Cheng, Carl A Gunter, et al. 2020. You Are What You Do: Hunting Stealthy Malware via Data Provenance Analysis. In Proceedings of the Network and Distributed System Security Symposium." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 44, + 257, + 441, + 286 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 257, + 441, + 286 + ], + "spans": [ + { + "bbox": [ + 44, + 257, + 441, + 286 + ], + "type": "text", + "content": "[217] Rui Wang, Devin Gibson, Kirk Rodrigues, Yu Luo, Yun Zhang, Kaibo Wang, Yupeng Fu, Ting Chen, and Ding Yuan. 2024. 
" + }, + { + "bbox": [ + 44, + 257, + 441, + 286 + ], + "type": "inline_equation", + "content": "\\mu" + }, + { + "bbox": [ + 44, + 257, + 441, + 286 + ], + "type": "text", + "content": "Slope: High Compression and Fast Search on Semi-Structured Logs. In Proceedings of the 18th USENIX Symposium on Operating Systems Design and Implementation. 529-544." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 44, + 287, + 441, + 315 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 287, + 441, + 315 + ], + "spans": [ + { + "bbox": [ + 44, + 287, + 441, + 315 + ], + "type": "text", + "content": "[218] Ruihua Wang, Yihao Peng, Yilun Sun, Xuancheng Zhang, Hai Wan, and Xibin Zhao. 2023. TeSec: Accurate Server-Side Attack Investigation for Web Applications. In Proceedings of the 2023 IEEE Symposium on Security and Privacy. IEEE, 2799-2816." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 44, + 316, + 441, + 346 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 316, + 441, + 346 + ], + "spans": [ + { + "bbox": [ + 44, + 316, + 441, + 346 + ], + "type": "text", + "content": "[219] Su Wang, Zhiliang Wang, Tao Zhou, Hongbin Sun, Xia Yin, Dongqi Han, Han Zhang, Xingang Shi, and Jiahai Yang. 2022. threaTrace: Detecting and Tracing Host-Based Threats in Node Level Through Provenance Graph Learning. IEEE Transactions on Information Forensics and Security 17 (2022), 3972-3987." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 44, + 347, + 441, + 375 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 347, + 441, + 375 + ], + "spans": [ + { + "bbox": [ + 44, + 347, + 441, + 375 + ], + "type": "text", + "content": "[220] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022. Emergent Abilities of Large Language Models. arXiv preprint arXiv:2206.07682 (2022)." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 44, + 376, + 441, + 405 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 376, + 441, + 405 + ], + "spans": [ + { + "bbox": [ + 44, + 376, + 441, + 405 + ], + "type": "text", + "content": "[221] Wei Wei, Sijin Chen, Cen Chen, Heshi Wang, Jing Liu, Zhongyao Cheng, and Xiaofeng Zou. 2024. HEN: A Novel Hybrid Explainable Neural Network Based Framework for Robust Network Intrusion Detection. Science China Information Sciences 67, 7 (2024), 170304." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 44, + 406, + 441, + 435 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 406, + 441, + 435 + ], + "spans": [ + { + "bbox": [ + 44, + 406, + 441, + 435 + ], + "type": "text", + "content": "[222] Cong Wu, Jianfei Sun, Jing Chen, Mamoun Alazab, Yang Liu, and Yang Xiang. 2025. TCG-IDS: Robust Network Intrusion Detection via Temporal Contrastive Graph Learning. IEEE Transactions on Information Forensics and Security (2025)." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 44, + 436, + 441, + 455 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 436, + 441, + 455 + ], + "spans": [ + { + "bbox": [ + 44, + 436, + 441, + 455 + ], + "type": "text", + "content": "[223] Weiheng Wu, Wei Qiao, Teng Li, Yebo Feng, Zhuo Ma, Jianfeng Ma, and Yang Liu. 2025. ProvX: Generating Counterfactual-Driven Attack Explanations for Provenance-Based Detection. arXiv preprint arXiv:2508.06073 (2025)." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 44, + 456, + 441, + 485 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 456, + 441, + 485 + ], + "spans": [ + { + "bbox": [ + 44, + 456, + 441, + 485 + ], + "type": "text", + "content": "[224] Yafeng Wu, Yulai Xie, Xuelong Liao, Pan Zhou, Dan Feng, Lin Wu, Xuan Li, Avani Wildani, and Darrell Long. 2022. 
Paradise: Real-Time, Generalized, and Distributed Provenance-Based Intrusion Detection. IEEE Transactions on Dependable and Secure Computing 20, 2 (2022), 1624-1640." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 44, + 486, + 441, + 514 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 486, + 441, + 514 + ], + "spans": [ + { + "bbox": [ + 44, + 486, + 441, + 514 + ], + "type": "text", + "content": "[225] Yixuan Wu, Long Zhang, Lin Yang, Feng Yang, Linru Ma, Zhoumin Lu, and Wen Jiang. 2025. Intrusion Detection for Internet of Things: An Anchor Graph Clustering Approach. IEEE Transactions on Information Forensics and Security (2025)." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 44, + 515, + 441, + 545 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 515, + 441, + 545 + ], + "spans": [ + { + "bbox": [ + 44, + 515, + 441, + 545 + ], + "type": "text", + "content": "[226] Tong Xiao, Zhe Quan, Zhi-Jie Wang, Kaiqi Zhao, Xiangke Liao, Huang Huang, Yunfei Du, and Kenli Li. 2023. LPV: A Log Parsing Framework Based on Vectorization. IEEE Transactions on Network and Service Management 20, 3 (2023), 2711-2725." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 44, + 545, + 441, + 575 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 545, + 441, + 575 + ], + "spans": [ + { + "bbox": [ + 44, + 545, + 441, + 575 + ], + "type": "text", + "content": "[227] Yulai Xie, Dan Feng, Yuchong Hu, Yan Li, Staunton Sample, and Darrell Long. 2018. Pagoda: A Hybrid Approach to Enable Efficient Real-Time Provenance Based Intrusion Detection in Big Data Environments. IEEE Transactions on Dependable and Secure Computing 17, 6 (2018), 1283-1296." 
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 44, + 576, + 441, + 595 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 576, + 441, + 595 + ], + "spans": [ + { + "bbox": [ + 44, + 576, + 441, + 595 + ], + "type": "text", + "content": "[228] Yulai Xie, Kiran-Kumar Muniswamy-Reddy, Darrell DE Long, Ahmed Amer, Dan Feng, and Zhipeng Tan. 2011. Compressing Provenance Graphs. In Proceedings of the 3rd USENIX Workshop on the Theory and Practice of Provenance." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 44, + 596, + 441, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 596, + 441, + 624 + ], + "spans": [ + { + "bbox": [ + 44, + 596, + 441, + 624 + ], + "type": "text", + "content": "[229] Junjielong Xu, Qiuai Fu, Zhourui xing Zhu, Yutong Cheng, Zhijing Li, Yuchi Ma, and Pinjia He. 2023. Hue: A User-Adaptive Parser for Hybrid Logs. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 413-424." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 44, + 625, + 441, + 654 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 625, + 441, + 654 + ], + "spans": [ + { + "bbox": [ + 44, + 625, + 441, + 654 + ], + "type": "text", + "content": "[230] Jiacen Xu, Xiaokui Shu, and Zhou Li. 2024. Understanding and Bridging the Gap between Unsupervised Network Representation Learning and Security Analytics. In Proceedings of the 2024 IEEE Symposium on Security and Privacy. IEEE, 3590-3608." 
+ } + ] + } + ], + "index": 23 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "spans": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "type": "text", + "content": "1:36" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 42, + 672, + 249, + 681 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 672, + 249, + 681 + ], + "spans": [ + { + "bbox": [ + 42, + 672, + 249, + 681 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 25 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 35 + }, + { + "para_blocks": [ + { + "bbox": [ + 44, + 86, + 441, + 655 + ], + "type": "list", + "angle": 0, + "index": 23, + "blocks": [ + { + "bbox": [ + 44, + 86, + 441, + 115 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 86, + 441, + 115 + ], + "spans": [ + { + "bbox": [ + 44, + 86, + 441, + 115 + ], + "type": "text", + "content": "[231] Wei Xu, Ling Huang, Armando Fox, David Patterson, and Michael I Jordan. 2009. Detecting Large-scale System Problems by Mining Console Logs. In Proceedings of the ACM SIGOPS 22nd Symposium on Operating Systems Principles. 117-132." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 44, + 115, + 441, + 146 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 115, + 441, + 146 + ], + "spans": [ + { + "bbox": [ + 44, + 115, + 441, + 146 + ], + "type": "text", + "content": "[232] Zhiqiang Xu, Pengcheng Fang, Changlin Liu, Xusheng Xiao, Yu Wen, and Dan Meng. 2022. DepComm: Graph Summarization on System Audit Logs for Attack Investigation. In Proceedings of the 2022 IEEE Symposium on Security and Privacy. IEEE, 540-557." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 44, + 146, + 441, + 176 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 146, + 441, + 176 + ], + "spans": [ + { + "bbox": [ + 44, + 146, + 441, + 176 + ], + "type": "text", + "content": "[233] Zhiwei Xu, Shaohua Qiang, Dinghong Song, Min Zhou, Hai Wan, Xibin Zhao, Ping Luo, and Hongyu Zhang. 2024. DSFM: Enhancing Functional Code Clone Detection with Deep Subtree Interactions. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering. 1-12." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 44, + 177, + 440, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 177, + 440, + 205 + ], + "spans": [ + { + "bbox": [ + 44, + 177, + 440, + 205 + ], + "type": "text", + "content": "[234] Zhang Xu, Zhenyu Wu, Zhichun Li, Kangkook Jee, Junghwan Rhee, Xusheng Xiao, Fengyuan Xu, Haining Wang, and Guofei Jiang. 2016. High Fidelity Data Reduction for Big Data Security Dexterity Analyses. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. 504-516." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 44, + 206, + 440, + 236 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 206, + 440, + 236 + ], + "spans": [ + { + "bbox": [ + 44, + 206, + 440, + 236 + ], + "type": "text", + "content": "[235] Zhiwei Xu, Min Zhou, Xibin Zhao, Yang Chen, Xi Cheng, and Hongyu Zhang. 
2023. xASTNN: Improved Code Representations for Industrial Practice. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 1727-1738." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 44, + 236, + 440, + 266 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 236, + 440, + 266 + ], + "spans": [ + { + "bbox": [ + 44, + 236, + 440, + 266 + ], + "type": "text", + "content": "[236] Yu Xue, Bernard-marie Onzo, and Ferrante Neri. 2021. Intrusion Detection System Based on an Updated ANN Model. In Advances in Swarm Intelligence: 12th International Conference, ICSI 2021, Qingdao, China, July 17-21, 2021, Proceedings, Part II 12. Springer, 472-479." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 44, + 267, + 440, + 286 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 267, + 440, + 286 + ], + "spans": [ + { + "bbox": [ + 44, + 267, + 440, + 286 + ], + "type": "text", + "content": "[237] Fan Yang, Jiacen Xu, Chunlin Xiong, Zhou Li, and Kehuan Zhang. 2023. ProGrapher: An Anomaly Detection System based on Provenance Graph Embedding. In Proceedings of the 32nd USENIX Security Symposium. 4355-4372." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 44, + 287, + 441, + 316 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 287, + 441, + 316 + ], + "spans": [ + { + "bbox": [ + 44, + 287, + 441, + 316 + ], + "type": "text", + "content": "[238] Lin Yang, Junjie Chen, Zan Wang, Weijing Wang, Jiajun Jiang, Xuyuan Dong, and Wenbin Zhang. 2021. Semi-Supervised Log-Based Anomaly Detection via Probabilistic Label Estimation. In Proceedings of the 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). IEEE, 1448-1460." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 44, + 316, + 441, + 346 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 316, + 441, + 346 + ], + "spans": [ + { + "bbox": [ + 44, + 316, + 441, + 346 + ], + "type": "text", + "content": "[239] Runqing Yang, Shiqing Ma, Haitao Xu, Xiangyu Zhang, and Yan Chen. 2020. UIScope: Accurate, Instrumentation-free, and Visible Attack Investigation for GUI Applications. In Proceedings of the Network and Distributed Systems Security Symposium." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 44, + 346, + 440, + 365 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 346, + 440, + 365 + ], + "spans": [ + { + "bbox": [ + 44, + 346, + 440, + 365 + ], + "type": "text", + "content": "[240] Zhaohui Yang, Wei Xu, Le Liang, Yuanhao Cui, Zhijin Qin, and Mérouane Debbah. 2025. On Privacy, Security, and Trustworthiness in Distributed Wireless Large AI Models. Science China Information Sciences 68, 7 (2025), 1-15." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 44, + 366, + 440, + 385 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 366, + 440, + 385 + ], + "spans": [ + { + "bbox": [ + 44, + 366, + 440, + 385 + ], + "type": "text", + "content": "[241] Jia-Yu Yao, Kun-Peng Ning, Zhen-Hui Liu, Mu-Nan Ning, Yu-Yang Liu, and Li Yuan. 2023. LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples. arXiv preprint arXiv:2310.01469 (2023)." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 44, + 386, + 440, + 406 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 386, + 440, + 406 + ], + "spans": [ + { + "bbox": [ + 44, + 386, + 440, + 406 + ], + "type": "text", + "content": "[242] Kundi Yao, Heng Li, Weiyi Shang, and Ahmed E Hassan. 2020. A Study of the Performance of General Compressors on Log Files. Empirical Software Engineering 25 (2020), 3043-3085." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 44, + 406, + 440, + 426 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 406, + 440, + 426 + ], + "spans": [ + { + "bbox": [ + 44, + 406, + 440, + 426 + ], + "type": "text", + "content": "[243] Kundi Yao, Mohammed Sayagh, Weiyi Shang, and Ahmed E Hassan. 2021. Improving State-of-the-Art Compression Techniques for Log Management Tools. IEEE Transactions on Software Engineering 48, 8 (2021), 2748-2760." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 44, + 426, + 440, + 446 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 426, + 440, + 446 + ], + "spans": [ + { + "bbox": [ + 44, + 426, + 440, + 446 + ], + "type": "text", + "content": "[244] Yifan Yao, Jinhao Duan, Kaidi Xu, Yuanfang Cai, Zhibo Sun, and Yue Zhang. 2024. A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and the Ugly. *High-Confidence Computing* (2024), 100211." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 44, + 446, + 441, + 475 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 446, + 441, + 475 + ], + "spans": [ + { + "bbox": [ + 44, + 446, + 441, + 475 + ], + "type": "text", + "content": "[245] Heng Yin, Dawn Song, Manuel Egele, Christopher Kruegel, and Engin Kirda. 2007. Panorama: Capturing System-Wide Information Flow for Malware Detection and Analysis. In Proceedings of the 14th ACM Conference on Computer and Communications Security. 116-127." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 44, + 475, + 441, + 505 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 475, + 441, + 505 + ], + "spans": [ + { + "bbox": [ + 44, + 475, + 441, + 505 + ], + "type": "text", + "content": "[246] Kun Yin, Meng Yan, Ling Xu, Zhou Xu, Zhao Li, Dan Yang, and Xiaohong Zhang. 2020. Improving Log-Based Anomaly Detection with Component-Aware Analysis. 
In Proceedings of the 2020 IEEE International Conference on Software Maintenance and Evolution. IEEE, 667-671." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 44, + 505, + 441, + 535 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 505, + 441, + 535 + ], + "spans": [ + { + "bbox": [ + 44, + 505, + 441, + 535 + ], + "type": "text", + "content": "[247] Guangba Yu, Pengfei Chen, Pairui Li, Tianjun Weng, Haibing Zheng, Yuetang Deng, and Zibin Zheng. 2023. LogReducer: Identify and Reduce Log Hotspots in Kernel on the Fly. In Proceedings of the 2023 IEEE/ACM 45th International Conference on Software Engineering. IEEE, 1763-1775." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 44, + 535, + 441, + 565 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 535, + 441, + 565 + ], + "spans": [ + { + "bbox": [ + 44, + 535, + 441, + 565 + ], + "type": "text", + "content": "[248] Le Yu, Shiqing Ma, Zhuo Zhang, Guanhong Tao, Xiangyu Zhang, Dongyan Xu, Vincent E Urias, Han Wei Lin, Gabriela F Ciocarlie, Vinod Yegneswaran, et al. 2021. ALchemist: Fusing Application and Audit Logs for Precise Attack Provenance without Instrumentation. In Proceedings of the Network and Distributed System Security Symposium." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 44, + 565, + 441, + 594 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 565, + 441, + 594 + ], + "spans": [ + { + "bbox": [ + 44, + 565, + 441, + 594 + ], + "type": "text", + "content": "[249] Siyu Yu, Yifan Wu, Ying Li, and Pinjia He. 2024. Unlocking the Power of Numbers: Log Compression via Numeric Token Parsing. In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering. 919-930." 
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 44, + 595, + 441, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 595, + 441, + 624 + ], + "spans": [ + { + "bbox": [ + 44, + 595, + 441, + 624 + ], + "type": "text", + "content": "[250] Jun Zengy, Xiang Wang, Jiahao Liu, Yinfang Chen, Zhenkai Liang, Tat-Seng Chua, and Zheng Leong Chua. 2022. ShadeWatcher: Recommendation-Guided Cyber Threat Analysis Using System Audit Records. In Proceedings of the 2022 IEEE Symposium on Security and Privacy. IEEE, 489-506." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 44, + 625, + 441, + 655 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 625, + 441, + 655 + ], + "spans": [ + { + "bbox": [ + 44, + 625, + 441, + 655 + ], + "type": "text", + "content": "[251] Chao Zha, Zhiyu Wang, Yifei Fan, Bing Bai, Yinjie Zhang, Sainan Shi, and Ruyun Zhang. 2025. A-NIDS: Adaptive Network Intrusion Detection System based on Clustering and Stacked CTGAN. IEEE Transactions on Information Forensics and Security (2025)." 
+ } + ] + } + ], + "index": 22 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 60, + 244, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 60, + 244, + 69 + ], + "spans": [ + { + "bbox": [ + 44, + 60, + 244, + 69 + ], + "type": "text", + "content": "Deep Learning-based Intrusion Detection Systems: A Survey" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "spans": [ + { + "bbox": [ + 426, + 61, + 440, + 68 + ], + "type": "text", + "content": "1:37" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 233, + 672, + 440, + 682 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 233, + 672, + 440, + 682 + ], + "spans": [ + { + "bbox": [ + 233, + 672, + 440, + 682 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." + } + ] + } + ], + "index": 24 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 36 + }, + { + "para_blocks": [ + { + "bbox": [ + 44, + 86, + 441, + 633 + ], + "type": "list", + "angle": 0, + "index": 21, + "blocks": [ + { + "bbox": [ + 44, + 86, + 441, + 115 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 86, + 441, + 115 + ], + "spans": [ + { + "bbox": [ + 44, + 86, + 441, + 115 + ], + "type": "text", + "content": "[252] Bo Zhang, Yansong Gao, Changlong Yu, Boyu Kuang, Zhi Zhang, Hyoungshick Kim, and Anmin Fu. 2025. TAPAS: An Efficient Online APT Detection with Task-guided Process Provenance Graph Segmentation and Analysis. In Proceedings of the USENIX Security Symposium. 607-624." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 44, + 116, + 441, + 146 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 116, + 441, + 146 + ], + "spans": [ + { + "bbox": [ + 44, + 116, + 441, + 146 + ], + "type": "text", + "content": "[253] Pei Zhang, Fangzhou He, Han Zhang, Jiankun Hu, Xiaohong Huang, Jilong Wang, Xia Yin, Huahong Zhu, and Yahui Li. 2023. Real-Time Malicious Traffic Detection with Online Isolation Forest over SD-WAN. IEEE Transactions on Information Forensics and Security 18 (2023), 2076-2090." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 44, + 146, + 441, + 176 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 146, + 441, + 176 + ], + "spans": [ + { + "bbox": [ + 44, + 146, + 441, + 176 + ], + "type": "text", + "content": "[254] Shenglin Zhang, Yuhe Ji, Jiaqi Luan, Xiaohui Nie, Ziang Chen, Minghua Ma, Yongqian Sun, and Dan Pei. 2024. End-to-End Automl for Unsupervised Log Anomaly Detection. In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering. 1680–1692." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 44, + 177, + 441, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 177, + 441, + 205 + ], + "spans": [ + { + "bbox": [ + 44, + 177, + 441, + 205 + ], + "type": "text", + "content": "[255] Tianzhu Zhang, Han Qiu, Gabriele Castellano, Myriana Rifai, Chung Shue Chen, and Fabio Pianese. 2023. System Log Parsing: A Survey. IEEE Transactions on Knowledge and Data Engineering 35, 8 (2023), 8596-8614. https://doi.org/10.1109/TKDE.2022.3222417" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 44, + 206, + 440, + 226 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 206, + 440, + 226 + ], + "spans": [ + { + "bbox": [ + 44, + 206, + 440, + 226 + ], + "type": "text", + "content": "[256] Tianye Zhang, Xumeng Wang, Zongzhuang Li, Fangzhou Guo, Yuxin Ma, and Wei Chen. 2017. 
A Survey of Network Anomaly Visualization. Science China Information Sciences 60, 12 (2017), 121101." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 44, + 226, + 441, + 266 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 226, + 441, + 266 + ], + "spans": [ + { + "bbox": [ + 44, + 226, + 441, + 266 + ], + "type": "text", + "content": "[257] Xu Zhang, Yong Xu, Qingwei Lin, Bo Qiao, Hongyu Zhang, Yingnong Dang, Chunyu Xie, Xinsheng Yang, Qian Cheng, Ze Li, et al. 2019. Robust Log-Based Anomaly Detection on Unstable Log Data. In Proceedings of the 2019 27th ACM joint meeting on European software engineering conference and symposium on the foundations of software engineering. 807-817." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 44, + 267, + 441, + 295 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 267, + 441, + 295 + ], + "spans": [ + { + "bbox": [ + 44, + 267, + 441, + 295 + ], + "type": "text", + "content": "[258] Huaqin Zhao, Zhengliang Liu, Zihao Wu, Yiwei Li, Tianze Yang, Peng Shu, Shaochen Xu, Haixing Dai, Lin Zhao, Gengchen Mai, et al. 2024. Revolutionizing Finance with LLMs: An Overview of Applications and Insights. arXiv preprint arXiv:2401.11641 (2024)." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 44, + 296, + 441, + 326 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 296, + 441, + 326 + ], + "spans": [ + { + "bbox": [ + 44, + 296, + 441, + 326 + ], + "type": "text", + "content": "[259] Jianjin Zhao, Qi Li, Zewei Han, Junsong Fu, Guoshun Nan, Meng Shen, and Bharat K Bhargava. 2024. ReTrial: Robust Encrypted Malicious Traffic Detection via Discriminative Relation Incorporation and Misleading Relation Correction. IEEE Transactions on Information Forensics and Security (2024)." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 44, + 326, + 441, + 355 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 326, + 441, + 355 + ], + "spans": [ + { + "bbox": [ + 44, + 326, + 441, + 355 + ], + "type": "text", + "content": "[260] Ruijie Zhao, Xianwen Deng, Zhicong Yan, Jun Ma, Zhi Xue, and Yijun Wang. 2022. MT-FlowFormer: A Semi-Supervised Flow Transformer for Encrypted Traffic Classification. In Proceedings of the 28th ACM SIGKDD conference on knowledge discovery and data mining. 2576-2584." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 44, + 356, + 440, + 375 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 356, + 440, + 375 + ], + "spans": [ + { + "bbox": [ + 44, + 356, + 440, + 375 + ], + "type": "text", + "content": "[261] Ying Zhao, FangFang Zhou, XiaoPing Fan, Xing Liang, and YongGang Liu. 2013. IDSRadar: A Real-Time Visualization Framework for IDS Alerts. Science China Information Sciences 56, 8 (2013), 1-12." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 44, + 375, + 441, + 406 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 375, + 441, + 406 + ], + "spans": [ + { + "bbox": [ + 44, + 375, + 441, + 406 + ], + "type": "text", + "content": "[262] Ziming Zhao, Zhaoxuan Li, Jialun Jiang, Fengyuan Yu, Fan Zhang, Congyuan Xu, Xinjie Zhao, Rui Zhang, and Shize Guo. 2022. ERNN: Error-Resilient RNN for Encrypted Traffic Detection Towards Network-Induced Phenomena. IEEE Transactions on Dependable and Secure Computing (2022)." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 44, + 406, + 441, + 434 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 406, + 441, + 434 + ], + "spans": [ + { + "bbox": [ + 44, + 406, + 441, + 434 + ], + "type": "text", + "content": "[263] Ziming Zhao, Zhuotao Liu, Huan Chen, Fan Zhang, Zhuoxue Song, and Zhaoxuan Li. 2024. Effective DDoS Mitigation via ML-Driven In-Network Traffic Shaping. 
IEEE Transactions on Dependable and Secure Computing 21, 4 (2024), 4271-4289." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 44, + 435, + 441, + 465 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 435, + 441, + 465 + ], + "spans": [ + { + "bbox": [ + 44, + 435, + 441, + 465 + ], + "type": "text", + "content": "[264] Ying Zhong, Zhiliang Wang, Xingang Shi, Jiahai Yang, and Keqin Li. 2024. RFG-HELAD: A Robust Fine-Grained Network Traffic Anomaly Detection Model Based on Heterogeneous Ensemble Learning. IEEE Transactions on Information Forensics and Security (2024)." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 44, + 465, + 441, + 495 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 465, + 441, + 495 + ], + "spans": [ + { + "bbox": [ + 44, + 465, + 441, + 495 + ], + "type": "text", + "content": "[265] Junwei Zhou, Shaowen Ying, Shulan Wang, Dongdong Zhao, Jianwen Xiang, Kaitai Liang, and Peng Liu. 2025. LogDLR: Unsupervised Cross-System Log Anomaly Detection Through Domain-Invariant Latent Representation. IEEE Transactions on Dependable and Secure Computing (2025)." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 44, + 496, + 441, + 525 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 496, + 441, + 525 + ], + "spans": [ + { + "bbox": [ + 44, + 496, + 441, + 525 + ], + "type": "text", + "content": "[266] Jieming Zhu, Shilin He, Pinjia He, Jinyang Liu, and Michael R Lyu. 2023. Loghub: A Large Collection of System Log Datasets for AI-Driven Log Analytics. In Proceedings of the 2023 IEEE 34th International Symposium on Software Reliability Engineering. IEEE, 355-366." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 44, + 525, + 441, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 525, + 441, + 555 + ], + "spans": [ + { + "bbox": [ + 44, + 525, + 441, + 555 + ], + "type": "text", + "content": "[267] Tiantian Zhu, Jiayu Wang, Linqi Ruan, Chunlin Xiong, Jinkai Yu, Yaosheng Li, Yan Chen, Mingqi Lv, and Tieming Chen. 2021. General, Efficient, and Real-Time Data Compaction Strategy for APT Forensic Analysis. IEEE Transactions on Information Forensics and Security 16 (2021), 3312-3325." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 44, + 555, + 441, + 584 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 555, + 441, + 584 + ], + "spans": [ + { + "bbox": [ + 44, + 555, + 441, + 584 + ], + "type": "text", + "content": "[268] Tiantian Zhu, Jinkai Yu, Chunlin Xiong, Wenrui Cheng, Qixuan Yuan, Jie Ying, Tieming Chen, Jiabo Zhang, Mingqi Lv, Yan Chen, et al. 2023. APTSHIELD: A Stable, Efficient and Real-time APT Detection System for Linux Hosts. IEEE Transactions on Dependable and Secure Computing 20, 6 (2023), 5247-5264." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 44, + 585, + 441, + 605 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 585, + 441, + 605 + ], + "spans": [ + { + "bbox": [ + 44, + 585, + 441, + 605 + ], + "type": "text", + "content": "[269] Yao Zhu, LI Zhenyuan, Yangyang Wei, and Shouling Ji. 2025. The Case for Learned Provenance-based System Behavior Baseline. In Forty-second International Conference on Machine Learning." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 44, + 605, + 441, + 633 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 605, + 441, + 633 + ], + "spans": [ + { + "bbox": [ + 44, + 605, + 441, + 633 + ], + "type": "text", + "content": "[270] Michael Zipperle, Florian Gottwalt, Elizabeth Chang, and Tharam S. Dillon. 2022. 
Provenance-based Intrusion Detection Systems: A Survey. ACM Computing Surveys 55 (2022), 1 - 36. https://api-semanticscholar.org/CorpusID:249579087" + } + ] + } + ], + "index": 20 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "spans": [ + { + "bbox": [ + 44, + 61, + 58, + 68 + ], + "type": "text", + "content": "1:38" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "spans": [ + { + "bbox": [ + 115, + 60, + 441, + 69 + ], + "type": "text", + "content": "Zhiwei Xu, Yujuan Wu, Shiheng Wang, Jiabao Gao, Tian Qiu, Ziqi Wang, Hai Wan, and Xibin Zhao" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 42, + 672, + 249, + 681 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 42, + 672, + 249, + 681 + ], + "spans": [ + { + "bbox": [ + 42, + 672, + 249, + 681 + ], + "type": "text", + "content": "J. ACM, Vol. 1, No. 1, Article 1. Publication date: October 2025." 
+ } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 486, + 720 + ], + "page_idx": 37 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07866/db4d652b-f5d0-4008-97fa-5ff3dca4208f_content_list.json b/data/2025/2504_07xxx/2504.07866/db4d652b-f5d0-4008-97fa-5ff3dca4208f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..2465823f28b24e49bc2259fba527d84ac0888370 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/db4d652b-f5d0-4008-97fa-5ff3dca4208f_content_list.json @@ -0,0 +1,2318 @@ +[ + { + "type": "text", + "text": "PANGU ULTRA: PUSHING THE LIMITS OF DENSE LARGE LANGUAGE MODELS ON ASCEND NPUS", + "text_level": 1, + "bbox": [ + 160, + 102, + 836, + 150 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Pangu Team, Huawei", + "bbox": [ + 426, + 179, + 571, + 193 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "PanguTech@huawei.com", + "bbox": [ + 411, + 205, + 583, + 220 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "ABSTRACT", + "text_level": 1, + "bbox": [ + 449, + 263, + 547, + 277 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "We present Pangu Ultra, a Large Language Model (LLM) with 135 billion parameters and dense Transformer modules trained on Ascend Neural Processing Units (NPUs). Although the field of LLM has been witnessing unprecedented advances in pushing the scale and capability of LLM in recent years, training such a large-scale model still involves significant optimization and system challenges. To stabilize the training process, we propose depth-scaled sandwich normalization, which effectively eliminates loss spikes during the training process of deep models. We pre-train our model on 13.2 trillion diverse and high-quality tokens and further enhance its reasoning capabilities during post-training. 
To perform such large-scale training efficiently, we utilize 8,192 Ascend NPUs with a series of system optimizations. Evaluations on multiple diverse benchmarks indicate that Pangu Ultra significantly advances the state-of-the-art capabilities of dense LLMs such as Llama 405B and Mistral Large 2, and even achieves competitive results with DeepSeek-R1, whose sparse model structure contains much more parameters. Our exploration demonstrates that Ascend NPUs are capable of efficiently and effectively training dense models with more than 100 billion parameters. Our model and system will be available for our commercial customers.", + "bbox": [ + 199, + 292, + 795, + 501 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 142, + 520, + 284, + 536 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Large Language Models (LLMs) have transformed the landscape and our understanding of Artificial Intelligence. Their remarkable capabilities are enabling more and more AI applications, bringing numerous commercial opportunities. Unsurprisingly, teams are racing to push the scaling law to create models with more and more parameters. Although the Transformer [68] structure is a popular choice for large models, it is still debatable whether the models should be sparse or dense. With more than 100 billion parameters, sparse architectures powered by Mixture of Experts (MoE), such as DeepSeek [46, 19], have demonstrated surreal human-like language and thinking abilities [36], which makes sparse models a popular choice when pushing the limit of LLMs.", + "bbox": [ + 140, + 551, + 854, + 662 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "At the same time, dense models, such as the Qwen [11, 72], Llama [25], and Gemma [67] series, are currently popular among models with fewer than 100 billion parameters thanks to their strong performance in specific skills and ease of deployment. 
The parameters in dense models are usually easier to optimize, while the dynamic components in sparse models usually need to turn to additional heuristics for stable training. In addition, the dense model structures at inference time make it easier to optimize system performance due to deterministic parameter usage. In this study, we aim to further explore the potential of dense models at large scales and show the performance of dense models can be on par with state-of-the-art MoE models on diverse tasks.", + "bbox": [ + 140, + 669, + 854, + 780 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "The numbers of model parameters and layers are two crucial dimensions to release the full potential of dense models. While model parameter count is critical for model performance and plays a central role in scaling laws [38], recent studies [73, 50] suggest that model depth has a significant impact on reasoning capabilities. However, our exploration in those two aspects poses significant challenges in exploring the limits of those two aspects. Deeper models usually introduce unstable training, manifested as spikes in training loss curves. Experimental observations suggest that those spikes can knock our model out of the ideal parameter landscape and cause irreparable damage to the training process. 
Meanwhile, training hundreds of billions of parameters in dense models requires orchestrating thousands of AI processors, which poses significant system efficiency challenges.", + "bbox": [ + 140, + 786, + 854, + 912 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "Pangu", + "bbox": [ + 145, + 39, + 230, + 64 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "TECHNICAL REPORT", + "bbox": [ + 720, + 55, + 852, + 68 + ], + "page_idx": 0 + }, + { + "type": "aside_text", + "text": "arXiv:2504.07866v2 [cs.CL] 11 Apr 2025", + "bbox": [ + 22, + 276, + 60, + 717 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "1", + "bbox": [ + 493, + 935, + 504, + 946 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "For our exploration, we introduce Pangu Ultra, a dense Transformer architecture with 135 billion parameters and 94 layers. The model setup is at the forefront scale of the top performing dense models [11, 72, 25, 67]. Regarding challenges of training deep models, we hypothesize that the loss spikes are due to gradient fluctuations, which in turn hinder convergence rates and may lead to training divergence. Therefore, we propose two techniques, the depth-scaled sandwich norm and tiny initialization, both of which are designed to maintain stable gradient norms. Specifically, we first replace pre-layer norm [47] with the sandwich norm [20] and scaled initialization values in the post-layer normalization based on the model's depth. This depth-based adjustment helps control the range of gradient fluctuations effectively. In addition, we scale the standard deviation of weight initialization according to the model's width and depth, leading to tiny initialization. 
These two techniques lead to more stable gradients throughout the training process, eliminating loss spikes during the training of Pangu Ultra, and improving overall model performance.", + "bbox": [ + 143, + 90, + 854, + 243 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In practice, we pre-train Pangu Ultra on 13.2 trillion tokens of our built corpus. In the pre-training stage, we use three phrases of data corpus each with a distinct data recipe. The design principles behind three phrases are first to help the model develop knowledge and linguistics, and then to directly equip it with reasoning ability, and finally to boost it on actively learning to reason. The model context window is gradually extended from 4K to 128K. In the post-training stage, we begin with applying efficient supervised fine-tuning (SFT) for a cold start, utilizing a carefully curated set of instruction data. Following this, Pangu Ultra undergoes further optimization through Reinforcement Learning (RL). The overall training of Pangu Ultra is stable in this process.", + "bbox": [ + 143, + 250, + 854, + 359 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To handle large-scale model training of more than 100 billion parameters, we utilize a large-scale computing cluster consisting of 8,192 Ascend NPUs and employ a series of system optimization to improve the system efficiency. The primary challenge is minimizing pipeline bubbles [29] at large scales, which arise due to batch size constraints [35]. We take advantage of the typical 4 types of parallelism on our Ascend cluster, that is, Data Parallelism (DP), Tensor Parallelism (TP) [63], Sequence Parallelsim [39] and Pipeline Parallelism (PP) [30, 51]. As the training cluster scales up, the mini-batch size allocated to each DP decreases, leading to an increased pipeline bubble ratio. 
To mitigate this issue, we employ additional virtual pipeline (VPP) scheduling [52] with fine-grained tuning to ensure load balancing and reduce the PP bubble ratio from $30.45\\%$ to $6.8\\%$ . The second challenge is to achieve high training efficiency for long sequences. Both attention mask generation and self-attention computation are time- and memory-intensive, particularly for long contexts. We utilize a NPU Fusion Attention (NFA) operator [4, 18, 17] tailored for the Ascend NPUs, which supports reset attention mask scenarios and eliminates the need to construct the attention mask before calling the NFA, thus improving computational efficiency and reducing memory cost. Under the implementation of several fine-grained system optimization, we achieve a Model FLOPs Utilization (MFU) [14] of over $50\\%$ when training Pangu Ultra on 8,192 Ascend NPUs.", + "bbox": [ + 143, + 366, + 854, + 574 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "On public evaluation benchmarks, Pangu Ultra outperforms existing dense LLMs including Llama 405B and Mistral Large 2 123B on almost all major language tasks, and achieves competitive results with sparse models consisting of more than 500 billion parameters. These results indicate the potential of dense model capabilities is still promising to explore. Pangu Ultra also demonstrates that the Ascend NPUs are suitable for exploring the full capabilities of large-scale dense language models.", + "bbox": [ + 143, + 580, + 854, + 650 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2 Model Architecture", + "text_level": 1, + "bbox": [ + 143, + 672, + 339, + 688 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The basic architecture of Pangu Ultra is similar to Llama 3 [25]. It has 135 billion parameters with a hidden dimension of 12,288, a SwiGLU [60] feed-forward network (FFN) intermediate size of 28,672, and 94 layers. 
The attention blocks in Pangu Ultra leverage Group Query Attention (GQA) to reduce KV-cache size by incorporating 96 query heads and 8 KV heads.", + "bbox": [ + 143, + 705, + 854, + 761 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "There are two crucial differences to address the fundamental challenges of training stability and convergence in large dense LLMs. We propose Depth-Scaled Sandwich-Norm to replace the layer normalization and TinyInit for parameter initialization. By integrating these techniques, Pangu Ultra achieves substantial improvements over previous dense models.", + "bbox": [ + 143, + 767, + 854, + 823 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1 Depth-Scaled Sandwich-Norm", + "text_level": 1, + "bbox": [ + 143, + 842, + 393, + 857 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Large-scale dense models typically adopt deeper architectures [22], although MoE models usually scale in width [19]. However, increased depth introduces greater challenges in maintaining training stability. Given the prohibitive cost of pre-training, stable training of large dense LLMs becomes paramount. Pre-Layer", + "bbox": [ + 143, + 869, + 854, + 911 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 493, + 936, + 503, + 946 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Normalization (Pre-LN) has been found to make back-propagation more efficient for deep Transformers [69], leading to its widespread adoption in Transformer-based large language model (LLM) architectures [22, 11, 19].", + "bbox": [ + 140, + 90, + 854, + 133 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "However, in models employing the pre-LN structure, the fluctuating output scale of each sub-layer can easily lead to training instability [66]. To address this issue, sandwich-norm [20] applies an layer normalization to each sub-layer's output prior to the residual connection. 
While the sandwich-norm maintains the scale stability of individual sub-layer outputs, the progressive accumulation of output norms via residual connections across multiple layers may nevertheless lead to training instability.", + "bbox": [ + 140, + 138, + 854, + 210 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To mitigate this, we present the depth-scaled sandwich norm, which integrates the sandwich norm with a depth-scaled initialization scheme. The layer normalization regulates layer-wise output magnitudes through trainable gamma parameters, which are initialized with values scaled proportionally to the inverse of network depth. Figure 1 illustrates the differences between the depth-scaled sandwich-norm and pre-norm architectures. The formula of depth-scaled sandwich-norm is", + "bbox": [ + 140, + 215, + 854, + 287 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {h} \\leftarrow \\mathbf {h} + \\operatorname {N o r m} \\left(\\gamma_ {\\text {a t t n}}, \\operatorname {A T T N} (\\operatorname {N o r m} (\\mathbf {h}))\\right), \\quad \\gamma_ {\\text {a t t n}} = \\frac {c _ {\\text {a t t n}}}{\\sqrt {L}}, \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 307, + 303, + 854, + 337 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {h} \\leftarrow \\mathbf {h} + \\operatorname {N o r m} \\left(\\gamma_ {\\mathrm {m l p}}, \\operatorname {M L P} (\\operatorname {N o r m} (\\mathbf {h}))\\right), \\quad \\gamma_ {\\mathrm {m l p}} = \\frac {c _ {\\mathrm {m l p}}}{\\sqrt {L}},\n$$\n", + "text_format": "latex", + "bbox": [ + 310, + 333, + 679, + 363 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $L$ is the number of layers, $c_{\\mathrm{attn}}$ and $c_{\\mathrm{mlp}}$ are set as the initial output standard deviations of the attention layer and feed-forward network (FFN) layer, respectively. 
For Pangu Ultra, we set $c_{\\mathrm{attn}}$ to 0.283 and $c_{\\mathrm{mlp}}$ to 0.432.", + "bbox": [ + 140, + 375, + 854, + 417 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/4eb945e428bda4bc2842653b548c0700ebd7824c868c06942ff9fe8b6fda3cf9.jpg", + "image_caption": [ + "Figure 1: Structure comparison between Pre-Layer Norm (Pre-LN) and Depth-Scaled Sandwich-Norm (DSSN). DSSN applies normalization layers to both before and after the attention and FFN block, while Pre-LN only utilizes one normalization layer. DSSN also employs a depth-scaled initialization schema, which is not in the original sandwich norm." + ], + "image_footnote": [], + "bbox": [ + 245, + 438, + 400, + 612 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/8c2b76136987d8146ba87fcdd40ec48bbd7f765998e79609c7c6138eeb85aad7.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 470, + 438, + 774, + 611 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.2 Model Initialization", + "text_level": 1, + "bbox": [ + 142, + 710, + 321, + 724 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Existing works [53] observe that model initialization plays a crucial role in training stability and performance. Transformer-based LLMs widely adopt small initialization[53], which initialize all the weight with a normal distribution of standard deviation $\\sqrt{\\frac{2}{5d}}$ , where $d$ is the hidden dimension. It's also common practice to scale the weights of residual layers at initialization by a factor of $1 / \\sqrt{L}$ [57], where $L$ is the number of layers.", + "bbox": [ + 140, + 736, + 854, + 808 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Our findings suggest that scaling initialization by both model depth and width, using $\\sqrt{\\frac{1}{2dL}}$ , leads to faster loss convergence and improved performance on downstream tasks. We call this initialization method TinyInit. 
We hypothesize that TinyInit achieves more consistent parameter scales across the model, which may facilitate optimization and convergence.", + "bbox": [ + 140, + 813, + 854, + 878 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Research [66] indicates that embedding layers require different initialization strategies compared to other layers. Specifically, maintaining the standard deviation of embedding weights close to 1 may enhance training", + "bbox": [ + 140, + 883, + 854, + 912 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 493, + 935, + 504, + 946 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "stability. Our experimental results indicate that initializing with a standard deviation of 0.5 achieves good model performance.", + "bbox": [ + 140, + 90, + 854, + 122 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "2.3 Tokenizer", + "text_level": 1, + "bbox": [ + 142, + 136, + 253, + 150 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The design of the tokenizer significantly impacts model performance. An optimal vocabulary balances domain coverage (handling diverse tasks such as text, math, and code) with efficiency (encoding data with fewer tokens). Common methods use Byte-Pair Encoding (BPE) [62] and SentencePiece [40] build vocabularies by directly computing word frequencies across the entire training dataset. However, this approach suffers from domain imbalance, as common domains such as general text dominate the vocabulary, while specialized domains such as math and code remain underrepresented due to their limited data volume.", + "bbox": [ + 140, + 162, + 854, + 247 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Pangu Ultra adopts a domain-aware vocabulary strategy. We perform independent frequency analyses across multiple domains including general Chinese, general English, code, and mathematics, generating distinct domain-specific vocabularies. 
These vocabularies are then merged and de-duplicated to form a unified vocabulary of 153,376 unique tokens, maintaining balanced representation across domains while preserving overall compression efficiency. Table 1 summarizes the detailed token distribution across different domains.", + "bbox": [ + 140, + 251, + 854, + 323 + ], + "page_idx": 3 + }, + { + "type": "table", + "img_path": "images/381a61c48d4873ff6cc62083bf48d8b583cd52cc7d46518e58419af2a4afcd0a.jpg", + "table_caption": [ + "Table 1: Token distribution in the unified vocabulary of Pangu Ultra." + ], + "table_footnote": [], + "table_body": "
DomainNumber of TokensPercentage (%)
English68,01744.35
Chinese41,05326.77
Other30,57319.93
Latin-based languages4,5072.94
Arabic2,7551.80
Korean2,7331.78
Mathematics2,1391.39
Japanese1,5991.04
Total153,376100.00
", + "bbox": [ + 295, + 358, + 702, + 507 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3 Model Training", + "text_level": 1, + "bbox": [ + 142, + 542, + 307, + 559 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In this section, we present our training pipeline, which is similar to training state-of-the-art language models, e.g., DeepSeek-V3 [19] and Llama 3 [22]. The training process consists of three main stages: pre-training, long context extension, and post-training. Each stage has specific training strategies and data construction methods to gradually enhance the model capabilities.", + "bbox": [ + 140, + 573, + 857, + 630 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.1 Pre-training Stage", + "text_level": 1, + "bbox": [ + 142, + 646, + 312, + 662 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We first introduce the data construction in the pre-training of Pangu Ultra, followed by the details of data verification. Then we elaborate the practical approach for the long context extension. The detailed pre-training hyper-parameters are finally presented.", + "bbox": [ + 140, + 671, + 854, + 715 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.1.1 Data Construction", + "text_level": 1, + "bbox": [ + 142, + 729, + 325, + 742 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The pre-training corpus of Pangu Ultra contains high-quality and diverse 13.2T tokens produced by our tokenizer, as stated in Section 2.3. Table 2 shows the pre-training process is structured into three sequential phases: the general phase, the reasoning phase, and the annealing phase. These phases are designed to progressively develop general knowledge and linguistic capabilities, enhance reasoning skills, and further refine knowledge and behavior, respectively. 
The amount of data used in each phase is 12T, including 7.4T and 4.6T data in two distinct subphases, 0.8T, and 0.4T tokens.", + "bbox": [ + 140, + 752, + 854, + 835 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In the initial general training phase, we utilize a corpus focused on developing broad linguistic capabilities and general knowledge. This stage primarily consists of English and Chinese data collected from a diverse range of sources, including web pages, books, encyclopedias, etc. Data from the multilingual and various industrial domains is also incorporated. Based on our data quality assessment in Section 3.1.2, we perfer to use higher-quality data in the second sub-phrase than the first.", + "bbox": [ + 140, + 842, + 854, + 912 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 493, + 936, + 504, + 946 + ], + "page_idx": 3 + }, + { + "type": "table", + "img_path": "images/eb74fadcfd096c8aaeae1875a83d62eb78cc6b37c897972dd151607c90ddf109.jpg", + "table_caption": [ + "Table 2: Data recipe of Pangu Ultra pre-training." + ], + "table_footnote": [], + "table_body": "
DatasetGeneralReasoningAnnealing
General English54%14%21%
General Chinese13%6%20%
Multi-lingual8%4%3%
Instruction2%11%20%
Math6%28%18%
Code17%37%18%
", + "bbox": [ + 316, + 112, + 676, + 215 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In the second reasoning phase, we increase the proportion of high-quality and diverse mathematical and coding data—raising it to over $60\\%$ of the corpus to enhance the reasoning capabilities of Pangu Ultra. The coding data includes both pure code and mixed text-code samples. The math data also involves a lot of English and Chinese texts. Moreover, LLM-generated synthetic data is widely incorporated to enrich the corpus.", + "bbox": [ + 140, + 250, + 854, + 306 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The third annealing phrase is designed to help the model consolidate and effectively apply the knowledge and reasoning skills acquired in the previous stages. Therefore, we place greater emphasis on instruction data, which accounts for approximately $20\\%$ of the corpus. We curate in-house question banks covering a wide range of topics and construct both short and long chain-of-thought (CoT) responses. These reasoning paths are carefully refined to ensure clarity and logical coherence.", + "bbox": [ + 140, + 311, + 854, + 381 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Overall, the pre-training data for Pangu Ultra is carefully designed to ensure high quality, diversity, and minimal redundancy. We assign quality and difficulty labels to the data and adopt a curriculum-based sampling strategy for the reasoning data across all three phases—progressing from simpler examples to more complex ones throughout the training cycle.", + "bbox": [ + 140, + 387, + 854, + 444 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.1.2 Data Quality Assessment", + "text_level": 1, + "bbox": [ + 142, + 465, + 370, + 482 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Data quality assessment plays a crucial role in enhancing the overall quality of the data. 
Training Pangu Ultra employs both rule-based heuristics and model-based evaluation to enhance data quality.", + "bbox": [ + 140, + 494, + 854, + 523 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "For model-based quality assessment, we leverage the Pangu series as the base model. To better align quality evaluation with human value judgments, we fine-tune the model using a manually annotated dataset. The fine-tuned evaluator is then applied to a large-scale pre-training corpus exceeding 10T tokens. Data samples are scored across multiple dimensions, including cleanliness, fluency, educational value, and richness. These annotated scores are then used in a prioritized sampling strategy, where higher-quality samples are assigned higher sampling probabilities.", + "bbox": [ + 140, + 527, + 854, + 612 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To validate the effectiveness of our data quality assessment, we conducted an ablation study using a proxy model with 2.6 billion parameters. Empirical results show that, to achieve comparable performance, the model trained on low-scoring data required $1.6 \\times$ more tokens than the one trained on high-quality high-scoring data. Therefore, high data quality is important for improving training efficiency.", + "bbox": [ + 140, + 618, + 854, + 674 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.1.3 Pre-training Parameters", + "text_level": 1, + "bbox": [ + 140, + 696, + 366, + 712 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Pangu Ultra is trained using AdamW optimizer [48] with a weight decay of 0.1 and epsilon is set to $1 \\times 10^{-8}$ . The momentum parameters are set to $\\beta_{1} = 0.9$ and $\\beta_{2} = 0.95$ . The gradient clipping norm is set to 1.0. 
To improve the training stability and overall performance, the pre-training of Pangu Ultra is organized into the following phases:", + "bbox": [ + 140, + 724, + 854, + 781 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "0T-7.4T tokens The sequence length is set to 4K (RoPE base $= 1 \times 10^{4}$ ). The batch size increases from 1,024 to 1,536 (at 1.2T) and 2,048 (at 1.9T). The increased batch size improves training efficiency and throughput. The learning rate follows a cosine decay from $1 \times 10^{-4}$ to $1 \times 10^{-5}$ with 4,000 warmup steps to ensure stable early training.", + "bbox": [ + 140, + 786, + 854, + 842 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "7.4T-12.0T tokens The sequence length remains at 4K with a batch size of 2,048. The learning rate is fixed at $1 \times 10^{-5}$ in this phase.", + "bbox": [ + 140, + 848, + 852, + 876 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "12.0T-12.8T tokens The sequence length increases to 8K (RoPE base $= 1 \times 10^{5}$ ). The batch size is reduced to 1,536. The learning rate decays from $1 \times 10^{-5}$ to $7.5 \times 10^{-6}$ using cosine scheduling.", + "bbox": [ + 140, + 883, + 852, + 912 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 493, + 935, + 504, + 946 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.2 Long Context Extension", + "text_level": 1, + "bbox": [ + 142, + 90, + 354, + 107 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The ability of LLMs to understand long context inputs is critical for long-thinking processes and practical applications. In the final stages of pre-training, Pangu Ultra is trained on long sequence data to support a maximum context length of 128K. 
The training consists of two progressive phases: the first phase expands the context length to 32K, and the second phase further expands it to 128K.", + "bbox": [ + 140, + 122, + 854, + 178 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Rotary Position Embedding (RoPE) [64] is the core module for supporting ultra-long input sequences. Existing open-source LLMs typically extend context length by either increasing the base frequency in RoPE [64, 32] or by adopting methods such as YaRN [55, 22, 19]. Our findings show that both methods perform similarly well if the hyper-parameters are correctly chosen, and we adopt the increased base frequency method in Pangu Ultra. To determine the base frequency in RoPE for long-context extension, we evaluate the offline performance of \"Needle In A Haystack\" (NIAH) with different base frequencies at the target sequence length, and select the one with the best result. This ensures a relatively low initial loss in long-context training. In practice, the selected base frequency for $32\mathrm{K}$ is $1.6\times 10^{6}$ , and for $128\mathrm{K}$ is $2.56\times 10^{7}$ . Detailed hyper-parameters of Pangu Ultra's long-context training are summarized below:", + "bbox": [ + 140, + 184, + 854, + 309 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "8K to 32K phase The sequence length is expanded to 32K (RoPE base $= 1.6 \times 10^{6}$ ). The batch size is 384 with a learning rate of $7.5 \times 10^{-6}$ , matching the final learning rate from the previous pre-training stage.", + "bbox": [ + 140, + 314, + 854, + 345 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "32K to 128K phase The sequence length is further expanded to $128\mathrm{K}$ (RoPE base $= 2.56 \times 10^{7}$ ). The batch size is reduced to 96. 
The learning rate remains $7.5 \\times 10^{-6}$ .", + "bbox": [ + 140, + 349, + 854, + 380 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3.3 Post-training Alignment", + "text_level": 1, + "bbox": [ + 142, + 409, + 352, + 425 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In the post-training stage, Pangu Ultra is aligned with human preferences through Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL). This stage focuses on constructing high-quality, diverse instruction data and designing scalable, efficient training strategies.", + "bbox": [ + 140, + 440, + 854, + 484 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3.3.1 Post-training Data", + "text_level": 1, + "bbox": [ + 142, + 512, + 326, + 527 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In constructing post-training data, we emphasize the data quality, diversity, and complexity. The data pool is curated from a wide range of domains and task types, including general question answering, AI-generated content (AIGC), text classification and analysis, programming, mathematics, logical reasoning, and tool usage. These tasks cover application areas such as finance, healthcare, and public services. Data sources span open-source instruction datasets, real-world industrial queries, and synthetic problems derived from the pre-training corpus.", + "bbox": [ + 140, + 541, + 854, + 626 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "To promote data diversity, data samples are selected along two orthogonal dimensions, guided by the entropy law [74]: domain and task type. Hierarchical tagging models with varying levels of granularity are used to support balanced data sampling. 
Data quality is managed through a combination of rule-based validation and model-based validation, which helps eliminate low-quality or ambiguous samples.", + "bbox": [ + 140, + 630, + 854, + 688 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "To better stimulate the reasoning capabilities of Pangu Ultra, a large portion of the post-training data, approximately six-sevenths, consists of reasoning tasks such as mathematics, coding, and logic. The post-training data covers a range of complexities, with a focus on moderately to highly challenging tasks.", + "bbox": [ + 140, + 691, + 854, + 737 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3.3.2 Post-training Strategy", + "text_level": 1, + "bbox": [ + 142, + 763, + 351, + 780 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In the post-training stage, Pangu Ultra was first trained with SFT to establish preliminary instruction-following capabilities. Following SFT, we apply RL with outcome-based reward signals to further enhance reasoning, alignment, and instruction-following abilities of Pangu Ultra.", + "bbox": [ + 140, + 792, + 854, + 837 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We implement a latency-tolerant reinforcement learning framework optimized for the Ascend infrastructure, which will be detailed in a future report. The framework enables efficient large-scale policy optimization on Ascend. To guide the RL process, we implement a hybrid reward system that provides task-specific feedback for mathematics, coding, and general problem-solving. 
This hybrid reward system combines deterministic reward signals and model-based evaluations to facilitate stable and efficient policy optimization.", + "bbox": [ + 140, + 842, + 854, + 912 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 493, + 936, + 504, + 946 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4 Training System", + "text_level": 1, + "bbox": [ + 142, + 89, + 313, + 107 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Training our Pangu Ultra with 135B parameters on 13.2 trillion tokens requires ensuring training stability and efficiency on a large-scale computing cluster. In this section, we elaborate on the details of our training system from two important perspectives: parallelization strategies and system-level optimization techniques, in Section 4.2 and Section 4.3. Overall, we achieve over $52\\%$ Model FLOPs Utilization (MFU) when training Pangu Ultra on 8,192 Ascend NPUs.", + "bbox": [ + 140, + 122, + 857, + 194 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.1 Computing Setup", + "text_level": 1, + "bbox": [ + 140, + 210, + 307, + 226 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "A computing cluster with 8,192 Ascend Neural Processing Units (NPUs) [5, 6] is deployed to train Pangu Ultra. Each node in the cluster houses 8 NPUs, interconnected via Huawei Cache Coherence System (HCCS) using a full-mesh topology, and each device is equipped with 64GB of memory. 
Inter-node communication is facilitated through RDMA over Converged Ethernet (RoCE) fabric, leveraging 200 Gbps interconnects for communication between NPUs across different nodes.", + "bbox": [ + 140, + 237, + 857, + 308 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.2 Parallelism Strategies for Model Scaling", + "text_level": 1, + "bbox": [ + 140, + 325, + 464, + 342 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "In order to scale model training $^1$ , we leverage a combination of different parallelism strategies to distribute the model across multiple NPUs, including Data Parallelism (DP) [43], Tensor Parallelism (TP) [63], Sequence Parallelism (SP) [39], and Pipeline Parallelism (PP) [30, 51]. For Pangu Ultra, 128-way DP with ZeRO [58] is performed to reduce the memory cost of model parameters and the associated optimizer states. 8-way TP is applied to leverage the high intra-node bandwidth for efficient activation transfer, while 8-way PP is adopted to utilize inter-node connections, since it only requires transmitting activations at the partition boundaries. However, as mentioned in existing studies [35, 30, 51, 56], pipeline parallelism encounters severe PP bubbles when the training cluster scales up, primarily due to batch size constraints [35]. For one-forward-one-backward (1F1B) PP scheduling, the bubble ratio is defined as $\frac{p - 1}{p - 1 + n}$ , where $p$ represents the number of pipeline stages and $n$ denotes the number of micro batches for every DP. The ratio represents the idle time of accelerators, as shown in Figure 2. A large-scale training cluster increases the number of DPs, which in turn reduces the number of micro batches assigned to each DP due to batch size constraints, leading to a significant increase in the bubble ratio. Therefore, minimizing the bubble ratio is crucial for improving system efficiency. 
Under such circumstances, we employ interleaved pipeline-parallel scheduling with 6-way virtual PP stages on each device [52] and reduce the bubble ratio from $30.45\%$ to $6.8\%$ . Through careful tuning of load balancing across PP and VPP stages, we are able to achieve approximately $43\%$ MFU on an 8,192 NPU cluster as a baseline.", + "bbox": [ + 140, + 352, + 854, + 579 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/3b34ebb39e3da6d7ff8bbd2f5b9f48782f7c2d993f79bfac7786efcf3d058b73.jpg", + "image_caption": [ + "Figure 2: Pipeline parallelism and the interleaved pipeline-parallel scheduling." + ], + "image_footnote": [], + "bbox": [ + 212, + 595, + 777, + 830 + ], + "page_idx": 6 + }, + { + "type": "page_footnote", + "text": "1The training of Pangu Ultra is supported by the MindSpeed [8] and Megatron [7, 63] frameworks, which provide comprehensive parallel strategies and system optimization methods.", + "bbox": [ + 140, + 883, + 854, + 912 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 493, + 935, + 504, + 946 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.3 System Optimization", + "text_level": 1, + "bbox": [ + 142, + 90, + 330, + 107 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Based on the optimizations outlined in Section 4.2 that achieved $43\%$ MFU, additional system-level enhancements are implemented to further improve training efficiency. Through a combination of kernel fusions, context parallelism via subsequence partitioning, data caching and sharing mechanisms, and other refinements, Pangu Ultra benefits from a significant improvement in training efficiency. 
These comprehensive optimizations enable the system to achieve over $52\%$ MFU, an improvement of 9 percentage points over the baseline configuration mentioned in Section 4.2.", + "bbox": [ + 140, + 117, + 857, + 202 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/67cea2b063522f19f7edd1b30826bee1c89190c8a74a7264b4d7d481914b5d6b.jpg", + "image_caption": [ + "(b) The MC2 implementation", + "Figure 3: A Comparison of the default transformer computation and the MC2 method. Note that in actual training, communication and computation tasks are fused into a single kernel in MC2." + ], + "image_footnote": [], + "bbox": [ + 253, + 222, + 743, + 335 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.3.1 Kernel Fusion", + "text_level": 1, + "bbox": [ + 142, + 422, + 294, + 436 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Kernel fusion is widely adopted in LLM training to enhance efficiency. It combines multiple operations into a single kernel, reducing the number of data accesses to global memory [17]. During the training phase of Pangu Ultra, key operators are fused, resulting in significant improvements in hardware utilization and overall training efficiency.", + "bbox": [ + 140, + 449, + 854, + 505 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "MC2 - Merged Compute and Communication Tensor parallelism, when combined with sequence parallelism, introduces All-Gather (AG) and Reduce-Scatter (RS) communication operations for exchanging input and output activations across distributed devices. This approach exhibits a direct dependency between matrix multiplication (MatMul) and AG/RS communications, which fundamentally constrains the overlapping of TP communication with computational workflows. MC2 is implemented [2, 3] to tackle this challenge by fusing MatMul computations with communication operations. 
It decomposes large computation and communication tasks into fine-grained subtasks and employs pipelined execution to maximize overlap between communication and computation. Thus, MC2 significantly reduces communication latency and improves hardware utilization (Figure 3).", + "bbox": [ + 140, + 511, + 854, + 636 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "NPU Fusion Attention Training LLMs on long sequences suffers from the quadratic memory and computational requirements of self-attention as sequence length grows. To address these challenges, Flash Attention (FA) has emerged as a standard technique in LLM training owing to its superior performance [18, 17]. Pangu Ultra leverages a self-attention fusion operator, called NPU Fusion Attention (NFA) [9], which is specifically optimized for Ascend NPUs, offering system-level improvements across a wide range of self-attention computation scenarios.", + "bbox": [ + 140, + 642, + 857, + 726 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/1e0482fca83e3a0c110ebe1d09086c29da16050b788765f76847abe61a9728e5.jpg", + "image_caption": [ + "Figure 4: Examples of attention mask compression for the NFA operator." + ], + "image_footnote": [], + "bbox": [ + 362, + 744, + 643, + 886 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 493, + 935, + 503, + 946 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "It is worth mentioning that Pangu Ultra uses a reset attention mask strategy to prevent self-attention between different documents within a sequence. This requires calculating the corresponding attention mask for every sequence, leading to significant memory and computational overhead. To mitigate the time and memory requirements of generating attention masks, the NFA operator employs a mask compression optimization. 
As shown in Figure 4, NFA utilizes a $2048 \\times 2048$ causal mask as a template to construct the computational mask within the fusion attention operator. For every iteration, Pangu Ultra retrieves the actual sequence length based on the position of the end-of-document (eod) token, which is then provided as input to the NFA operator to accelerate the computation of self-attention. The detailed usage of NFA is provided in the Ascend documentation [9].", + "bbox": [ + 140, + 90, + 854, + 217 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Other Kernel Fusions for Efficiency In addition to MC2 and NPU-optimized fused attention, we also integrate a series of kernel fusion optimizations within key components such as RMSNorm [77], SwiGLU [60], and rotary positional embeddings (RoPE) [64], as well as critical processes including gradient accumulation and PP send/receive communications. These fusion operators are designed to reduce kernel launch and memory access overheads, while maintaining high numerical precision and enhancing overall training performance.", + "bbox": [ + 140, + 222, + 854, + 292 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/16eacb740d6bf0b2784477cb2487f0ab1063e6a5012a50fc66bb0e95be1356a0.jpg", + "image_caption": [ + "(a) Original" + ], + "image_footnote": [], + "bbox": [ + 143, + 349, + 305, + 450 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/44eff9e81d69a0bbd3268fc86564bba8580753fe6519428b8e8df3699bcc491e.jpg", + "image_caption": [ + "Causal Masking", + "(b) Megatron" + ], + "image_footnote": [], + "bbox": [ + 323, + 349, + 488, + 450 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/ffe59b514cbf23bd67320295314345dd201247ecf63ba855135b76d9846748f8.jpg", + "image_caption": [ + "Reset of Attention Mask", + "(c) Megatron" + ], + "image_footnote": [], + "bbox": [ + 508, + 349, + 673, + 450 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": 
"images/8840be29e16484031a9e2441ba3bff2385b5b84697466d6f32a2ba9ddc05825e.jpg", + "image_caption": [ + "(d) Ours", + "Figure 5: Examples of the mechanism of sub-sequence partitioning for context parallelism." + ], + "image_footnote": [], + "bbox": [ + 689, + 351, + 848, + 452 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "4.3.2 Optimization for Long Context Training", + "text_level": 1, + "bbox": [ + 140, + 561, + 477, + 575 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Scaling long-context capabilities is becoming increasingly important for applications such as long document summarization and conversational AI. However, training on long sequences presents several challenges in terms of both time and memory complexity. To improve the efficiency of long-context training, we propose two key strategies, as outlined below.", + "bbox": [ + 140, + 593, + 852, + 650 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Sub-Sequence Partitioning for Context Parallelism Context parallelism (CP) is an crucial approach for the training of very long sequences, that divides the input sequence into segments to reduce memory consumption [44, 33]. Yet, with causal masking, simply splitting the sequence into $CP$ chunks results in a severely imbalanced workload for Ring Self-Attention (RSA) [44] (as shown in Figure 5(a)). Megatron-LM addresses this issue by splitting the sequence into $2 \\times CP$ chunks, where each rank receives chunks from both the top and bottom, thus balancing the workload within a CP group (Figure 5(b)) [7]. However, this method still results in an imbalanced workload when the attention mask is reset (Figure 5(c)). 
Therefore, in training with 128k-long contexts, we propose a load-balanced partitioning strategy for CP training, where each rank is responsible for computing two chunks within each subsequence (Figure 5(d)).", + "bbox": [ + 140, + 656, + 852, + 781 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Fast Mask Generation and Data Reuse When scaling the training sequence of Pangu Ultra up to 128k, the generation of the attention mask or the calculation of the actual sequence length still incurs a non-negligible performance overhead. Additionally, in the training scenario with reset attention masks, each VPP stage is required to retrieve the corresponding mask or actual sequence length in every iteration, resulting in redundant computations and increased overhead. We address these problems by (1) using efficient NPU operators to compute the attention mask, instead of constructing it on the CPU, thus accelerating mask generation and eliminating the need for data transfer between the CPU and NPU, and (2) enabling cross-VPP stage mask sharing, where attention masks are generated by the first stage (VPP0) and shared across different VPP stages on the same rank, thereby avoiding redundant mask computations and memory cost.", + "bbox": [ + 140, + 786, + 852, + 912 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 493, + 935, + 503, + 946 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "5 Results", + "text_level": 1, + "bbox": [ + 142, + 89, + 238, + 104 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "In this section, we discuss the evaluation results of Pangu Ultra, including pre-training performance and post-training outcomes. 
In addition, we provide comprehensive ablation studies that examine the model architecture and further discuss observations from training Pangu Ultra.", + "bbox": [ + 140, + 121, + 857, + 162 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "5.1 Pre-Training Loss Curve", + "text_level": 1, + "bbox": [ + 140, + 180, + 421, + 196 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Figure 6 shows the training loss curve of Pangu Ultra during the entire pre-training. Each segment in the loss curve corresponds to one training stage, as described in Section 3.1.3. The loss curves demonstrate consistent descending trends across all training stages. In the second interval, although the descent rate moderated due to a constant learning rate, the performance metrics continued to show steady improvement.", + "bbox": [ + 140, + 205, + 854, + 275 + ], + "page_idx": 9 + }, + { + "type": "image", + "img_path": "images/4cef587854556619f00415671ee67e04874b43df92f0b322a5c7d7d9f318d9e9.jpg", + "image_caption": [ + "Figure 6: The training loss curve of Pangu Ultra during the pre-training stage." + ], + "image_footnote": [], + "bbox": [ + 256, + 294, + 740, + 556 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Zero loss spike As shown in Figure 6, no loss spikes occur throughout the entire pre-training process. While such spikes are common in LLM training [66], their absence here underscores the importance of our depth-scaled sandwich norm and TinyInit in ensuring stable training. 
The negative effect of loss spikes on model performance is further elaborated in Section 5.4.1.", + "bbox": [ + 140, + 619, + 854, + 676 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "5.2 Pre-Training Stage", + "text_level": 1, + "bbox": [ + 142, + 691, + 316, + 708 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Benchmarks We evaluate the Pangu Ultra base model across multiple domains using open-source benchmarks, including language understanding, question answering, code generation, and math problem solving. The evaluation mainly uses English and Chinese test sets, with some additional multilingual benchmarks for broader coverage.", + "bbox": [ + 140, + 719, + 854, + 776 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Language understanding: We employ Hellaswag [76] and Winogrande for contextual reasoning tasks, DROP [21], RACE [42], and ARC [15] series for comprehensive reading comprehension evaluation, along with PIQA [12], Natural Questions [41] and TriviaQA [37] to assess knowledge retrieval.", + "- Question answering: The assessment includes C-Eval [31] for Chinese knowledge, MMLU [27] and its advanced variant MMLU-Pro [70] for English domain knowledge, supplemented by BigBenchHard [65] to evaluate creative problem-solving.", + "- Code generation and understanding: We utilize HumanEval [13] and MBPP [10] for standard code generation tasks, and CruxEval [26] for code understanding and reasoning." 
+ ], + "bbox": [ + 140, + 787, + 854, + 912 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 490, + 935, + 509, + 946 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "- Mathematical Reasoning: We measure skills with $CMath$ [71] and $GSM8K$ [16] for fundamental arithmetic and simple problems, $MATH$ [28] for advanced mathematical reasoning, and $MGSM$ [61] for multilingual math problem solving.", + "bbox": [ + 142, + 90, + 854, + 133 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Baselines & Comparison Settings We compare Pangu Ultra against several strong baselines covering both dense models (Qwen2.5-72B, Llama-405B) and MoE architectures (DeepSeek-V3). For base models, the majority of our evaluations employ few-shot inputs, with a minority using zero-shot prompts. We evaluate most benchmarks with gold answers through exact matching, while employing execution-based verification for code generation tasks.", + "bbox": [ + 140, + 151, + 854, + 220 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Evaluation Results In Table 3, we compare the pre-trained base model of Pangu Ultra with other leading models. Overall, Pangu Ultra achieves state-of-the-art performance on most general English benchmarks and all Chinese benchmarks. While it trails DeepSeek V3 on code and math-related tasks, it performs competitively in these domains.", + "bbox": [ + 140, + 239, + 854, + 296 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "A closer examination reveals that Pangu Ultra excels on Chinese benchmarks, surpassing both Qwen 2.5 72B and DeepSeek V3, the current best-performing Chinese model. In addition, when compared to Llama 3.1 405B, Pangu Ultra achieves better scores on most of the challenging benchmarks, while utilizing only about $29\\%$ of the training FLOPs required by Llama 405B. 
These results suggest the effectiveness of our model architecture and the high quality of our training data.", + "bbox": [ + 140, + 301, + 854, + 373 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/688e6f49bbae37cf3b66fea8df45d115891068481814963c9c72a37797b11531.jpg", + "table_caption": [ + "Table 3: Comparison of Pangu Ultra and other representative models across a diverse set of benchmarks for evaluating language, coding and mathematical skills. Bold values represent the best results in each row, and underlined values indicate that Pangu Ultra is the best among dense models." + ], + "table_footnote": [], + "table_body": "<table><tr><td>
Benchmark (Metric)# ShotsQwen2.5 72B BaseLlama-3.1 405B BaseDeepSeek V3 BasePangu Ultra Base
Architecture-DenseDenseMoEDense
# Activated Params-72B405B37B135B
# Total Params-72B405B671B135B
EnglishBBH (EM)3-shot79.882.987.579.1
MMLU (EM)5-shot85.084.487.185.4
MMLU-Pro (EM)5-shot58.352.864.463.1
DROP (F1)3-shot80.686.089.061.0
ARC-Easy (EM)25-shot98.498.498.9100.0
ARC-Challenge (EM)25-shot94.595.395.397.0
HellaSwag (EM)10-shot84.889.288.999.0
PIQA (EM)0-shot82.685.984.798.0
WinoGrande (EM)5-shot82.385.284.991.0
RACE-Middle (EM)5-shot68.174.267.197.0
RACE-High (EM)5-shot50.356.851.397.0
TriviaQA (EM)5-shot71.982.782.990.5
NaturalQuestions (EM)5-shot33.241.540.052.7
AGIEval (EM)0-shot75.860.679.680.4
CodeHumanEval (Pass@1)0-shot53.054.965.281.1
MBPP (Pass@1)3-shot72.668.475.472
CRUXEval-I (EM)2-shot59.158.567.361.8
CRUXEval-O (EM)2-shot59.959.969.861.5
MathGSM8K (EM)8-shot88.383.589.389.3
MATH (EM)4-shot54.449.061.662.5
MGSM (EM)8-shot76.269.979.875.1
CMath (EM)3-shot84.577.390.778.2
ChineseCLUEWSC (EM)5-shot82.583.082.795.0
C-Eval (EM)5-shot89.272.590.190.3
CMMLU (EM)5-shot89.573.788.891.7
CMRC (EM)1-shot75.876.076.386.0
C3 (EM)0-shot76.779.778.699.0
CCPM (EM)0-shot88.578.692.093.0
", + "bbox": [ + 207, + 441, + 784, + 898 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 490, + 935, + 506, + 946 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "5.3 Post-Training and Reasoning Capability", + "text_level": 1, + "bbox": [ + 142, + 90, + 464, + 107 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Benchmarks We conduct a comprehensive evaluation of the Pangu Ultra's capabilities over reasoning and non-reasoning tasks:", + "bbox": [ + 140, + 116, + 854, + 147 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Sophisticated reasoning tasks encompass three specialized subcategories: mathematical competence measured by AIME 2024 [49] and MATH-500, Coding competition benchmarks LiveCodeBench [34] and scientific reasoning task GPQA Diamond [59];", + "- General language comprehension and reasoning capabilities, represented by MMLU-Pro [24], Arena Hard [45]." + ], + "bbox": [ + 140, + 157, + 856, + 233 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Baselines & Comparison Settings We compare Pangu Ultra against strong baselines including GPT-4o0513, reasoning models DeepSeek-R1, Hunyuan-T1 and large dense models, Qwen2.5-72B-Instruct and Mistral-Large 2. We use Pass@1 averaged over multiple independent runs as the evaluation metric to assess the performance.", + "bbox": [ + 140, + 247, + 854, + 304 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Evaluation Results In Table 4, we compare the evaluation results of Pangu Ultra with other baseline models. 
Pangu Ultra achieves state-of-the-art performance on the reasoning benchmarks including AIME 2024, MATH-500, GPQA and LiveCodeBench, while maintaining strong capabilities in general language comprehension tasks.", + "bbox": [ + 140, + 318, + 852, + 376 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "When compared to dense LLMs (Qwen and Mistral-Large 2), Pangu Ultra shows particularly significant advantages in reasoning tasks. This superior performance stems from the 0.8T reasoning-focused data used in pre-training (Section 3.1.3). The reasoning-enhanced base model substantially benefits subsequent post-training phases.", + "bbox": [ + 140, + 380, + 854, + 439 + ], + "page_idx": 11 + }, + { + "type": "table", + "img_path": "images/7919ae6ab8d9a21337ecd5d2e2e396908f906d778d62011a971c810fa5816360.jpg", + "table_caption": [ + "Table 4: Comparison of Pangu Ultra models and other representative models across benchmarks. $\\dagger$ indicates results from Artificial Analysis [1]." + ], + "table_footnote": [], + "table_body": "
ModelAIME 2024MATH-500GPQA DiamondLiveCode BenchArenaHardMMLU-pro
GPT-4o-05139.374.649.932.980.472.6
Qwen2.5-72B16.083.14927.681.272.0
Mistral-Large 2†11.073.648.629.3-69.7
Hunyuan-T179.896.269.364.991.987.2
DeepSeek-R179.897.371.565.992.384.0
Pangu Ultra80.897.474.266.591.584.4
", + "bbox": [ + 145, + 488, + 849, + 625 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "5.4 Ablation Studies", + "text_level": 1, + "bbox": [ + 142, + 650, + 302, + 666 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "This section presents additional ablation studies of the model architecture and analyzes key training behaviors to facilitate a deeper understanding and discussion of dense LLM training.", + "bbox": [ + 140, + 676, + 854, + 705 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "5.4.1 Depth-scaled Sandwich-norm", + "text_level": 1, + "bbox": [ + 142, + 719, + 403, + 734 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "We conducted experiments to validate the effectiveness of depth-scaled sandwich norm compared to pre-norm architectures. Using a dense Transformer model with 13 billion parameters trained on 300 billion tokens with identical hyperparameters for both configurations, we observe significant improvements.", + "bbox": [ + 140, + 743, + 852, + 787 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Figure 7 shows the depth-scaled sandwich-norm architecture stabilizes gradient norms and effectively eliminates loss spikes, leading to faster training convergence. We evaluated performance on two composite benchmarks: EN basic, consisting of multiple English benchmarks, and ZH basic, representing Chinese benchmarks. Additional evaluation using LAMBADA [54] (English) and WPLC [23] (Chinese) next-token prediction tasks confirmed the advantage of applying depth-scaled sandwich-norm. The results clearly suggest that preventing loss spikes during pre-training is crucial for optimal model performance.", + "bbox": [ + 140, + 791, + 854, + 876 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "To further ablate the effect of our depth-scaled factor in RMSNorm initialization, we compare with the plain sandwich-norm that does not have the $\\sqrt{L}$ scaling factor in Eq. (1). 
Here, we use a proxy model containing 1.6", + "bbox": [ + 140, + 880, + 854, + 912 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 490, + 935, + 509, + 946 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/a7350ff1b1e11bf1265c0e1d4a8e0cc37fad10ba9fa99b46a41d65427aa9f37d.jpg", + "image_caption": [ + "(a) Loss" + ], + "image_footnote": [], + "bbox": [ + 151, + 102, + 480, + 265 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/f452483736d20db84cf41920e9224133306f09b056f6508e7ab535b3be175ddb.jpg", + "image_caption": [ + "(b) Gradient norm", + "Figure 7: Pre-training loss and gradient norm for a 13B model using Pre-LN and Depth-Scaled Sandwich-Norm (DSSN). The curves with Pre-LN has significant spikes, which harm the trained model, while the curves of DSSN are much smoother." + ], + "image_footnote": [], + "bbox": [ + 509, + 102, + 846, + 266 + ], + "page_idx": 12 + }, + { + "type": "table", + "img_path": "images/bdbab18d054876503d9911625ccd93b413947f5c0cf2e0dc1198f3ecd08db00e.jpg", + "table_caption": [ + "Table 5: Performance comparison between Pre-LN and Depth-scaled Sandwich-Norm." + ], + "table_footnote": [], + "table_body": "
Model | Tokens (B) | EN basic | ZH basic | LAMBADA | WPLC
Pre-LN | 300 | 0.42 | 0.52 | 0.675 | 0.194
Depth-scaled sandwich-norm | 300 | 0.45 | 0.54 | 0.693 | 0.224
", + "bbox": [ + 181, + 373, + 815, + 428 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "billion parameters and 94 layers, which has the same depth with Pangu Ultra. By using this proxy model, we examine the effectiveness of sandwich-norm on training very deep Transformers. In Figure 8, we can observe some loss spikes with the plain sandwich-norm, but our depth-scaled sandwich-norm can be trained smoothly, and attains lower loss.", + "bbox": [ + 140, + 450, + 854, + 507 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/9e86702bb026850de11bf3b69527034295140cde348309c4f15a0f509b0108b0.jpg", + "image_caption": [ + "Figure 8: Pre-training loss for a 94-layer 1.6B model using original and depth-scaled sandwich-norm. The original sandwich-norm still suffers loss spikes during training." + ], + "image_footnote": [], + "bbox": [ + 290, + 522, + 709, + 734 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "5.4.2 Tiny Initialization", + "text_level": 1, + "bbox": [ + 142, + 799, + 321, + 814 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "We conduct experiments to study the effectiveness of TinyInit proposed in Section 2.2. After being trained on 102 billion tokens, Pangu Ultra initialized with TinyInit strategy, with standard deviation $\\sqrt{\\frac{1}{2dL}}$ , performs significantly better than the baseline model that utilizes traditional initialization, whose standard deviations are $\\sqrt{\\frac{2}{5d}}$ and $\\sqrt{\\frac{2}{5dL}}$ . The results are shown in Table 6. 
BIG-bench (aug) is a test set developed internally through data augmentation of the original BIG-bench, designed to mitigate the impact of test set leakage.", + "bbox": [ + 140, + 823, + 852, + 912 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 12 + }, + { + "type": "table", + "img_path": "images/6a81bc754358c61ca73a33d2cb633c9cb433eeb8b601d84ad3ea2a99f5e50d84.jpg", + "table_caption": [ + "Table 6: Performance comparison of traditional initialization and TinyInit." + ], + "table_footnote": [], + "table_body": "
Model | Tokens (B) | EN basic | ZH basic | LAMBADA | WPLC | C-Eval | MMLU | BIG-bench (aug)
Baseline | 102 | 0.444 | 0.538 | 0.694 | 0.229 | 0.476 | 0.473 | 0.357
TinyInit | 102 | 0.456 | 0.537 | 0.727 | 0.257 | 0.524 | 0.502 | 0.384
", + "bbox": [ + 143, + 112, + 869, + 167 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "5.4.3 Layer Statistics of Pangu Ultra", + "text_level": 1, + "bbox": [ + 140, + 190, + 410, + 205 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Stable activation scale Figure 9 presents the activation patterns of attention and FFN modules across Transformer layers, showing the mean, standard deviation, and top-1 activation values. The activation distributions demonstrate stability, with standard deviations maintaining consistent scales throughout pretraining while preserving a clear layer-wise pattern. Our analysis reveals the presence of \"super activations\", whose magnitude reaches $10^{3}$ magnitude in shallow layers, a phenomenon consistent with findings in the Llama model [75]. Notably, Figure 9 illustrates that these top-1 activation values progressively decrease with layer depth, indicating that their influence becomes relatively small on the final output.", + "bbox": [ + 140, + 214, + 854, + 313 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/744bdafe62527ab3d0b64dfa4b6e24a915598b7b39cced4aa92fc003fa88a964.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 142, + 323, + 326, + 457 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/d5da125dd85b2e8c82176eac66539b38d92b199714d6f6de136b35734368f818.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 328, + 323, + 500, + 457 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/e81b7458f71ddbf3450f01740e5c85949a2351e6341d8edc2b5e85b601b68d53.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 500, + 323, + 676, + 457 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/dede2658653408b30951dd37efe53c0b13200e71b61574d61c0bcf26b98a01a6.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 676, + 323, + 852, + 457 + ], + "page_idx": 13 + }, + { + "type": "image", 
+ "img_path": "images/5038d458e0828307fbdc502f8a6a1c6d5b0596906f56bf8fb4b3bbfdacab24d8.jpg", + "image_caption": [ + "(a) Down projection" + ], + "image_footnote": [], + "bbox": [ + 140, + 457, + 326, + 590 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/c590e32f81ccd10ab1fa17b21f99304a0d063c9c1b7330cdccb19a9acce91b00.jpg", + "image_caption": [ + "(b) Up & Gate projection" + ], + "image_footnote": [], + "bbox": [ + 326, + 457, + 501, + 590 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/bbbeb145f0934dc8137745378248963f7a90023907c1fb7ea1f630c66fff4a9c.jpg", + "image_caption": [ + "Figure 9: Activation of attention and FFN modules. Mean, standard deviation, and top-1 value of activations are included. Each line represents different training tokens from 1T, 2T, 4T to 7T. The \"Std\" row shows the stable activation scale across layers. The \"Top 1\" row shows the existence of the \"super activations\" in down projection and attention output projection, with magnitudes falling within a reasonable range and comparable to those observed in the LLaMA model [75]." + ], + "image_footnote": [], + "bbox": [ + 501, + 457, + 676, + 590 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/a475262120a9476b96557c616ae8cbec27176f471f7ccb1961523ba8495df953.jpg", + "image_caption": [ + "(c) Attention output projection", + "(d) Attention QKV projection" + ], + "image_footnote": [], + "bbox": [ + 679, + 457, + 852, + 590 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Layer-wise patterns of depth-scaled sandwich norm. Figure 10 presents the distribution of scaling parameters $\\gamma$ across all sandwich-norm layers, revealing several key observations: All four LayerNorm $\\gamma$ parameters exhibit decreasing mean/standard deviation during training, consistent with weight decay effects. 
Post-norm $\\gamma$ values show layer-dependent patterns: The standard deviation of post-norm $\\gamma$ increases substantially with layer depth. Pre-norm $\\gamma$ maintains relatively constant standard deviation across layers. This pattern suggests an intriguing model behavior: shallow layers rely primarily on residual connections, while deeper layers progressively emphasize transformer layer outputs as the scaling factor $\\gamma$ grows in magnitude.", + "bbox": [ + 140, + 707, + 852, + 806 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "6 Conclusion", + "text_level": 1, + "bbox": [ + 140, + 825, + 269, + 839 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "We present Pangu Ultra, a dense language foundation model with 135 billion parameters trained on Ascend NPUs. To address challenges in training large-scale deep models, we propose depth-scaled sandwich-norm, enabling Pangu Ultra to achieve remarkable training stability without significant loss spikes. After being pre-trained on 13.2 trillion tokens and long context extension on 8,192 Ascend NPUs, our model further", + "bbox": [ + 140, + 854, + 854, + 912 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/13abe013a5a8aed380f4b8c00ef395e0970882df83898c64d6c7954aba6b2a0a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 140, + 85, + 326, + 220 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/b1ab1fd9180116b7ea72fe004b14fcb2125623cbd6b91a13aba18aee54ad7139.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 328, + 85, + 504, + 220 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/36d5b1a0863e300a108ae4c864d93d9defdec43b21ecd3aa16ef83845a45f05a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 504, + 85, + 678, + 220 + ], + "page_idx": 14 + }, + { + "type": "image", + 
"img_path": "images/03d2d47939ba7873c709b29c65584a4595694ece196a9c0981e2369b26fccc7c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 678, + 85, + 856, + 220 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/54f645ba18cba7ab9dfa94079f9f6f5e8ce6ba6d7fa82fe956387c12e75a53e1.jpg", + "image_caption": [ + "(a) Post-norm after attention" + ], + "image_footnote": [], + "bbox": [ + 143, + 222, + 326, + 354 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/fb0af1b40e94a5a4354f81884cac4230ddc3ea2a9f6da59e0106e6e238c74570.jpg", + "image_caption": [ + "(b) Post-norm after FFN" + ], + "image_footnote": [], + "bbox": [ + 328, + 222, + 504, + 354 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/cddcf8612c33bcdf7d3401bcd56bac43ffbfc1e5fc34084f14f871f4382925f2.jpg", + "image_caption": [ + "Figure 10: Distribution of sandwich-norm's $\\gamma$ parameter. Mean and standard deviation are included. Each line represents different training tokens from 1T, 2T, 4T to 7T. There is a clear layer-wise pattern of the two post-norms: the mean and std value of $\\gamma$ increase with depth. Larger post-norm $\\gamma$ indicates deeper layers emphasize more on transformer outputs instead of residual connections." + ], + "image_footnote": [], + "bbox": [ + 504, + 220, + 679, + 354 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/a1a9c9d0433ecbc43e62a5cfd8cbc6b7f7f773f05c274f2ecca5b21c1c92acef.jpg", + "image_caption": [ + "(c) Post-norm before attention", + "(d) Post-norm before FFN" + ], + "image_footnote": [], + "bbox": [ + 674, + 222, + 856, + 354 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "enhances its reasoning capabilities through Supervised Fine-Tuning and Reinforcement Learning. 
Extensive experiments lead to the observation that Pangu Ultra not only surpasses state-of-the-art dense LLMs like Llama 405B and Mistral Large 2 but also delivers competitive performance against larger sparse models such as DeepSeek-R1. These results highlight the efficacy of our architectural and systemic optimizations, paving the way for future advancements in scalable and efficient LLM training. In addition, our experience demonstrates that the Ascend NPUs are capable of training dense models with hundreds of billions of parameters.", + "bbox": [ + 140, + 462, + 852, + 546 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 142, + 566, + 238, + 580 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Artificial analysis. https://artificialanalysis.ai/.", + "[2] Ascend mc2. https://citee.com/qingfenxiaochong/MindSpeed/blob/master/docs/features/mc2.md.", + "[3] Ascend mc2. https://www.hiasmend.com/developer/techArticles/20240613-1.", + "[4] Flash attention. https://github.com/Dao-AILab/flash-attention.", + "[5] Huawei atlas 800t a2. https://e.huawei.com/cn/products/computing/ascend/ atlas-800t-a2.", + "[6] Huawei atlas 800t a2 technical specifications. https://support.huawei.com/enterprise/en/doc/EDOC1100349804/2bf2c017/technical-specifications?idPath=23710424|251366513|22892968|252309113|254184887.", + "[7] Megatron-lm. https://github.com/NVIDIA/Megatron-LM.", + "[8] Mindspeed. https://citee.com/ascend/MindSpeed.", + "[9] Npu fusion attention. https://www.hiasmend.com/document/detail/zh/Pytorch/60RC1/apiref/apilist/ptaoplist_000139.html.", + "[10] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie J. Cai, Michael Terry, Quoc V. Le, and Charles Sutton. Program synthesis with large language models. 
ArXiv, abs/2108.07732, 2021.", + "[11] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023." + ], + "bbox": [ + 143, + 590, + 854, + 912 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[12] Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. Piqa: Reasoning about physical commonsense in natural language. In AAAI Conference on Artificial Intelligence, 2019.", + "[13] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, Jared Kaplan, Harrison Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mo Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, David W. Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William H. Guss, Alex Nichol, Igor Babuschkin, Suchir Balaji, Shantanu Jain, Andrew Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew M. Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. 
ArXiv, abs/2107.03374, 2021.", + "[14] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways, 2022.", + "[15] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. ArXiv, abs/1803.05457, 2018.", + "[16] Karl Cobbe, Vineet Kosaraju, Mo Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. ArXiv, abs/2110.14168, 2021.", + "[17] Tri Dao. Flashattention-2: Faster attention with better parallelism and work partitioning. In The Twelfth International Conference on Learning Representations, 2024.", + "[18] Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness. 
In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022.", + "[19] DeepSeek-AI, Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Daya Guo, Dejian Yang, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Haowei Zhang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Li, Hui Qu, J. L. Cai, Jian Liang, Jianzhong Guo, Jiaqi Ni, Jiashi Li, Jiawei Wang, Jin Chen, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, Junxiao Song, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Lei Xu, Leyi Xia, Liang Zhao, Litong Wang, Liyue Zhang, Meng Li, Miaojun Wang, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Mingming Li, Ning Tian, Panpan Huang, Peiyi Wang, Peng Zhang, Qiancheng Wang, Qihao Zhu, Qinyu Chen, Qiushi Du, R. J. Chen, R. L. Jin, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, Runxin Xu, Ruoyu Zhang, Ruyi Chen, S. S. Li, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shaoqing Wu, Shengfeng Ye, Shengfeng Ye, Shirong Ma, Shiyu Wang, Shuang Zhou, Shuiping Yu, Shunfeng Zhou, Shuting Pan, T. Wang, Tao Yun, Tian Pei, Tianyu Sun, W. L. Xiao, Wangding Zeng, Wanjia Zhao, Wei An, Wen Liu, Wenfeng Liang, Wenjun Gao, Wenqin Yu, Wentao Zhang, X. Q. Li, Xiangyue Jin, Xianzu Wang, Xiao Bi, Xiaodong Liu, Xiaohan Wang, Xiaojin Shen, Xiaokang Chen, Xiaokang Zhang, Xiaosha Chen, Xiaotao Nie, Xiaowen Sun, Xiaoxiang Wang, Xin Cheng, Xin Liu, Xin Xie, Xingchao Liu, Xingkai Yu, Xinnan Song, Xinxia Shan, Xinyi Zhou, Xinyu Yang, Xinyuan Li, Xuecheng Su, Xuheng Lin, Y. K. Li, Y. Q. Wang, Y. X. Wei, Y. X. 
Zhu, Yang Zhang, Yanhong Xu, Yanhong Xu, Yanping Huang, Yao Li, Yao Zhao, Yaofeng Sun, Yaohui Li, Yaohui Wang, Yi Yu, Yi Zheng, Yichao Zhang, Yifan Shi, Yiliang Xiong, Ying He, Ying Tang, Yishi Piao, Yisong Wang, Yixuan Tan, Yiyang Ma, Yiyuan Liu, Yongqiang Guo, Yu Wu, Yuan Ou, Yuchen Zhu, Yuduan Wang, Yue Gong, Yuheng" + ], + "bbox": [ + 142, + 90, + 856, + 912 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 15 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Zou, Yujia He, Yukun Zha, Yunfan Xiong, Yunxian Ma, Yuting Yan, Yuxiang Luo, Yuxiang You, Yuxuan Liu, Yuyang Zhou, Z. F. Wu, Z. Z. Ren, Zehui Ren, Zhangli Sha, Zhe Fu, Zhean Xu, Zhen Huang, Zhen Zhang, Zhenda Xie, Zhengyan Zhang, Zhewen Hao, Zhibin Gou, Zhicheng Ma, Zhigang Yan, Zhihong Shao, Zhipeng Xu, Zhiyu Wu, Zhongyu Zhang, Zhuoshu Li, Zihui Gu, Zijia Zhu, Zijun Liu, Zilin Li, Ziwei Xie, Ziyang Song, Ziyi Gao, and Zizheng Pan. Deepseek-v3 technical report, 2025.", + "[20] Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, and Jie Tang. Cogview: Mastering text-to-image generation via transformers. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 19822-19835. Curran Associates, Inc., 2021.", + "[21] Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In North American Chapter of the Association for Computational Linguistics, 2019.", + "[22] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. 
arXiv preprint arXiv:2407.21783, 2024.", + "[23] Huibin Ge, Chenxi Sun, Deyi Xiong, and Qun Liu. Chinese wplc: A chinese dataset for evaluating pretrained language models on word prediction given long-range context. In Conference on Empirical Methods in Natural Language Processing, 2021.", + "[24] Aryo Pradipta Gema, Joshua Ong Jun Leang, Giwon Hong, Alessio Devoto, Alberto Carlo Maria Mancino, Rohit Saxena, Xuanli He, Yu Zhao, Xiaotang Du, Mohammad Reza Ghasemi Madani, et al. Are we done with mmlu? arXiv preprint arXiv:2406.04127, 2024.", + "[25] Aaron Grattafori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.", + "[26] Alex Gu, Baptiste Rozière, Hugh Leather, Armando Solar-Lezama, Gabriel Synnaeve, and Sida Wang. Cruxeval: A benchmark for code reasoning, understanding and execution. ArXiv, abs/2401.03065, 2024.", + "[27] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Xiaodong Song, and Jacob Steinhardt. Measuring massive multitask language understanding. ArXiv, abs/2009.03300, 2020.", + "[28] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Xiaodong Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. *ArXiv*, abs/2103.03874, 2021.", + "[29] Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan First, Dehao Chen, Mia Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V Le, Yonghui Wu, et al. Gpipe: Efficient training of giant neural networks using pipeline parallelism. Advances in neural information processing systems, 32, 2019.", + "[30] Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan First, Dehao Chen, Mia Xu Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, Yonghui Wu, and Zhifeng Chen. Gpipe: Efficient training of giant neural networks using pipeline parallelism. 
In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 103-112, 2019.", + "[31] Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Fanchao Qi, Yao Fu, Maosong Sun, and Junxian He. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. ArXiv, abs/2305.08322, 2023.", + "[32] Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Keming Lu, Kai Dang, Yang Fan, Yichang Zhang, An Yang, Rui Men, Fei Huang, Bo Zheng, Yibo Miao, Shanghaoran Quan, Yunlong Feng, Xingzhang Ren, Xuancheng Ren, Jingren Zhou, and Junyang Lin. Qwen2.5-coder technical report, 2024.", + "[33] Sam Ade Jacobs, Masahiro Tanaka, Chengming Zhang, Minjia Zhang, Shuaiwen Leon Song, Samyam Rajbhandari, and Yuxiong He. Deepspeed ulysses: System optimizations for enabling training of extreme long sequence transformer models, 2023.", + "[34] Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Armando Solar-Lezama, Koushik Sen, and Ion Stoica. Livecodebench: Holistic and contamination free evaluation of large language models for code. arXiv preprint arXiv:2403.07974, 2024." + ], + "bbox": [ + 143, + 90, + 856, + 912 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 16 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[35] Ziheng Jiang, Haibin Lin, Yinmin Zhong, Qi Huang, Yangrui Chen, Zhi Zhang, Yanghua Peng, Xiang Li, Cong Xie, Shibiao Nong, Yulu Jia, Sun He, Hongmin Chen, Zhihao Bai, Qi Hou, Shipeng Yan, Ding Zhou, Yiyao Sheng, Zhuo Jiang, Haohan Xu, Haoran Wei, Zhang Zhang, Pengfei Nie, Leqi Zou, Sida Zhao, Liang Xiang, Zherui Liu, Zhe Li, Xiaoying Jia, Jianxi Ye, Xin Jin, and Xin Liu. 
Megascale: Scaling large language model training to more than 10,000 gpus, 2024.", + "[36] Cameron R Jones and Benjamin K Bergen. Large language models pass the Turing test. arXiv preprint arXiv:2503.23674, 2025.", + "[37] Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. ArXiv, abs/1705.03551, 2017.", + "[38] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.", + "[39] Vijay Korthikanti, Jared Casper, Sangkug Lym, Lawrence McAfee, Michael Andersch, Mohammad Shoeybi, and Bryan Catanzaro. Reducing activation recomputation in large transformer models, 2022.", + "[40] Taku Kudo and John Richardson. SentencePiece: A simple and language independent subword tokenizer and tokenizer for neural text processing. In Eduardo Blanco and Wei Lu, editors, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium, November 2018. Association for Computational Linguistics.", + "[41] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur P. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc V. Le, and Slav Petrov. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466, 2019.", + "[42] Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard H. Hovy. Race: Large-scale reading comprehension dataset from examinations. 
ArXiv, abs/1704.04683, 2017.", + "[43] Shen Li, Yanli Zhao, Rohan Varma, Omkar Salpekar, Pieter Noordhuis, Teng Li, Adam Paszke, Jeff Smith, Brian Vaughan, Pritam Damania, and Soumith Chintala. Pytorch distributed: Experiences on accelerating data parallel training. CoRR, abs/2006.15704, 2020.", + "[44] Shenggui Li, Fuzhao Xue, Chaitanya Baranwal, Yongbin Li, and Yang You. Sequence parallelism: Long sequence training from system perspective. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2391-2404, Toronto, Canada, July 2023. Association for Computational Linguistics.", + "[45] Tianle Li, Wei-Lin Chiang, Evan Frick, Lisa Dunlap, Banghua Zhu, Joseph E. Gonzalez, and Ion Stoica. From live data to high-quality benchmarks: The arena-hard pipeline, April 2024.", + "[46] Aixin Liu, Bei Feng, Bin Wang, Bingxuan Wang, Bo Liu, Chenggang Zhao, Chengqi Dengr, Chong Ruan, Damai Dai, Daya Guo, et al. Deepseek-v2: A strong, economical, and efficient mixture-of-experts language model. arXiv preprint arXiv:2405.04434, 2024.", + "[47] Liyuan Liu, Xiaodong Liu, Jianfeng Gao, Weizhu Chen, and Jiawei Han. Understanding the difficulty of training transformers. In EMNLP (1), pages 5747-5763. Association for Computational Linguistics, 2020.", + "[48] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.", + "[49] MAA. Codeforces. American Invitational Mathematics Examination - AIME 2024, 2024. https://maa.org/math-competitions/american-invitational-mathematics-examination-aime.", + "[50] William Merrill and Ashish Sabharwal. A little depth goes a long way: The expressive power of log-depth transformers, 2025.", + "[51] Deepak Narayanan, Aaron Harlap, Amar Phanishayee, Vivek Seshadri, Nikhil R. Devanur, Gregory R. Ganger, Phillip B. Gibbons, and Matei Zaharia. Pipedream: generalized pipeline parallelism for DNN training. 
In Proceedings of the 27th ACM Symposium on Operating Systems Principles, SOSP 2019, Huntsville, ON, Canada, October 27-30, 2019, pages 1-15. ACM, 2019.", + "[52] Deepak Narayanan, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Korthikanti, Dmitri Vainbrand, Prethvi Kashinkunti, Julie Bernauer, Bryan Catanzaro, Amar Phanishayee, and Matei Zaharia. Efficient large-scale language model training ongpu clusters using megatron-lm. In" + ], + "bbox": [ + 143, + 90, + 854, + 912 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "18", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 17 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC '21, New York, NY, USA, 2021. Association for Computing Machinery.", + "[53] Toan Q Nguyen and Julian Salazar. Transformers without tears: Improving the normalization of self-attention. arXiv preprint arXiv:1910.05895, 2019.", + "[54] Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and R. Fernández. The lambada dataset: Word prediction requiring a broad discourse context. ArXiv, abs/1606.06031, 2016.", + "[55] Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. Yarn: Efficient context window extension of large language models, 2023.", + "[56] Penghui Qi, Xinyi Wan, Guangxing Huang, and Min Lin. Zero bubble pipeline parallelism, 2023.", + "[57] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.", + "[58] Samyam Rajbhandari, Jeff Rasley, Olatunj Ruwase, and Yuxiong He. Zero: memory optimizations toward training trillion parameter models. 
In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2020, Virtual Event / Atlanta, Georgia, USA, November 9-19, 2020, page 20. IEEE/ACM, 2020.", + "[59] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark. In First Conference on Language Modeling, 2024.", + "[60] Noam Shazeer. Glu variants improve transformer. arXiv preprint arXiv:2002.05202, 2020.", + "[61] Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, and Jason Wei. Language models are multilingual chain-of-thought reasoners. ArXiv, abs/2210.03057, 2022.", + "[62] Yusuxke Shibata, Takuya Kida, Shuichi Fukamachi, Masayuki Takeda, Ayumi Shinohara, Takeshi Shinohara, and Setsuo Arikawa. Byte pair encoding: A text compression scheme that accelerates pattern matching. 1999.", + "[63] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-lm: Training multi-billion parameter language models using model parallelism, 2020.", + "[64] Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding, 2023.", + "[65] Mirac Suzgun, Nathan Scales, Nathanael Scharli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, and Jason Wei. Challenging big-bench tasks and whether chain-of-thought can solve them. In Annual Meeting of the Association for Computational Linguistics, 2022.", + "[66] Sho Takase, Shun Kiyono, Sosuke Kobayashi, and Jun Suzuki. 
Spike no more: Stabilizing the pre-training of large language models, 2024.", + "[67] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Riviere, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on gemini research and technology. arXiv preprint arXiv:2403.08295, 2024.", + "[68] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.", + "[69] Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao. Learning deep transformer models for machine translation. In Anna Korhonen, David Traum, and Lluis Márquez, editors, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1810-1822, Florence, Italy, July 2019. Association for Computational Linguistics.", + "[70] Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, Tianle Li, Max W.F. Ku, Kai Wang, Alex Zhuang, Rongqi \"Richard\" Fan, Xiang Yue, and Wenhu Chen. Mmlu-pro: A more robust and challenging multi-task language understanding benchmark. *ArXiv*, abs/2406.01574, 2024.", + "[71] Tianwen Wei, Jian Luan, W. Liu, Shuang Dong, and Bin Quan Wang. Cmath: Can your language model pass chinese elementary school math test? ArXiv, abs/2306.16636, 2023." + ], + "bbox": [ + 143, + 90, + 856, + 912 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "19", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 18 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[72] An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al. Qwen2. 5-math technical report: Toward mathematical expert model via self-improvement. 
arXiv preprint arXiv:2409.12122, 2024.", + "[73] Tian Ye, Zicheng Xu, Yuanzhi Li, and Zeyuan Allen-Zhu. Physics of language models: Part 2.1, grade-school math and the hidden reasoning process, 2024.", + "[74] Mingjia Yin, Chuhan Wu, Yufei Wang, Hao Wang, Wei Guo, Yasheng Wang, Yong Liu, Ruiming Tang, Defu Lian, and Enhong Chen. Entropy law: The story behind data compression and llm performance. arXiv preprint arXiv:2407.06645, 2024.", + "[75] Mengxia Yu, De Wang, Qi Shan, Colorado Reed, and Alvin Wan. The super weight in large language models. ArXiv, abs/2411.07191, 2024.", + "[76] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? In Annual Meeting of the Association for Computational Linguistics, 2019.", + "[77] Biao Zhang and Rico Sennrich. Root mean square layer normalization, 2019." + ], + "bbox": [ + 143, + 90, + 854, + 297 + ], + "page_idx": 19 + }, + { + "type": "page_number", + "text": "20", + "bbox": [ + 488, + 935, + 508, + 946 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "A Contributions and Acknowledgments", + "text_level": 1, + "bbox": [ + 143, + 89, + 491, + 108 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Core Contributors Yichun Yin, Wenyong Huang, Kaikai Song, Yehui Tang, Xueyu Wu, Wei Guo, Peng Guo, Yaoyuan Wang, Xiaojun Meng, Yasheng Wang, Dong Li, Can Chen, Dandan Tu, Yin Li, Fisher Yu, Ruiming Tang, Yunhe Wang", + "bbox": [ + 143, + 119, + 854, + 162 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Contributors Baojun Wang, Bin Wang, Bo Wang, Boxiao Liu, Changzheng Zhang, Duyu Tang, Fei Mi, Hui Jin, Jiansheng Wei, Jiarui Qin, Jinpeng Li, Jun Zhao, Liqun Deng, Lin Li, Minghui Xu, Naifu Zhang, Nianzu Zheng, Qiang Li, Rongju Ruan, Shengjun Cheng, Tianyu Guo, Wei He, Wei Li, Weiwen Liu, Wulong Liu, Xinyi Dai, Yonghan Dong, Yu Pan, Yue Li, Yufei Wang, Yujun Li, Yunsheng Ni, Zhe Liu, Zhenhe Zhang, Zhicheng Liu", + "bbox": [ + 143, + 
167, + 854, + 239 + ], + "page_idx": 20 + }, + { + "type": "page_number", + "text": "21", + "bbox": [ + 488, + 935, + 506, + 946 + ], + "page_idx": 20 + } +] \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07866/db4d652b-f5d0-4008-97fa-5ff3dca4208f_model.json b/data/2025/2504_07xxx/2504.07866/db4d652b-f5d0-4008-97fa-5ff3dca4208f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..4cd9a25f18c2f9b211edbeace743218de9b1f8d6 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/db4d652b-f5d0-4008-97fa-5ff3dca4208f_model.json @@ -0,0 +1,3377 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.147, + 0.04, + 0.231, + 0.065 + ], + "angle": 0, + "content": "Pangu" + }, + { + "type": "header", + "bbox": [ + 0.721, + 0.056, + 0.854, + 0.069 + ], + "angle": 0, + "content": "TECHNICAL REPORT" + }, + { + "type": "title", + "bbox": [ + 0.161, + 0.103, + 0.838, + 0.151 + ], + "angle": 0, + "content": "PANGU ULTRA: PUSHING THE LIMITS OF DENSE LARGE LANGUAGE MODELS ON ASCEND NPUS" + }, + { + "type": "text", + "bbox": [ + 0.427, + 0.18, + 0.572, + 0.194 + ], + "angle": 0, + "content": "Pangu Team, Huawei" + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.206, + 0.584, + 0.221 + ], + "angle": 0, + "content": "PanguTech@huawei.com" + }, + { + "type": "title", + "bbox": [ + 0.45, + 0.265, + 0.548, + 0.279 + ], + "angle": 0, + "content": "ABSTRACT" + }, + { + "type": "text", + "bbox": [ + 0.2, + 0.293, + 0.797, + 0.502 + ], + "angle": 0, + "content": "We present Pangu Ultra, a Large Language Model (LLM) with 135 billion parameters and dense Transformer modules trained on Ascend Neural Processing Units (NPUs). Although the field of LLM has been witnessing unprecedented advances in pushing the scale and capability of LLM in recent years, training such a large-scale model still involves significant optimization and system challenges. 
To stabilize the training process, we propose depth-scaled sandwich normalization, which effectively eliminates loss spikes during the training process of deep models. We pre-train our model on 13.2 trillion diverse and high-quality tokens and further enhance its reasoning capabilities during post-training. To perform such large-scale training efficiently, we utilize 8,192 Ascend NPUs with a series of system optimizations. Evaluations on multiple diverse benchmarks indicate that Pangu Ultra significantly advances the state-of-the-art capabilities of dense LLMs such as Llama 405B and Mistral Large 2, and even achieves competitive results with DeepSeek-R1, whose sparse model structure contains many more parameters. Our exploration demonstrates that Ascend NPUs are capable of efficiently and effectively training dense models with more than 100 billion parameters. Our model and system will be available for our commercial customers." + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.521, + 0.285, + 0.537 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.553, + 0.856, + 0.663 + ], + "angle": 0, + "content": "Large Language Models (LLMs) have transformed the landscape and our understanding of Artificial Intelligence. Their remarkable capabilities are enabling more and more AI applications, bringing numerous commercial opportunities. Unsurprisingly, teams are racing to push the scaling law to create models with more and more parameters. Although the Transformer [68] structure is a popular choice for large models, it is still debatable whether the models should be sparse or dense. With more than 100 billion parameters, sparse architectures powered by Mixture of Experts (MoE), such as DeepSeek [46, 19], have demonstrated surreal human-like language and thinking abilities [36], which makes sparse models a popular choice when pushing the limits of LLMs."
+ }, + { + "type": "text", + "bbox": [ + 0.141, + 0.67, + 0.855, + 0.781 + ], + "angle": 0, + "content": "At the same time, dense models, such as the Qwen [11, 72], Llama [25], and Gemma [67] series, are currently popular among models with fewer than 100 billion parameters thanks to their strong performance in specific skills and ease of deployment. The parameters in dense models are usually easier to optimize, while the dynamic components in sparse models usually require additional heuristics for stable training. In addition, the dense model structures at inference time make it easier to optimize system performance due to deterministic parameter usage. In this study, we aim to further explore the potential of dense models at large scales and show that the performance of dense models can be on par with state-of-the-art MoE models on diverse tasks." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.787, + 0.856, + 0.913 + ], + "angle": 0, + "content": "The numbers of model parameters and layers are two crucial dimensions to release the full potential of dense models. While model parameter count is critical for model performance and plays a central role in scaling laws [38], recent studies [73, 50] suggest that model depth has a significant impact on reasoning capabilities. However, exploring the limits of those two aspects poses significant challenges. Deeper models usually introduce unstable training, manifested as spikes in training loss curves. Experimental observations suggest that those spikes can knock our model out of the ideal parameter landscape and cause irreparable damage to the training process. Meanwhile, training hundreds of billions of parameters in dense models requires orchestrating thousands of AI processors, which poses significant system efficiency challenges."
+ }, + { + "type": "aside_text", + "bbox": [ + 0.023, + 0.277, + 0.061, + 0.718 + ], + "angle": 270, + "content": "arXiv:2504.07866v2 [cs.CL] 11 Apr 2025" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.505, + 0.948 + ], + "angle": 0, + "content": "1" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.144, + 0.091, + 0.856, + 0.244 + ], + "angle": 0, + "content": "For our exploration, we introduce Pangu Ultra, a dense Transformer architecture with 135 billion parameters and 94 layers. The model setup is at the forefront scale of the top-performing dense models [11, 72, 25, 67]. Regarding the challenges of training deep models, we hypothesize that the loss spikes are due to gradient fluctuations, which in turn hinder convergence rates and may lead to training divergence. Therefore, we propose two techniques, the depth-scaled sandwich norm and tiny initialization, both of which are designed to maintain stable gradient norms. Specifically, we first replace pre-layer norm [47] with the sandwich norm [20] and scale the initialization values in the post-layer normalization based on the model's depth. This depth-based adjustment helps control the range of gradient fluctuations effectively. In addition, we scale the standard deviation of weight initialization according to the model's width and depth, leading to tiny initialization. These two techniques lead to more stable gradients throughout the training process, eliminating loss spikes during the training of Pangu Ultra, and improving overall model performance." + }, + { + "type": "text", + "bbox": [ + 0.144, + 0.25, + 0.855, + 0.361 + ], + "angle": 0, + "content": "In practice, we pre-train Pangu Ultra on 13.2 trillion tokens of our built corpus. In the pre-training stage, we use three phases of data, each with a distinct data recipe. 
The design principles behind the three phases are first to help the model develop knowledge and linguistic capabilities, then to directly equip it with reasoning ability, and finally to boost its ability to actively learn to reason. The model context window is gradually extended from 4K to 128K. In the post-training stage, we begin with applying efficient supervised fine-tuning (SFT) for a cold start, utilizing a carefully curated set of instruction data. Following this, Pangu Ultra undergoes further optimization through Reinforcement Learning (RL). The overall training of Pangu Ultra is stable in this process." + }, + { + "type": "text", + "bbox": [ + 0.144, + 0.367, + 0.856, + 0.575 + ], + "angle": 0, + "content": "To handle large-scale model training of more than 100 billion parameters, we utilize a large-scale computing cluster consisting of 8,192 Ascend NPUs and employ a series of system optimizations to improve the system efficiency. The primary challenge is minimizing pipeline bubbles [29] at large scales, which arise due to batch size constraints [35]. We take advantage of the typical 4 types of parallelism on our Ascend cluster, that is, Data Parallelism (DP), Tensor Parallelism (TP) [63], Sequence Parallelism [39] and Pipeline Parallelism (PP) [30, 51]. As the training cluster scales up, the mini-batch size allocated to each DP decreases, leading to an increased pipeline bubble ratio. To mitigate this issue, we employ additional virtual pipeline (VPP) scheduling [52] with fine-grained tuning to ensure load balancing and reduce the PP bubble ratio from \\(30.45\\%\\) to \\(6.8\\%\\). The second challenge is to achieve high training efficiency for long sequences. Both attention mask generation and self-attention computation are time- and memory-intensive, particularly for long contexts. 
We utilize an NPU Fusion Attention (NFA) operator [4, 18, 17] tailored for the Ascend NPUs, which supports reset attention mask scenarios and eliminates the need to construct the attention mask before calling the NFA, thus improving computational efficiency and reducing memory cost. With several fine-grained system optimizations, we achieve a Model FLOPs Utilization (MFU) [14] of over \\(50\\%\\) when training Pangu Ultra on 8,192 Ascend NPUs." + }, + { + "type": "text", + "bbox": [ + 0.145, + 0.581, + 0.855, + 0.651 + ], + "angle": 0, + "content": "On public evaluation benchmarks, Pangu Ultra outperforms existing dense LLMs including Llama 405B and Mistral Large 2 123B on almost all major language tasks, and achieves competitive results with sparse models consisting of more than 500 billion parameters. These results indicate that the potential of dense models is still promising to explore. Pangu Ultra also demonstrates that the Ascend NPUs are suitable for exploring the full capabilities of large-scale dense language models." + }, + { + "type": "title", + "bbox": [ + 0.145, + 0.673, + 0.341, + 0.689 + ], + "angle": 0, + "content": "2 Model Architecture" + }, + { + "type": "text", + "bbox": [ + 0.145, + 0.706, + 0.855, + 0.762 + ], + "angle": 0, + "content": "The basic architecture of Pangu Ultra is similar to Llama 3 [25]. It has 135 billion parameters with a hidden dimension of 12,288, a SwiGLU [60] feed-forward network (FFN) intermediate size of 28,672, and 94 layers. The attention blocks in Pangu Ultra leverage Group Query Attention (GQA) to reduce KV-cache size by incorporating 96 query heads and 8 KV heads." + }, + { + "type": "text", + "bbox": [ + 0.145, + 0.768, + 0.855, + 0.824 + ], + "angle": 0, + "content": "There are two crucial differences that address the fundamental challenges of training stability and convergence in large dense LLMs. 
We propose Depth-Scaled Sandwich-Norm to replace the layer normalization and TinyInit for parameter initialization. By integrating these techniques, Pangu Ultra achieves substantial improvements over previous dense models." + }, + { + "type": "title", + "bbox": [ + 0.145, + 0.843, + 0.395, + 0.858 + ], + "angle": 0, + "content": "2.1 Depth-Scaled Sandwich-Norm" + }, + { + "type": "text", + "bbox": [ + 0.145, + 0.87, + 0.855, + 0.912 + ], + "angle": 0, + "content": "Large-scale dense models typically adopt deeper architectures [22], whereas MoE models usually scale in width [19]. However, increased depth introduces greater challenges in maintaining training stability. Given the prohibitive cost of pre-training, stable training of large dense LLMs becomes paramount. Pre-Layer" + }, + { + "type": "page_number", + "bbox": [ + 0.495, + 0.937, + 0.504, + 0.947 + ], + "angle": 0, + "content": "2" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.141, + 0.092, + 0.855, + 0.135 + ], + "angle": 0, + "content": "Normalization (Pre-LN) has been found to make back-propagation more efficient for deep Transformers [69], leading to its widespread adoption in Transformer-based large language model (LLM) architectures [22, 11, 19]." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.14, + 0.855, + 0.211 + ], + "angle": 0, + "content": "However, in models employing the pre-LN structure, the fluctuating output scale of each sub-layer can easily lead to training instability [66]. To address this issue, sandwich-norm [20] applies a layer normalization to each sub-layer's output prior to the residual connection. While the sandwich-norm maintains the scale stability of individual sub-layer outputs, the progressive accumulation of output norms via residual connections across multiple layers may nevertheless lead to training instability."
+ }, + { + "type": "text", + "bbox": [ + 0.141, + 0.216, + 0.856, + 0.288 + ], + "angle": 0, + "content": "To mitigate this, we present the depth-scaled sandwich norm, which integrates the sandwich norm with a depth-scaled initialization scheme. The layer normalization regulates layer-wise output magnitudes through trainable gamma parameters, which are initialized with values scaled proportionally to the inverse of network depth. Figure 1 illustrates the differences between the depth-scaled sandwich-norm and pre-norm architectures. The formula of depth-scaled sandwich-norm is" + }, + { + "type": "equation", + "bbox": [ + 0.308, + 0.304, + 0.855, + 0.338 + ], + "angle": 0, + "content": "\\[\n\\mathbf{h} \\leftarrow \\mathbf{h} + \\operatorname{Norm}\\left(\\gamma_{\\mathrm{attn}}, \\operatorname{ATTN}(\\operatorname{Norm}(\\mathbf{h}))\\right), \\quad \\gamma_{\\mathrm{attn}} = \\frac{c_{\\mathrm{attn}}}{\\sqrt{L}}, \\tag{1}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.312, + 0.334, + 0.68, + 0.364 + ], + "angle": 0, + "content": "\\[\n\\mathbf{h} \\leftarrow \\mathbf{h} + \\operatorname{Norm}\\left(\\gamma_{\\mathrm{mlp}}, \\operatorname{MLP}(\\operatorname{Norm}(\\mathbf{h}))\\right), \\quad \\gamma_{\\mathrm{mlp}} = \\frac{c_{\\mathrm{mlp}}}{\\sqrt{L}},\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.376, + 0.856, + 0.418 + ], + "angle": 0, + "content": "where \\(L\\) is the number of layers, \\(c_{\\mathrm{attn}}\\) and \\(c_{\\mathrm{mlp}}\\) are set as the initial output standard deviations of the attention layer and feed-forward network (FFN) layer, respectively. For Pangu Ultra, we set \\(c_{\\mathrm{attn}}\\) to 0.283 and \\(c_{\\mathrm{mlp}}\\) to 0.432."
+ }, + { + "type": "image", + "bbox": [ + 0.246, + 0.439, + 0.401, + 0.613 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.472, + 0.439, + 0.776, + 0.612 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.141, + 0.624, + 0.858, + 0.681 + ], + "angle": 0, + "content": "Figure 1: Structure comparison between Pre-Layer Norm (Pre-LN) and Depth-Scaled Sandwich-Norm (DSSN). DSSN applies normalization layers both before and after the attention and FFN blocks, while Pre-LN only utilizes one normalization layer. DSSN also employs a depth-scaled initialization schema, which is not present in the original sandwich norm." + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.711, + 0.323, + 0.725 + ], + "angle": 0, + "content": "2.2 Model Initialization" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.737, + 0.856, + 0.809 + ], + "angle": 0, + "content": "Existing works [53] observe that model initialization plays a crucial role in training stability and performance. Transformer-based LLMs widely adopt small initialization [53], which initializes all the weights with a normal distribution of standard deviation \\(\\sqrt{\\frac{2}{5d}}\\), where \\(d\\) is the hidden dimension. It is also common practice to scale the weights of residual layers at initialization by a factor of \\(1 / \\sqrt{L}\\) [57], where \\(L\\) is the number of layers." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.814, + 0.856, + 0.879 + ], + "angle": 0, + "content": "Our findings suggest that scaling initialization by both model depth and width, using \\(\\sqrt{\\frac{1}{2dL}}\\), leads to faster loss convergence and improved performance on downstream tasks. We call this initialization method TinyInit. We hypothesize that TinyInit achieves more consistent parameter scales across the model, which may facilitate optimization and convergence."
+ }, + { + "type": "text", + "bbox": [ + 0.142, + 0.884, + 0.856, + 0.914 + ], + "angle": 0, + "content": "Research [66] indicates that embedding layers require different initialization strategies compared to other layers. Specifically, maintaining the standard deviation of embedding weights close to 1 may enhance training" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.505, + 0.948 + ], + "angle": 0, + "content": "3" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.142, + 0.092, + 0.855, + 0.123 + ], + "angle": 0, + "content": "stability. Our experimental results indicate that initializing with a standard deviation of 0.5 achieves good model performance." + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.137, + 0.254, + 0.151 + ], + "angle": 0, + "content": "2.3 Tokenizer" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.163, + 0.855, + 0.248 + ], + "angle": 0, + "content": "The design of the tokenizer significantly impacts model performance. An optimal vocabulary balances domain coverage (handling diverse tasks such as text, math, and code) with efficiency (encoding data with fewer tokens). Common methods such as Byte-Pair Encoding (BPE) [62] and SentencePiece [40] build vocabularies by directly computing word frequencies across the entire training dataset. However, this approach suffers from domain imbalance, as common domains such as general text dominate the vocabulary, while specialized domains such as math and code remain underrepresented due to their limited data volume." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.252, + 0.856, + 0.324 + ], + "angle": 0, + "content": "Pangu Ultra adopts a domain-aware vocabulary strategy. We perform independent frequency analyses across multiple domains including general Chinese, general English, code, and mathematics, generating distinct domain-specific vocabularies. 
These vocabularies are then merged and de-duplicated to form a unified vocabulary of 153,376 unique tokens, maintaining balanced representation across domains while preserving overall compression efficiency. Table 1 summarizes the detailed token distribution across different domains." + }, + { + "type": "table_caption", + "bbox": [ + 0.273, + 0.344, + 0.724, + 0.358 + ], + "angle": 0, + "content": "Table 1: Token distribution in the unified vocabulary of Pangu Ultra." + }, + { + "type": "table", + "bbox": [ + 0.296, + 0.359, + 0.704, + 0.508 + ], + "angle": 0, + "content": "
<table><tr><td>Domain</td><td>Number of Tokens</td><td>Percentage (%)</td></tr><tr><td>English</td><td>68,017</td><td>44.35</td></tr><tr><td>Chinese</td><td>41,053</td><td>26.77</td></tr><tr><td>Other</td><td>30,573</td><td>19.93</td></tr><tr><td>Latin-based languages</td><td>4,507</td><td>2.94</td></tr><tr><td>Arabic</td><td>2,755</td><td>1.80</td></tr><tr><td>Korean</td><td>2,733</td><td>1.78</td></tr><tr><td>Mathematics</td><td>2,139</td><td>1.39</td></tr><tr><td>Japanese</td><td>1,599</td><td>1.04</td></tr><tr><td>Total</td><td>153,376</td><td>100.00</td></tr></table>
" + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.543, + 0.308, + 0.56 + ], + "angle": 0, + "content": "3 Model Training" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.574, + 0.858, + 0.631 + ], + "angle": 0, + "content": "In this section, we present our training pipeline, which is similar to training state-of-the-art language models, e.g., DeepSeek-V3 [19] and Llama 3 [22]. The training process consists of three main stages: pre-training, long context extension, and post-training. Each stage has specific training strategies and data construction methods to gradually enhance the model capabilities." + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.647, + 0.313, + 0.663 + ], + "angle": 0, + "content": "3.1 Pre-training Stage" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.672, + 0.856, + 0.716 + ], + "angle": 0, + "content": "We first introduce the data construction in the pre-training of Pangu Ultra, followed by the details of data verification. Then we elaborate the practical approach for the long context extension. The detailed pre-training hyper-parameters are finally presented." + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.73, + 0.326, + 0.743 + ], + "angle": 0, + "content": "3.1.1 Data Construction" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.753, + 0.856, + 0.837 + ], + "angle": 0, + "content": "The pre-training corpus of Pangu Ultra contains high-quality and diverse 13.2T tokens produced by our tokenizer, as stated in Section 2.3. Table 2 shows the pre-training process is structured into three sequential phases: the general phase, the reasoning phase, and the annealing phase. These phases are designed to progressively develop general knowledge and linguistic capabilities, enhance reasoning skills, and further refine knowledge and behavior, respectively. The amount of data used in each phase is 12T, including 7.4T and 4.6T data in two distinct subphases, 0.8T, and 0.4T tokens." 
+ }, + { + "type": "text", + "bbox": [ + 0.141, + 0.843, + 0.856, + 0.913 + ], + "angle": 0, + "content": "In the initial general training phase, we utilize a corpus focused on developing broad linguistic capabilities and general knowledge. This stage primarily consists of English and Chinese data collected from a diverse range of sources, including web pages, books, encyclopedias, etc. Data from multilingual and various industrial domains is also incorporated. Based on our data quality assessment in Section 3.1.2, we prefer to use higher-quality data in the second sub-phase than in the first." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.937, + 0.505, + 0.948 + ], + "angle": 0, + "content": "4" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.337, + 0.099, + 0.658, + 0.113 + ], + "angle": 0, + "content": "Table 2: Data recipe of Pangu Ultra pre-training." + }, + { + "type": "table", + "bbox": [ + 0.317, + 0.113, + 0.678, + 0.217 + ], + "angle": 0, + "content": "
<table><tr><td>Dataset</td><td>General</td><td>Reasoning</td><td>Annealing</td></tr><tr><td>General English</td><td>54%</td><td>14%</td><td>21%</td></tr><tr><td>General Chinese</td><td>13%</td><td>6%</td><td>20%</td></tr><tr><td>Multi-lingual</td><td>8%</td><td>4%</td><td>3%</td></tr><tr><td>Instruction</td><td>2%</td><td>11%</td><td>20%</td></tr><tr><td>Math</td><td>6%</td><td>28%</td><td>18%</td></tr><tr><td>Code</td><td>17%</td><td>37%</td><td>18%</td></tr></table>
" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.25, + 0.855, + 0.307 + ], + "angle": 0, + "content": "In the second reasoning phase, we increase the proportion of high-quality and diverse mathematical and coding data—raising it to over \\(60\\%\\) of the corpus to enhance the reasoning capabilities of Pangu Ultra. The coding data includes both pure code and mixed text-code samples. The math data also involves a lot of English and Chinese texts. Moreover, LLM-generated synthetic data is widely incorporated to enrich the corpus." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.312, + 0.855, + 0.382 + ], + "angle": 0, + "content": "The third annealing phrase is designed to help the model consolidate and effectively apply the knowledge and reasoning skills acquired in the previous stages. Therefore, we place greater emphasis on instruction data, which accounts for approximately \\(20\\%\\) of the corpus. We curate in-house question banks covering a wide range of topics and construct both short and long chain-of-thought (CoT) responses. These reasoning paths are carefully refined to ensure clarity and logical coherence." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.388, + 0.856, + 0.445 + ], + "angle": 0, + "content": "Overall, the pre-training data for Pangu Ultra is carefully designed to ensure high quality, diversity, and minimal redundancy. We assign quality and difficulty labels to the data and adopt a curriculum-based sampling strategy for the reasoning data across all three phases—progressing from simpler examples to more complex ones throughout the training cycle." + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.467, + 0.371, + 0.483 + ], + "angle": 0, + "content": "3.1.2 Data Quality Assessment" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.495, + 0.856, + 0.525 + ], + "angle": 0, + "content": "Data quality assessment plays a crucial role in enhancing the overall quality of the data. 
The training of Pangu Ultra employs both rule-based heuristics and model-based evaluation to enhance data quality." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.529, + 0.856, + 0.613 + ], + "angle": 0, + "content": "For model-based quality assessment, we leverage the Pangu series as the base model. To better align quality evaluation with human value judgments, we fine-tune the model using a manually annotated dataset. The fine-tuned evaluator is then applied to a large-scale pre-training corpus exceeding 10T tokens. Data samples are scored across multiple dimensions, including cleanliness, fluency, educational value, and richness. These annotated scores are then used in a prioritized sampling strategy, where higher-quality samples are assigned higher sampling probabilities." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.619, + 0.856, + 0.675 + ], + "angle": 0, + "content": "To validate the effectiveness of our data quality assessment, we conducted an ablation study using a proxy model with 2.6 billion parameters. Empirical results show that, to achieve comparable performance, the model trained on low-scoring data required \\(1.6 \\times\\) more tokens than the one trained on high-scoring data. Therefore, high data quality is important for improving training efficiency." + }, + { + "type": "title", + "bbox": [ + 0.142, + 0.698, + 0.367, + 0.713 + ], + "angle": 0, + "content": "3.1.3 Pre-training Parameters" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.725, + 0.856, + 0.782 + ], + "angle": 0, + "content": "Pangu Ultra is trained using the AdamW optimizer [48] with a weight decay of 0.1 and an epsilon of \\(1 \\times 10^{-8}\\). The momentum parameters are set to \\(\\beta_{1} = 0.9\\) and \\(\\beta_{2} = 0.95\\). The gradient clipping norm is set to 1.0. 
To improve the training stability and overall performance, the pre-training of Pangu Ultra is organized into the following phases:" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.787, + 0.856, + 0.843 + ], + "angle": 0, + "content": "0T-7.4T tokens The sequence length is set to 4K (RoPE base \\(= 1 \\times 10^{4}\\)). The batch size increases from 1,024 to 1,536 (at 1.2T) and 2,048 (at 1.9T). The increased batch size improves training efficiency and throughput. The learning rate follows a cosine decay from \\(1 \\times 10^{-4}\\) to \\(1 \\times 10^{-5}\\) with 4,000 warmup steps to ensure stable early training." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.849, + 0.854, + 0.877 + ], + "angle": 0, + "content": "7.4T-12.0T tokens The sequence length remains at 4K with a batch size of 2,048. The learning rate is fixed at \\(1 \\times 10^{-5}\\) in this phase." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.884, + 0.854, + 0.913 + ], + "angle": 0, + "content": "12.0T-12.8T tokens The sequence length increases to 8K (RoPE base \\(= 1 \\times 10^{5}\\)). The batch size is reduced to 1,536. The learning rate decays from \\(1 \\times 10^{-5}\\) to \\(7.5 \\times 10^{-6}\\) using cosine scheduling." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.505, + 0.948 + ], + "angle": 0, + "content": "5" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.143, + 0.092, + 0.355, + 0.108 + ], + "angle": 0, + "content": "3.2 Long Context Extension" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.123, + 0.855, + 0.179 + ], + "angle": 0, + "content": "The ability of LLMs to understand long context inputs is critical for long-thinking processes and practical applications. In the final stages of pre-training, Pangu Ultra is trained on long sequence data to support a maximum context length of 128K. The training consists of two progressive phases: the first phase expands the context length to 32K, and the second phase further expands it to 128K."
+ }, + { + "type": "text", + "bbox": [ + 0.141, + 0.185, + 0.856, + 0.31 + ], + "angle": 0, + "content": "Rotary Position Embedding (RoPE) [64] is the core module for supporting ultra-long input sequences. Existing open-source LLMs typically extend context length by either increasing the base frequency in RoPE [64, 32] or adopting methods such as YaRN [55, 22, 19]. Our findings show that both methods perform similarly well if the hyper-parameters are correctly chosen, and we adopt the increased base frequency method in Pangu Ultra. To determine the base frequency in RoPE for long-context extension, we evaluate the offline performance of \"Needle In A Haystack\" (NIAH) with different base frequencies at the target sequence length, and select the one with the best result. This ensures a relatively low initial loss in long-context training. In practice, the selected base frequency for \\(32\\mathrm{K}\\) is \\(1.6\\times 10^{6}\\), and for \\(128\\mathrm{K}\\) is \\(2.56\\times 10^{7}\\). Detailed hyper-parameters of Pangu Ultra long context training are summarized below:" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.315, + 0.855, + 0.346 + ], + "angle": 0, + "content": "8K to 32K phase The sequence length is expanded to 32K (RoPE base \\(= 1.6 \\times 10^{6}\\)). The batch size is 384 with a learning rate of \\(7.5 \\times 10^{-6}\\), matching the final learning rate from the preceding pre-training stage." + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.35, + 0.855, + 0.381 + ], + "angle": 0, + "content": "32K to 128K phase The sequence length is further expanded to \\(128\\mathrm{K}\\) (RoPE base \\(= 2.56 \\times 10^{7}\\)). The batch size is reduced to 96. The learning rate remains \\(7.5 \\times 10^{-6}\\)."
+ }, + { + "type": "title", + "bbox": [ + 0.143, + 0.41, + 0.354, + 0.426 + ], + "angle": 0, + "content": "3.3 Post-training Alignment" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.441, + 0.856, + 0.485 + ], + "angle": 0, + "content": "In the post-training stage, Pangu Ultra is aligned with human preferences through Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL). This stage focuses on constructing high-quality, diverse instruction data and designing scalable, efficient training strategies." + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.513, + 0.327, + 0.528 + ], + "angle": 0, + "content": "3.3.1 Post-training Data" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.542, + 0.856, + 0.627 + ], + "angle": 0, + "content": "In constructing post-training data, we emphasize the data quality, diversity, and complexity. The data pool is curated from a wide range of domains and task types, including general question answering, AI-generated content (AIGC), text classification and analysis, programming, mathematics, logical reasoning, and tool usage. These tasks cover application areas such as finance, healthcare, and public services. Data sources span open-source instruction datasets, real-world industrial queries, and synthetic problems derived from the pre-training corpus." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.631, + 0.856, + 0.689 + ], + "angle": 0, + "content": "To promote data diversity, data samples are selected along two orthogonal dimensions, guided by the entropy law [74]: domain and task type. Hierarchical tagging models with varying levels of granularity are used to support balanced data sampling. Data quality is managed through a combination of rule-based validation and model-based validation, which helps eliminate low-quality or ambiguous samples." 
+ }, + { + "type": "text", + "bbox": [ + 0.141, + 0.693, + 0.856, + 0.738 + ], + "angle": 0, + "content": "To better stimulate the reasoning capabilities of Pangu Ultra, a large portion of the post-training data, approximately six-sevenths, consists of reasoning tasks such as mathematics, coding, and logic. The post-training data covers a range of complexities, with a focus on moderately to highly challenging tasks." + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.765, + 0.352, + 0.781 + ], + "angle": 0, + "content": "3.3.2 Post-training Strategy" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.794, + 0.856, + 0.838 + ], + "angle": 0, + "content": "In the post-training stage, Pangu Ultra was first trained with SFT to establish preliminary instruction-following capabilities. Following SFT, we apply RL with outcome-based reward signals to further enhance reasoning, alignment, and instruction-following abilities of Pangu Ultra." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.843, + 0.856, + 0.913 + ], + "angle": 0, + "content": "We implement a latency-tolerant reinforcement learning framework optimized for the Ascend infrastructure, which will be detailed in a future report. The framework enables efficient large-scale policy optimization on Ascend. To guide the RL process, we implement a hybrid reward system that provides task-specific feedback for mathematics, coding, and general problem-solving. This hybrid reward system combines deterministic reward signals and model-based evaluations to facilitate stable and efficient policy optimization." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.937, + 0.506, + 0.948 + ], + "angle": 0, + "content": "6" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.143, + 0.09, + 0.315, + 0.108 + ], + "angle": 0, + "content": "4 Training System" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.123, + 0.858, + 0.195 + ], + "angle": 0, + "content": "Training our Pangu Ultra with 135B parameters on 13.2 trillion tokens necessitates ensuring training stability and efficiency on a large-scale computing cluster. In this section, we elaborate on the details of our training system from two important perspectives: parallelization strategies and system-level optimization techniques, in Section 4.2 and Section 4.3. Overall, we achieve over \\(52\\%\\) Model FLOPs Utilization (MFU) when training Pangu Ultra on 8,192 Ascend NPUs." + }, + { + "type": "title", + "bbox": [ + 0.142, + 0.211, + 0.308, + 0.227 + ], + "angle": 0, + "content": "4.1 Computing Setup" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.238, + 0.859, + 0.309 + ], + "angle": 0, + "content": "A computing cluster with 8,192 Ascend Neural Processing Units (NPUs) [5, 6] is deployed to train Pangu Ultra. Each node in the cluster houses 8 NPUs, interconnected via Huawei Cache Coherence System (HCCS) using a full-mesh topology, and each device is equipped with 64 GB of memory. Inter-node communication is facilitated through RDMA over Converged Ethernet (RoCE) fabric, leveraging 200 Gbps interconnects for communication between NPUs across different nodes."
+ }, + { + "type": "title", + "bbox": [ + 0.142, + 0.327, + 0.465, + 0.343 + ], + "angle": 0, + "content": "4.2 Parallelism Strategies for Model Scaling" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.353, + 0.856, + 0.58 + ], + "angle": 0, + "content": "In order to scale model training\\(^1\\), we leverage a combination of different parallelism strategies to distribute the model across multiple NPUs, including Data Parallelism (DP) [43], Tensor Parallelism (TP) [63], Sequence Parallelism (SP) [39], and Pipeline Parallelism (PP) [30, 51]. For Pangu Ultra, 128-way DP with ZeRO [58] is performed to reduce the memory cost of model parameters and the associated optimizer states. 8-way TP is applied to leverage the high intra-node bandwidth for efficient activation transfer, while 8-way PP is adopted to utilize inter-node connections, since it only requires transmitting activations at the partition boundaries. However, as mentioned in existing studies [35, 30, 51, 56], pipeline parallelism encounters severe PP bubbles when the training cluster scales up, primarily due to batch size constraints [35]. For one-forward-one-backward (1F1B) PP scheduling, the bubble ratio is defined as \\(\\frac{p - 1}{p - 1 + n}\\), where \\(p\\) represents the number of pipeline stages and \\(n\\) denotes the number of micro batches for every DP. The ratio represents the idle time of accelerators, as shown in Figure 2. A large-scale training cluster increases the number of DPs, which in turn reduces the number of micro batches assigned to each DP due to batch size constraints, leading to a significant increase in the bubble ratio. Therefore, minimizing the bubble ratio is crucial for improving system efficiency. Under such circumstances, we employ interleaved pipeline-parallel scheduling with 6-way virtual PP stages on each device [52] and manage to reduce the bubble ratio from \\(30.45\\%\\) to \\(6.8\\%\\).
Through careful tuning of load balancing across PP and VPP stages, we are able to achieve approximately \\(43\\%\\) MFU on an 8,192 NPU cluster as a baseline." + }, + { + "type": "image", + "bbox": [ + 0.214, + 0.596, + 0.778, + 0.832 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.239, + 0.843, + 0.757, + 0.86 + ], + "angle": 0, + "content": "Figure 2: Pipeline parallelism and the interleaved pipeline-parallel scheduling." + }, + { + "type": "page_footnote", + "bbox": [ + 0.142, + 0.885, + 0.855, + 0.914 + ], + "angle": 0, + "content": "1The training of Pangu Ultra is supported by the MindSpeed [8] and Megatron [7, 63] frameworks, which provide comprehensive parallel strategies and system optimization methods." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.506, + 0.948 + ], + "angle": 0, + "content": "7" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.143, + 0.092, + 0.331, + 0.108 + ], + "angle": 0, + "content": "4.3 System Optimization" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.118, + 0.858, + 0.203 + ], + "angle": 0, + "content": "Based on the optimizations outlined in Section 4.2 that achieved \\(43\\%\\) MFU, additional system-level enhancements are implemented to push training efficiency to new heights. Through a combination of kernel fusions, context parallelism via subsequence partitioning, data caching and sharing mechanisms, and other refinements, Pangu Ultra benefits from a significant improvement in training efficiency. These comprehensive optimizations enable the system to achieve over \\(52\\%\\) MFU, an absolute improvement of 9 percentage points over the baseline configuration mentioned in Section 4.2."
+ }, + { + "type": "image", + "bbox": [ + 0.254, + 0.223, + 0.744, + 0.336 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.437, + 0.337, + 0.618, + 0.351 + ], + "angle": 0, + "content": "(b) The MC2 implementation" + }, + { + "type": "image_caption", + "bbox": [ + 0.141, + 0.364, + 0.854, + 0.394 + ], + "angle": 0, + "content": "Figure 3: A Comparison of the default transformer computation and the MC2 method. Note that in actual training, communication and computation tasks are fused into a single kernel in MC2." + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.424, + 0.295, + 0.437 + ], + "angle": 0, + "content": "4.3.1 Kernel Fusion" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.45, + 0.856, + 0.506 + ], + "angle": 0, + "content": "Kernel fusion is widely adopted in LLM training to enhance efficiency. It combines multiple operations into a single kernel, reducing the number of data accesses to global memory [17]. During the training phase of Pangu Ultra, key operators are fused, resulting in significant improvements in hardware utilization and overall training efficiency." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.512, + 0.856, + 0.637 + ], + "angle": 0, + "content": "MC2 - Merged Compute and Communication Tensor parallelism, when combined with sequence parallelism, introduces All-Gather (AG) and Reduce-Scatter (RS) communication operations for exchanging input and output activations across distributed devices. This approach exhibits a direct dependency between matrix multiplication (MatMul) and AG/RS communications, which fundamentally constrains the overlapping of TP communication with computational workflows. The MC2 is implemented [2, 3] to tackle this challenge by fusing MatMul computations with communication operations. It decomposes large computation and communication tasks into fine-grained subtasks and employs pipelined execution to maximize overlap between communication and computation. 
Thus, MC2 significantly reduces communication latency and improves hardware utilization (Figure 3)." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.643, + 0.858, + 0.727 + ], + "angle": 0, + "content": "NPU Fusion Attention Training LLMs on long sequences suffers from memory and computational requirements of self-attention that grow quadratically with sequence length. To address this challenge, Flash Attention (FA) has emerged as a standard technique in LLM training owing to its superior performance [18, 17]. Pangu Ultra leverages a self-attention fusion operator, called NPU Fusion Attention (NFA) [9], which is specifically optimized for Ascend NPUs, offering system-level improvements across a wide range of self-attention computation scenarios." + }, + { + "type": "image", + "bbox": [ + 0.364, + 0.746, + 0.645, + 0.887 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.257, + 0.894, + 0.741, + 0.91 + ], + "angle": 0, + "content": "Figure 4: Examples of attention mask compression for the NFA operator." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.504, + 0.948 + ], + "angle": 0, + "content": "8" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.141, + 0.092, + 0.855, + 0.218 + ], + "angle": 0, + "content": "It is worth mentioning that Pangu Ultra uses a reset attention mask strategy to prevent self-attention between different documents within a sequence. This requires calculating the corresponding attention mask for every sequence, leading to significant memory and computational overhead. To mitigate the time and memory requirements of generating attention masks, the NFA operator employs a mask compression optimization. As shown in Figure 4, NFA utilizes a \\(2048 \\times 2048\\) causal mask as a template to construct the computational mask within the fusion attention operator.
For every iteration, Pangu Ultra retrieves the actual sequence length based on the position of the end-of-document (eod) token, which is then provided as input to the NFA operator to accelerate the computation of self-attention. The detailed usage of NFA is provided in the Ascend documentation [9]." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.223, + 0.855, + 0.294 + ], + "angle": 0, + "content": "Other Kernel Fusions for Efficiency In addition to MC2 and NPU-optimized fused attention, we also integrate a series of kernel fusion optimizations within key components such as RMSNorm [77], SwiGLU [60], and rotary positional embeddings (RoPE) [64], as well as critical processes including gradient accumulation and PP send/receive communications. These fusion operators are designed to reduce kernel launch and memory access overheads, while maintaining high numerical precision and enhancing overall training performance." + }, + { + "type": "image_caption", + "bbox": [ + 0.254, + 0.329, + 0.356, + 0.342 + ], + "angle": 0, + "content": "Causal Masking" + }, + { + "type": "image", + "bbox": [ + 0.144, + 0.351, + 0.307, + 0.452 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.457, + 0.241, + 0.471 + ], + "angle": 0, + "content": "(a) Original" + }, + { + "type": "image", + "bbox": [ + 0.325, + 0.351, + 0.489, + 0.452 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.354, + 0.457, + 0.434, + 0.471 + ], + "angle": 0, + "content": "(b) Megatron" + }, + { + "type": "image_caption", + "bbox": [ + 0.598, + 0.329, + 0.765, + 0.34 + ], + "angle": 0, + "content": "Reset of Attention Mask" + }, + { + "type": "image", + "bbox": [ + 0.509, + 0.351, + 0.674, + 0.452 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.544, + 0.457, + 0.622, + 0.471 + ], + "angle": 0, + "content": "(c) Megatron" + }, + { + "type": "image", + "bbox": [ + 0.691, + 0.352, + 0.849, 
+ 0.453 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.736, + 0.457, + 0.786, + 0.471 + ], + "angle": 0, + "content": "(d) Ours" + }, + { + "type": "image_caption", + "bbox": [ + 0.198, + 0.485, + 0.796, + 0.5 + ], + "angle": 0, + "content": "Figure 5: Examples of the mechanism of sub-sequence partitioning for context parallelism." + }, + { + "type": "title", + "bbox": [ + 0.142, + 0.562, + 0.478, + 0.577 + ], + "angle": 0, + "content": "4.3.2 Optimization for Long Context Training" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.594, + 0.854, + 0.651 + ], + "angle": 0, + "content": "Scaling long-context capabilities is becoming increasingly important for applications such as long document summarization and conversational AI. However, training on long sequences presents several challenges in terms of both time and memory complexity. To improve the efficiency of long-context training, we propose two key strategies, as outlined below." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.657, + 0.854, + 0.782 + ], + "angle": 0, + "content": "Sub-Sequence Partitioning for Context Parallelism Context parallelism (CP) is a crucial approach for training very long sequences that divides the input sequence into segments to reduce memory consumption [44, 33]. Yet, with causal masking, simply splitting the sequence into \\( CP \\) chunks results in a severely imbalanced workload for Ring Self-Attention (RSA) [44] (as shown in Figure 5(a)). Megatron-LM addresses this issue by splitting the sequence into \\( 2 \\times CP \\) chunks, where each rank receives chunks from both the top and bottom, thus balancing the workload within a CP group (Figure 5(b)) [7]. However, this method still results in an imbalanced workload when the attention mask is reset (Figure 5(c)).
Therefore, in training with 128k-long contexts, we propose a load-balanced partitioning strategy for CP training, where each rank is responsible for computing two chunks within each subsequence (Figure 5(d))." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.787, + 0.854, + 0.913 + ], + "angle": 0, + "content": "Fast Mask Generation and Data Reuse When scaling the training sequence of Pangu Ultra up to 128k, the generation of the attention mask or the calculation of the actual sequence length still incurs a non-negligible performance overhead. Additionally, in the training scenario with reset attention masks, each VPP stage is required to retrieve the corresponding mask or actual sequence length in every iteration, resulting in redundant computations and increased overhead. We address these problems by (1) using efficient NPU operators to compute the attention mask, instead of constructing it on the CPU, thus accelerating mask generation and eliminating the need for data transfer between the CPU and NPU, and (2) enabling cross-VPP stage mask sharing, where attention masks are generated by the first stage (VPP0) and shared across different VPP stages on the same rank, thereby avoiding redundant mask computations and memory cost." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.504, + 0.948 + ], + "angle": 0, + "content": "9" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.143, + 0.09, + 0.239, + 0.106 + ], + "angle": 0, + "content": "5 Results" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.122, + 0.858, + 0.164 + ], + "angle": 0, + "content": "In this section, we discuss the evaluation results of Pangu Ultra, including pre-training performance and post-training outcomes. In addition, we provide comprehensive ablation studies that examine the model architecture and further discuss the observations of training Pangu Ultra."
+ }, + { + "type": "title", + "bbox": [ + 0.142, + 0.181, + 0.423, + 0.197 + ], + "angle": 0, + "content": "5.1 Pre-Training Loss Curve" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.207, + 0.856, + 0.276 + ], + "angle": 0, + "content": "Figure 6 shows the training loss curve of Pangu Ultra during the entire pre-training. Each segment in the loss curve corresponds to one training stage, as described in Section 3.1.3. The loss curves demonstrate consistent descending trends across all training stages. For the second interval, although the descent rate moderated due to a constant learning rate, the performance metrics continued to show steady improvement throughout this interval." + }, + { + "type": "image", + "bbox": [ + 0.258, + 0.295, + 0.741, + 0.557 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.242, + 0.57, + 0.754, + 0.586 + ], + "angle": 0, + "content": "Figure 6: The training loss curve of Pangu Ultra during the pre-training stage." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.621, + 0.856, + 0.677 + ], + "angle": 0, + "content": "Zero loss spike As shown in Figure 6, no loss spikes occur throughout the entire pre-training process. While such spikes are common in LLM training [66], their absence here underscores the importance of our depth-scaled sandwich norm and TinyInit in ensuring stable training. The negative effect of loss spikes on model performance will be further elaborated in Section 5.4.1." + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.693, + 0.318, + 0.709 + ], + "angle": 0, + "content": "5.2 Pre-Training Stage" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.72, + 0.856, + 0.777 + ], + "angle": 0, + "content": "Benchmarks We evaluate the Pangu Ultra base model across multiple domains using open-source benchmarks, including language understanding, question answering, code generation, and math problem solving.
The evaluation mainly uses English and Chinese test sets, with some additional multilingual benchmarks for broader coverage." + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.789, + 0.855, + 0.831 + ], + "angle": 0, + "content": "- Language understanding: We employ HellaSwag [76] and Winogrande for contextual reasoning tasks, DROP [21], RACE [42], and ARC [15] series for comprehensive reading comprehension evaluation, along with PIQA [12], Natural Questions [41] and TriviaQA [37] to assess knowledge retrieval." + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.836, + 0.855, + 0.879 + ], + "angle": 0, + "content": "- Question answering: The assessment includes C-Eval [31] for Chinese knowledge, MMLU [27] and its advanced variant MMLU-Pro [70] for English domain knowledge, supplemented by BigBenchHard [65] to evaluate creative problem-solving." + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.884, + 0.855, + 0.913 + ], + "angle": 0, + "content": "- Code generation and understanding: We utilize HumanEval [13] and MBPP [10] for standard code generation tasks, and CruxEval [26] for code understanding and reasoning." + }, + { + "type": "list", + "bbox": [ + 0.142, + 0.789, + 0.855, + 0.913 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.51, + 0.948 + ], + "angle": 0, + "content": "10" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.143, + 0.092, + 0.855, + 0.135 + ], + "angle": 0, + "content": "- Mathematical Reasoning: We measure skills with CMath [71] and GSM8K [16] for fundamental arithmetic and simple problems, MATH [28] for advanced mathematical reasoning, and MGSM [61] for multilingual math problem solving."
+ }, + { + "type": "text", + "bbox": [ + 0.141, + 0.152, + 0.855, + 0.222 + ], + "angle": 0, + "content": "Baselines & Comparison Settings We compare Pangu Ultra against several strong baselines covering both dense models (Qwen2.5-72B, Llama-405B) and MoE architectures (DeepSeek-V3). For base models, the majority of our evaluations employ few-shot inputs, with a minority using zero-shot prompts. We evaluate most benchmarks with gold answers through exact matching, while employing execution-based verification for code generation tasks." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.241, + 0.855, + 0.297 + ], + "angle": 0, + "content": "Evaluation Results In Table 3, we compare the pre-trained base model of Pangu Ultra with other leading models. Overall, Pangu Ultra achieves state-of-the-art performance on most general English benchmarks and all Chinese benchmarks. While it trails DeepSeek V3 on code and math-related tasks, it performs competitively in these domains." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.303, + 0.856, + 0.374 + ], + "angle": 0, + "content": "A closer examination reveals that Pangu Ultra excels on Chinese benchmarks, surpassing both Qwen 2.5 72B and DeepSeek V3, the current best-performing Chinese model. In addition, when compared to Llama 3.1 405B, Pangu Ultra achieves better scores on most of the challenging benchmarks, while utilizing only about \\(29\\%\\) of the training FLOPs required by Llama 405B. These results suggest the effectiveness of our model architecture and the high quality of our training data." + }, + { + "type": "table_caption", + "bbox": [ + 0.141, + 0.398, + 0.855, + 0.44 + ], + "angle": 0, + "content": "Table 3: Comparison of Pangu Ultra and other representative models across a diverse set of benchmarks for evaluating language, coding and mathematical skills. Bold values represent the best results in each row, and underlined values indicate that Pangu Ultra is the best among dense models."
+ }, + { + "type": "table", + "bbox": [ + 0.209, + 0.442, + 0.785, + 0.899 + ], + "angle": 0, + "content": "
Benchmark (Metric)# ShotsQwen2.5 72B BaseLlama-3.1 405B BaseDeepSeek V3 BasePangu Ultra Base
Architecture-DenseDenseMoEDense
# Activated Params-72B405B37B135B
# Total Params-72B405B671B135B
EnglishBBH (EM)3-shot79.882.987.579.1
MMLU (EM)5-shot85.084.487.185.4
MMLU-Pro (EM)5-shot58.352.864.463.1
DROP (F1)3-shot80.686.089.061.0
ARC-Easy (EM)25-shot98.498.498.9100.0
ARC-Challenge (EM)25-shot94.595.395.397.0
HellaSwag (EM)10-shot84.889.288.999.0
PIQA (EM)0-shot82.685.984.798.0
WinoGrande (EM)5-shot82.385.284.991.0
RACE-Middle (EM)5-shot68.174.267.197.0
RACE-High (EM)5-shot50.356.851.397.0
TriviaQA (EM)5-shot71.982.782.990.5
NaturalQuestions (EM)5-shot33.241.540.052.7
AGIEval (EM)0-shot75.860.679.680.4
CodeHumanEval (Pass@1)0-shot53.054.965.281.1
MBPP (Pass@1)3-shot72.668.475.472
CRUXEval-I (EM)2-shot59.158.567.361.8
CRUXEval-O (EM)2-shot59.959.969.861.5
MathGSM8K (EM)8-shot88.383.589.389.3
MATH (EM)4-shot54.449.061.662.5
MGSM (EM)8-shot76.269.979.875.1
CMath (EM)3-shot84.577.390.778.2
ChineseCLUEWSC (EM)5-shot82.583.082.795.0
C-Eval (EM)5-shot89.272.590.190.3
CMMLU (EM)5-shot89.573.788.891.7
CMRC (EM)1-shot75.876.076.386.0
C3 (EM)0-shot76.779.778.699.0
CCPM (EM)0-shot88.578.692.093.0
" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.508, + 0.948 + ], + "angle": 0, + "content": "11" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.143, + 0.092, + 0.465, + 0.108 + ], + "angle": 0, + "content": "5.3 Post-Training and Reasoning Capability" + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.117, + 0.855, + 0.148 + ], + "angle": 0, + "content": "Benchmarks We conduct a comprehensive evaluation of Pangu Ultra's capabilities on reasoning and non-reasoning tasks:" + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.158, + 0.857, + 0.202 + ], + "angle": 0, + "content": "- Sophisticated reasoning tasks encompass three specialized subcategories: mathematical competence measured by AIME 2024 [49] and MATH-500, the coding competition benchmark LiveCodeBench [34], and the scientific reasoning task GPQA Diamond [59];" + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.205, + 0.855, + 0.234 + ], + "angle": 0, + "content": "- General language comprehension and reasoning capabilities, represented by MMLU-Pro [24], Arena Hard [45]." + }, + { + "type": "list", + "bbox": [ + 0.142, + 0.158, + 0.857, + 0.234 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.248, + 0.856, + 0.305 + ], + "angle": 0, + "content": "Baselines & Comparison Settings We compare Pangu Ultra against strong baselines including GPT-4o-0513, reasoning models DeepSeek-R1 and Hunyuan-T1, and large dense models Qwen2.5-72B-Instruct and Mistral-Large 2. We use Pass@1 averaged over multiple independent runs as the evaluation metric to assess performance." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.319, + 0.854, + 0.377 + ], + "angle": 0, + "content": "Evaluation Results In Table 4, we compare the evaluation results of Pangu Ultra with other baseline models.
Pangu Ultra achieves state-of-the-art performance on the reasoning benchmarks including AIME 2024, MATH-500, GPQA and LiveCodeBench, while maintaining strong capabilities in general language comprehension tasks." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.381, + 0.856, + 0.44 + ], + "angle": 0, + "content": "When compared to dense LLMs (Qwen and Mistral-Large 2), Pangu Ultra shows particularly significant advantages in reasoning tasks. This superior performance stems from the 0.8T reasoning-focused data used in pre-training (Section 3.1.3). The reasoning-enhanced base model substantially benefits subsequent post-training phases." + }, + { + "type": "table_caption", + "bbox": [ + 0.142, + 0.459, + 0.854, + 0.488 + ], + "angle": 0, + "content": "Table 4: Comparison of Pangu Ultra models and other representative models across benchmarks. \\(\\dagger\\) indicates results from Artificial Analysis [1]." + }, + { + "type": "table", + "bbox": [ + 0.146, + 0.489, + 0.851, + 0.626 + ], + "angle": 0, + "content": "
ModelAIME 2024MATH-500GPQA DiamondLiveCode BenchArenaHardMMLU-pro
GPT-4o-05139.374.649.932.980.472.6
Qwen2.5-72B16.083.14927.681.272.0
Mistral-Large 2†11.073.648.629.3-69.7
Hunyuan-T179.896.269.364.991.987.2
DeepSeek-R179.897.371.565.992.384.0
Pangu Ultra80.897.474.266.591.584.4
" + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.651, + 0.303, + 0.667 + ], + "angle": 0, + "content": "5.4 Ablation Studies" + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.677, + 0.856, + 0.707 + ], + "angle": 0, + "content": "This section presents additional ablation studies of the model architecture and analyzes key training behaviors to facilitate a deeper understanding and discussion of dense LLM training." + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.72, + 0.405, + 0.736 + ], + "angle": 0, + "content": "5.4.1 Depth-scaled Sandwich-norm" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.744, + 0.854, + 0.789 + ], + "angle": 0, + "content": "We conducted experiments to validate the effectiveness of depth-scaled sandwich norm compared to pre-norm architectures. Using a dense Transformer model with 13 billion parameters trained on 300 billion tokens with identical hyperparameters for both configurations, we observe significant improvements." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.792, + 0.856, + 0.877 + ], + "angle": 0, + "content": "Figure 7 shows the depth-scaled sandwich-norm architecture stabilizes gradient norms and effectively eliminates loss spikes, leading to faster training convergence. We evaluated performance on two composite benchmarks: EN basic, consisting of multiple English benchmarks, and ZH basic, representing Chinese benchmarks. Additional evaluation using LAMBADA [54] (English) and WPLC [23] (Chinese) next-token prediction tasks confirmed the advantage of applying depth-scaled sandwich-norm. The results clearly suggest that preventing loss spikes during pre-training is crucial for optimal model performance." + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.881, + 0.856, + 0.914 + ], + "angle": 0, + "content": "To further ablate the effect of our depth-scaled factor in RMSNorm initialization, we compare with the plain sandwich-norm that does not have the \\(\\sqrt{L}\\) scaling factor in Eq. (1). 
Here, we use a proxy model containing 1.6" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.511, + 0.948 + ], + "angle": 0, + "content": "12" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.152, + 0.103, + 0.481, + 0.266 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.291, + 0.273, + 0.342, + 0.286 + ], + "angle": 0, + "content": "(a) Loss" + }, + { + "type": "image", + "bbox": [ + 0.51, + 0.103, + 0.847, + 0.267 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.624, + 0.273, + 0.734, + 0.286 + ], + "angle": 0, + "content": "(b) Gradient norm" + }, + { + "type": "image_caption", + "bbox": [ + 0.141, + 0.294, + 0.858, + 0.337 + ], + "angle": 0, + "content": "Figure 7: Pre-training loss and gradient norm for a 13B model using Pre-LN and Depth-Scaled Sandwich-Norm (DSSN). The curves with Pre-LN has significant spikes, which harm the trained model, while the curves of DSSN are much smoother." + }, + { + "type": "table_caption", + "bbox": [ + 0.211, + 0.359, + 0.783, + 0.373 + ], + "angle": 0, + "content": "Table 5: Performance comparison between Pre-LN and Depth-scaled Sandwich-Norm." + }, + { + "type": "table", + "bbox": [ + 0.183, + 0.374, + 0.816, + 0.429 + ], + "angle": 0, + "content": "
ModelTokens (B)EN basicZH basicLAMBADAWPLC
Pre-LN3000.420.520.6750.194
Depth-scaled sandwich-norm3000.450.540.6930.224
" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.452, + 0.856, + 0.508 + ], + "angle": 0, + "content": "billion parameters and 94 layers, which has the same depth with Pangu Ultra. By using this proxy model, we examine the effectiveness of sandwich-norm on training very deep Transformers. In Figure 8, we can observe some loss spikes with the plain sandwich-norm, but our depth-scaled sandwich-norm can be trained smoothly, and attains lower loss." + }, + { + "type": "image", + "bbox": [ + 0.292, + 0.523, + 0.71, + 0.735 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.141, + 0.745, + 0.854, + 0.777 + ], + "angle": 0, + "content": "Figure 8: Pre-training loss for a 94-layer 1.6B model using original and depth-scaled sandwich-norm. The original sandwich-norm still suffers loss spikes during training." + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.8, + 0.323, + 0.815 + ], + "angle": 0, + "content": "5.4.2 Tiny Initialization" + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.824, + 0.854, + 0.913 + ], + "angle": 0, + "content": "We conduct experiments to study the effectiveness of TinyInit proposed in Section 2.2. After being trained on 102 billion tokens, Pangu Ultra initialized with TinyInit strategy, with standard deviation \\(\\sqrt{\\frac{1}{2dL}}\\), performs significantly better than the baseline model that utilizes traditional initialization, whose standard deviations are \\(\\sqrt{\\frac{2}{5d}}\\) and \\(\\sqrt{\\frac{2}{5dL}}\\). The results are shown in Table 6. BIG-bench (aug) is a test set developed internally through data augmentation of the original BIG-bench, designed to mitigate the impact of test set leakage." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "13" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.253, + 0.098, + 0.744, + 0.112 + ], + "angle": 0, + "content": "Table 6: Performance comparison of traditional initialization and TinyInit." + }, + { + "type": "table", + "bbox": [ + 0.144, + 0.113, + 0.87, + 0.168 + ], + "angle": 0, + "content": "
ModelTokens (B)EN basicZH basicLAMBADAWPLCC-EvalMMLUBIG-bench (aug)
Baseline1020.4440.5380.6940.2290.4760.4730.357
TinyInit1020.4560.5370.7270.2570.5240.5020.384
" + }, + { + "type": "title", + "bbox": [ + 0.142, + 0.191, + 0.411, + 0.207 + ], + "angle": 0, + "content": "5.4.3 Layer Statistics of Pangu Ultra" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.215, + 0.856, + 0.314 + ], + "angle": 0, + "content": "Stable activation scale. Figure 9 presents the activation patterns of attention and FFN modules across Transformer layers, showing the mean, standard deviation, and top-1 activation values. The activation distributions demonstrate stability, with standard deviations maintaining consistent scales throughout pretraining while preserving a clear layer-wise pattern. Our analysis reveals the presence of \"super activations\", whose magnitude reaches \\(10^{3}\\) in shallow layers, a phenomenon consistent with findings in the Llama model [75]. Notably, Figure 9 illustrates that these top-1 activation values progressively decrease with layer depth, indicating that their influence on the final output becomes relatively small." + }, + { + "type": "image", + "bbox": [ + 0.143, + 0.324, + 0.328, + 0.458 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.329, + 0.324, + 0.501, + 0.458 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.501, + 0.324, + 0.678, + 0.458 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.678, + 0.324, + 0.853, + 0.458 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.142, + 0.458, + 0.328, + 0.592 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.19, + 0.595, + 0.311, + 0.608 + ], + "angle": 0, + "content": "(a) Down projection" + }, + { + "type": "image", + "bbox": [ + 0.328, + 0.458, + 0.502, + 0.591 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.35, + 0.595, + 0.487, + 0.608 + ], + "angle": 0, + "content": "(b) Up & Gate projection" + }, + { + "type": "image", + "bbox": [ + 0.503, + 
0.458, + 0.678, + 0.591 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.515, + 0.595, + 0.681, + 0.608 + ], + "angle": 0, + "content": "(c) Attention output projection" + }, + { + "type": "image", + "bbox": [ + 0.68, + 0.458, + 0.853, + 0.591 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.693, + 0.595, + 0.853, + 0.608 + ], + "angle": 0, + "content": "(d) Attention QKV projection" + }, + { + "type": "image_caption", + "bbox": [ + 0.141, + 0.615, + 0.854, + 0.685 + ], + "angle": 0, + "content": "Figure 9: Activation of attention and FFN modules. Mean, standard deviation, and top-1 value of activations are included. Each line represents different training tokens from 1T, 2T, 4T to 7T. The \"Std\" row shows the stable activation scale across layers. The \"Top 1\" row shows the existence of the \"super activations\" in down projection and attention output projection, with magnitudes falling within a reasonable range and comparable to those observed in the LLaMA model [75]." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.708, + 0.854, + 0.807 + ], + "angle": 0, + "content": "Layer-wise patterns of depth-scaled sandwich norm. Figure 10 presents the distribution of scaling parameters \\(\\gamma\\) across all sandwich-norm layers, revealing several key observations: All four LayerNorm \\(\\gamma\\) parameters exhibit decreasing mean/standard deviation during training, consistent with weight decay effects. Post-norm \\(\\gamma\\) values show layer-dependent patterns: The standard deviation of post-norm \\(\\gamma\\) increases substantially with layer depth. Pre-norm \\(\\gamma\\) maintains relatively constant standard deviation across layers. This pattern suggests an intriguing model behavior: shallow layers rely primarily on residual connections, while deeper layers progressively emphasize transformer layer outputs as the scaling factor \\(\\gamma\\) grows in magnitude." 
+ }, + { + "type": "title", + "bbox": [ + 0.142, + 0.826, + 0.271, + 0.84 + ], + "angle": 0, + "content": "6 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.856, + 0.856, + 0.913 + ], + "angle": 0, + "content": "We present Pangu Ultra, a dense language foundation model with 135 billion parameters trained on Ascend NPUs. To address challenges in training large-scale deep models, we propose depth-scaled sandwich-norm, enabling Pangu Ultra to achieve remarkable training stability without significant loss spikes. After pre-training on 13.2 trillion tokens and long-context extension on 8,192 Ascend NPUs, our model further" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "14" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.141, + 0.087, + 0.328, + 0.222 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.329, + 0.087, + 0.505, + 0.222 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.505, + 0.087, + 0.679, + 0.222 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.679, + 0.087, + 0.857, + 0.222 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.144, + 0.223, + 0.327, + 0.356 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.16, + 0.359, + 0.328, + 0.372 + ], + "angle": 0, + "content": "(a) Post-norm after attention" + }, + { + "type": "image", + "bbox": [ + 0.329, + 0.223, + 0.505, + 0.356 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.358, + 0.359, + 0.491, + 0.372 + ], + "angle": 0, + "content": "(b) Post-norm after FFN" + }, + { + "type": "image", + "bbox": [ + 0.505, + 0.222, + 0.681, + 0.356 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.516, + 0.359, + 0.681, + 0.372 + ], + "angle": 0, + "content": "(c) Post-norm before attention" + }, + 
{ + "type": "image", + "bbox": [ + 0.676, + 0.223, + 0.857, + 0.356 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.707, + 0.359, + 0.85, + 0.372 + ], + "angle": 0, + "content": "(d) Post-norm before FFN" + }, + { + "type": "image_caption", + "bbox": [ + 0.141, + 0.379, + 0.855, + 0.437 + ], + "angle": 0, + "content": "Figure 10: Distribution of sandwich-norm's \\(\\gamma\\) parameter. Mean and standard deviation are included. Each line represents different training tokens from 1T, 2T, 4T to 7T. There is a clear layer-wise pattern of the two post-norms: the mean and std value of \\(\\gamma\\) increase with depth. Larger post-norm \\(\\gamma\\) indicates that deeper layers rely more on transformer outputs than on residual connections." + }, + { + "type": "text", + "bbox": [ + 0.141, + 0.463, + 0.854, + 0.547 + ], + "angle": 0, + "content": "enhances its reasoning capabilities through Supervised Fine-Tuning and Reinforcement Learning. Extensive experiments show that Pangu Ultra not only surpasses state-of-the-art dense LLMs like Llama 405B and Mistral Large 2 but also delivers competitive performance against larger sparse models such as DeepSeek-R1. These results highlight the efficacy of our architectural and systemic optimizations, paving the way for future advancements in scalable and efficient LLM training. In addition, our experience demonstrates that the Ascend NPUs are capable of training dense models with hundreds of billions of parameters." + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.567, + 0.24, + 0.582 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.152, + 0.591, + 0.57, + 0.608 + ], + "angle": 0, + "content": "[1] Artificial analysis. https://artificialanalysis.ai/." + }, + { + "type": "ref_text", + "bbox": [ + 0.152, + 0.612, + 0.856, + 0.64 + ], + "angle": 0, + "content": "[2] Ascend mc2. 
https://gitee.com/qingfenxiaochong/MindSpeed/blob/master/docs/features/mc2.md." + }, + { + "type": "ref_text", + "bbox": [ + 0.153, + 0.644, + 0.774, + 0.661 + ], + "angle": 0, + "content": "[3] Ascend mc2. https://www.hiascend.com/developer/techArticles/20240613-1." + }, + { + "type": "ref_text", + "bbox": [ + 0.153, + 0.664, + 0.671, + 0.679 + ], + "angle": 0, + "content": "[4] Flash attention. https://github.com/Dao-AILab/flash-attention." + }, + { + "type": "ref_text", + "bbox": [ + 0.153, + 0.683, + 0.856, + 0.711 + ], + "angle": 0, + "content": "[5] Huawei atlas 800t a2. https://e.huawei.com/cn/products/computing/ascend/atlas-800t-a2." + }, + { + "type": "ref_text", + "bbox": [ + 0.153, + 0.717, + 0.855, + 0.758 + ], + "angle": 0, + "content": "[6] Huawei atlas 800t a2 technical specifications. https://support.huawei.com/enterprise/en/doc/EDOC1100349804/2bf2c017/technical-specifications?idPath=23710424|251366513|22892968|252309113|254184887." + }, + { + "type": "ref_text", + "bbox": [ + 0.153, + 0.764, + 0.6, + 0.78 + ], + "angle": 0, + "content": "[7] Megatron-lm. https://github.com/NVIDIA/Megatron-LM." + }, + { + "type": "ref_text", + "bbox": [ + 0.153, + 0.784, + 0.56, + 0.799 + ], + "angle": 0, + "content": "[8] Mindspeed. https://gitee.com/ascend/MindSpeed." + }, + { + "type": "ref_text", + "bbox": [ + 0.153, + 0.804, + 0.855, + 0.833 + ], + "angle": 0, + "content": "[9] Npu fusion attention. https://www.hiascend.com/document/detail/zh/Pytorch/60RC1/apiref/apilist/ptaoplist_000139.html." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.837, + 0.855, + 0.88 + ], + "angle": 0, + "content": "[10] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie J. Cai, Michael Terry, Quoc V. Le, and Charles Sutton. Program synthesis with large language models. ArXiv, abs/2108.07732, 2021." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.884, + 0.855, + 0.913 + ], + "angle": 0, + "content": "[11] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023." + }, + { + "type": "list", + "bbox": [ + 0.144, + 0.591, + 0.856, + 0.913 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "15" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.143, + 0.091, + 0.855, + 0.121 + ], + "angle": 0, + "content": "[12] Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. Piqa: Reasoning about physical commonsense in natural language. In AAAI Conference on Artificial Intelligence, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.143, + 0.124, + 0.857, + 0.263 + ], + "angle": 0, + "content": "[13] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, Jared Kaplan, Harrison Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mo Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, David W. Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William H. Guss, Alex Nichol, Igor Babuschkin, Suchir Balaji, Shantanu Jain, Andrew Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew M. Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. ArXiv, abs/2107.03374, 2021." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.143, + 0.267, + 0.856, + 0.434 + ], + "angle": 0, + "content": "[14] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.437, + 0.856, + 0.48 + ], + "angle": 0, + "content": "[15] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. ArXiv, abs/1803.05457, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.145, + 0.483, + 0.856, + 0.526 + ], + "angle": 0, + "content": "[16] Karl Cobbe, Vineet Kosaraju, Mo Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. ArXiv, abs/2110.14168, 2021." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.145, + 0.529, + 0.856, + 0.559 + ], + "angle": 0, + "content": "[17] Tri Dao. Flashattention-2: Faster attention with better parallelism and work partitioning. In The Twelfth International Conference on Learning Representations, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.145, + 0.562, + 0.856, + 0.618 + ], + "angle": 0, + "content": "[18] Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.143, + 0.622, + 0.856, + 0.913 + ], + "angle": 0, + "content": "[19] DeepSeek-AI, Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Daya Guo, Dejian Yang, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Haowei Zhang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Li, Hui Qu, J. L. Cai, Jian Liang, Jianzhong Guo, Jiaqi Ni, Jiashi Li, Jiawei Wang, Jin Chen, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, Junxiao Song, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Lei Xu, Leyi Xia, Liang Zhao, Litong Wang, Liyue Zhang, Meng Li, Miaojun Wang, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Mingming Li, Ning Tian, Panpan Huang, Peiyi Wang, Peng Zhang, Qiancheng Wang, Qihao Zhu, Qinyu Chen, Qiushi Du, R. J. Chen, R. L. Jin, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, Runxin Xu, Ruoyu Zhang, Ruyi Chen, S. S. 
Li, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shaoqing Wu, Shengfeng Ye, Shengfeng Ye, Shirong Ma, Shiyu Wang, Shuang Zhou, Shuiping Yu, Shunfeng Zhou, Shuting Pan, T. Wang, Tao Yun, Tian Pei, Tianyu Sun, W. L. Xiao, Wangding Zeng, Wanjia Zhao, Wei An, Wen Liu, Wenfeng Liang, Wenjun Gao, Wenqin Yu, Wentao Zhang, X. Q. Li, Xiangyue Jin, Xianzu Wang, Xiao Bi, Xiaodong Liu, Xiaohan Wang, Xiaojin Shen, Xiaokang Chen, Xiaokang Zhang, Xiaosha Chen, Xiaotao Nie, Xiaowen Sun, Xiaoxiang Wang, Xin Cheng, Xin Liu, Xin Xie, Xingchao Liu, Xingkai Yu, Xinnan Song, Xinxia Shan, Xinyi Zhou, Xinyu Yang, Xinyuan Li, Xuecheng Su, Xuheng Lin, Y. K. Li, Y. Q. Wang, Y. X. Wei, Y. X. Zhu, Yang Zhang, Yanhong Xu, Yanhong Xu, Yanping Huang, Yao Li, Yao Zhao, Yaofeng Sun, Yaohui Li, Yaohui Wang, Yi Yu, Yi Zheng, Yichao Zhang, Yifan Shi, Yiliang Xiong, Ying He, Ying Tang, Yishi Piao, Yisong Wang, Yixuan Tan, Yiyang Ma, Yiyuan Liu, Yongqiang Guo, Yu Wu, Yuan Ou, Yuchen Zhu, Yuduan Wang, Yue Gong, Yuheng" + }, + { + "type": "list", + "bbox": [ + 0.143, + 0.091, + 0.857, + 0.913 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "16" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.178, + 0.091, + 0.856, + 0.163 + ], + "angle": 0, + "content": "Zou, Yujia He, Yukun Zha, Yunfan Xiong, Yunxian Ma, Yuting Yan, Yuxiang Luo, Yuxiang You, Yuxuan Liu, Yuyang Zhou, Z. F. Wu, Z. Z. Ren, Zehui Ren, Zhangli Sha, Zhe Fu, Zhean Xu, Zhen Huang, Zhen Zhang, Zhenda Xie, Zhengyan Zhang, Zhewen Hao, Zhibin Gou, Zhicheng Ma, Zhigang Yan, Zhihong Shao, Zhipeng Xu, Zhiyu Wu, Zhongyu Zhang, Zhuoshu Li, Zihui Gu, Zijia Zhu, Zijun Liu, Zilin Li, Ziwei Xie, Ziyang Song, Ziyi Gao, and Zizheng Pan. Deepseek-v3 technical report, 2025." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.145, + 0.167, + 0.857, + 0.223 + ], + "angle": 0, + "content": "[20] Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, and Jie Tang. Cogview: Mastering text-to-image generation via transformers. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 19822-19835. Curran Associates, Inc., 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.227, + 0.856, + 0.271 + ], + "angle": 0, + "content": "[21] Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In North American Chapter of the Association for Computational Linguistics, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.275, + 0.854, + 0.317 + ], + "angle": 0, + "content": "[22] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.323, + 0.854, + 0.365 + ], + "angle": 0, + "content": "[23] Huibin Ge, Chenxi Sun, Deyi Xiong, and Qun Liu. Chinese wplc: A chinese dataset for evaluating pretrained language models on word prediction given long-range context. In Conference on Empirical Methods in Natural Language Processing, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.37, + 0.856, + 0.412 + ], + "angle": 0, + "content": "[24] Aryo Pradipta Gema, Joshua Ong Jun Leang, Giwon Hong, Alessio Devoto, Alberto Carlo Maria Mancino, Rohit Saxena, Xuanli He, Yu Zhao, Xiaotang Du, Mohammad Reza Ghasemi Madani, et al. Are we done with mmlu? arXiv preprint arXiv:2406.04127, 2024." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.417, + 0.856, + 0.459 + ], + "angle": 0, + "content": "[25] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.465, + 0.856, + 0.493 + ], + "angle": 0, + "content": "[26] Alex Gu, Baptiste Rozière, Hugh Leather, Armando Solar-Lezama, Gabriel Synnaeve, and Sida Wang. Cruxeval: A benchmark for code reasoning, understanding and execution. ArXiv, abs/2401.03065, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.498, + 0.856, + 0.527 + ], + "angle": 0, + "content": "[27] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Xiaodong Song, and Jacob Steinhardt. Measuring massive multitask language understanding. ArXiv, abs/2009.03300, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.531, + 0.856, + 0.573 + ], + "angle": 0, + "content": "[28] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Xiaodong Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. ArXiv, abs/2103.03874, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.145, + 0.578, + 0.856, + 0.622 + ], + "angle": 0, + "content": "[29] Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V Le, Yonghui Wu, et al. Gpipe: Efficient training of giant neural networks using pipeline parallelism. Advances in neural information processing systems, 32, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.626, + 0.856, + 0.696 + ], + "angle": 0, + "content": "[30] Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Xu Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, Yonghui Wu, and Zhifeng Chen. 
Gpipe: Efficient training of giant neural networks using pipeline parallelism. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 103-112, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.701, + 0.856, + 0.756 + ], + "angle": 0, + "content": "[31] Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Fanchao Qi, Yao Fu, Maosong Sun, and Junxian He. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. ArXiv, abs/2305.08322, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.761, + 0.856, + 0.818 + ], + "angle": 0, + "content": "[32] Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Keming Lu, Kai Dang, Yang Fan, Yichang Zhang, An Yang, Rui Men, Fei Huang, Bo Zheng, Yibo Miao, Shanghaoran Quan, Yunlong Feng, Xingzhang Ren, Xuancheng Ren, Jingren Zhou, and Junyang Lin. Qwen2.5-coder technical report, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.822, + 0.856, + 0.865 + ], + "angle": 0, + "content": "[33] Sam Ade Jacobs, Masahiro Tanaka, Chengming Zhang, Minjia Zhang, Shuaiwen Leon Song, Samyam Rajbhandari, and Yuxiong He. Deepspeed ulysses: System optimizations for enabling training of extreme long sequence transformer models, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.87, + 0.856, + 0.913 + ], + "angle": 0, + "content": "[34] Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Armando Solar-Lezama, Koushik Sen, and Ion Stoica. Livecodebench: Holistic and contamination free evaluation of large language models for code. arXiv preprint arXiv:2403.07974, 2024." 
+ }, + { + "type": "list", + "bbox": [ + 0.144, + 0.091, + 0.857, + 0.913 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "17" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.091, + 0.856, + 0.163 + ], + "angle": 0, + "content": "[35] Ziheng Jiang, Haibin Lin, Yinmin Zhong, Qi Huang, Yangrui Chen, Zhi Zhang, Yanghua Peng, Xiang Li, Cong Xie, Shibiao Nong, Yulu Jia, Sun He, Hongmin Chen, Zhihao Bai, Qi Hou, Shipeng Yan, Ding Zhou, Yiyao Sheng, Zhuo Jiang, Haohan Xu, Haoran Wei, Zhang Zhang, Pengfei Nie, Leqi Zou, Sida Zhao, Liang Xiang, Zherui Liu, Zhe Li, Xiaoying Jia, Jianxi Ye, Xin Jin, and Xin Liu. Megascale: Scaling large language model training to more than 10,000 gpus, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.145, + 0.166, + 0.855, + 0.194 + ], + "angle": 0, + "content": "[36] Cameron R Jones and Benjamin K Bergen. Large language models pass the Turing test. arXiv preprint arXiv:2503.23674, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.145, + 0.199, + 0.855, + 0.228 + ], + "angle": 0, + "content": "[37] Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. ArXiv, abs/1705.03551, 2017." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.232, + 0.855, + 0.275 + ], + "angle": 0, + "content": "[38] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.145, + 0.279, + 0.855, + 0.308 + ], + "angle": 0, + "content": "[39] Vijay Korthikanti, Jared Casper, Sangkug Lym, Lawrence McAfee, Michael Andersch, Mohammad Shoeybi, and Bryan Catanzaro. Reducing activation recomputation in large transformer models, 2022." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.311, + 0.855, + 0.368 + ], + "angle": 0, + "content": "[40] Taku Kudo and John Richardson. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Eduardo Blanco and Wei Lu, editors, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium, November 2018. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.372, + 0.856, + 0.442 + ], + "angle": 0, + "content": "[41] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur P. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc V. Le, and Slav Petrov. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.145, + 0.446, + 0.855, + 0.475 + ], + "angle": 0, + "content": "[42] Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard H. Hovy. Race: Large-scale reading comprehension dataset from examinations. ArXiv, abs/1704.04683, 2017." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.478, + 0.855, + 0.522 + ], + "angle": 0, + "content": "[43] Shen Li, Yanli Zhao, Rohan Varma, Omkar Salpekar, Pieter Noordhuis, Teng Li, Adam Paszke, Jeff Smith, Brian Vaughan, Pritam Damania, and Soumith Chintala. Pytorch distributed: Experiences on accelerating data parallel training. CoRR, abs/2006.15704, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.525, + 0.855, + 0.582 + ], + "angle": 0, + "content": "[44] Shenggui Li, Fuzhao Xue, Chaitanya Baranwal, Yongbin Li, and Yang You. Sequence parallelism: Long sequence training from system perspective. 
In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2391-2404, Toronto, Canada, July 2023. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.585, + 0.855, + 0.614 + ], + "angle": 0, + "content": "[45] Tianle Li, Wei-Lin Chiang, Evan Frick, Lisa Dunlap, Banghua Zhu, Joseph E. Gonzalez, and Ion Stoica. From live data to high-quality benchmarks: The arena-hard pipeline, April 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.618, + 0.855, + 0.661 + ], + "angle": 0, + "content": "[46] Aixin Liu, Bei Feng, Bin Wang, Bingxuan Wang, Bo Liu, Chenggang Zhao, Chengqi Dengr, Chong Ruan, Damai Dai, Daya Guo, et al. Deepseek-v2: A strong, economical, and efficient mixture-of-experts language model. arXiv preprint arXiv:2405.04434, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.665, + 0.855, + 0.707 + ], + "angle": 0, + "content": "[47] Liyuan Liu, Xiaodong Liu, Jianfeng Gao, Weizhu Chen, and Jiawei Han. Understanding the difficulty of training transformers. In EMNLP (1), pages 5747-5763. Association for Computational Linguistics, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.711, + 0.855, + 0.74 + ], + "angle": 0, + "content": "[48] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.744, + 0.855, + 0.773 + ], + "angle": 0, + "content": "[49] MAA. Codeforces. American Invitational Mathematics Examination - AIME 2024, 2024. https://maa.org/math-competitions/american-invitational-mathematics-examination-aime." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.777, + 0.855, + 0.805 + ], + "angle": 0, + "content": "[50] William Merrill and Ashish Sabharwal. A little depth goes a long way: The expressive power of log-depth transformers, 2025." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.81, + 0.855, + 0.867 + ], + "angle": 0, + "content": "[51] Deepak Narayanan, Aaron Harlap, Amar Phanishayee, Vivek Seshadri, Nikhil R. Devanur, Gregory R. Ganger, Phillip B. Gibbons, and Matei Zaharia. Pipedream: generalized pipeline parallelism for DNN training. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, SOSP 2019, Huntsville, ON, Canada, October 27-30, 2019, pages 1-15. ACM, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.87, + 0.855, + 0.913 + ], + "angle": 0, + "content": "[52] Deepak Narayanan, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Korthikanti, Dmitri Vainbrand, Prethvi Kashinkunti, Julie Bernauer, Bryan Catanzaro, Amar Phanishayee, and Matei Zaharia. Efficient large-scale language model training on gpu clusters using megatron-lm. In" + }, + { + "type": "list", + "bbox": [ + 0.144, + 0.091, + 0.856, + 0.913 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "18" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.178, + 0.092, + 0.855, + 0.121 + ], + "angle": 0, + "content": "Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC '21, New York, NY, USA, 2021. Association for Computing Machinery." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.124, + 0.857, + 0.154 + ], + "angle": 0, + "content": "[53] Toan Q Nguyen and Julian Salazar. Transformers without tears: Improving the normalization of self-attention. arXiv preprint arXiv:1910.05895, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.145, + 0.158, + 0.854, + 0.2 + ], + "angle": 0, + "content": "[54] Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and R. Fernández. 
The lambada dataset: Word prediction requiring a broad discourse context. ArXiv, abs/1606.06031, 2016." + }, + { + "type": "ref_text", + "bbox": [ + 0.145, + 0.205, + 0.854, + 0.234 + ], + "angle": 0, + "content": "[55] Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. Yarn: Efficient context window extension of large language models, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.146, + 0.238, + 0.821, + 0.253 + ], + "angle": 0, + "content": "[56] Penghui Qi, Xinyi Wan, Guangxing Huang, and Min Lin. Zero bubble pipeline parallelism, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.146, + 0.257, + 0.854, + 0.285 + ], + "angle": 0, + "content": "[57] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.145, + 0.29, + 0.855, + 0.346 + ], + "angle": 0, + "content": "[58] Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: memory optimizations toward training trillion parameter models. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2020, Virtual Event / Atlanta, Georgia, USA, November 9-19, 2020, page 20. IEEE/ACM, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.146, + 0.35, + 0.855, + 0.393 + ], + "angle": 0, + "content": "[59] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark. In First Conference on Language Modeling, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.146, + 0.397, + 0.775, + 0.412 + ], + "angle": 0, + "content": "[60] Noam Shazeer. Glu variants improve transformer. arXiv preprint arXiv:2002.05202, 2020." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.146, + 0.416, + 0.854, + 0.459 + ], + "angle": 0, + "content": "[61] Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, and Jason Wei. Language models are multilingual chain-of-thought reasoners. ArXiv, abs/2210.03057, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.146, + 0.463, + 0.854, + 0.506 + ], + "angle": 0, + "content": "[62] Yusuxke Shibata, Takuya Kida, Shuichi Fukamachi, Masayuki Takeda, Ayumi Shinohara, Takeshi Shinohara, and Setsuo Arikawa. Byte pair encoding: A text compression scheme that accelerates pattern matching. 1999." + }, + { + "type": "ref_text", + "bbox": [ + 0.146, + 0.51, + 0.855, + 0.539 + ], + "angle": 0, + "content": "[63] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-lm: Training multi-billion parameter language models using model parallelism, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.146, + 0.542, + 0.854, + 0.572 + ], + "angle": 0, + "content": "[64] Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.146, + 0.576, + 0.855, + 0.632 + ], + "angle": 0, + "content": "[65] Mirac Suzgun, Nathan Scales, Nathanael Scharli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, and Jason Wei. Challenging big-bench tasks and whether chain-of-thought can solve them. In Annual Meeting of the Association for Computational Linguistics, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.146, + 0.636, + 0.854, + 0.665 + ], + "angle": 0, + "content": "[66] Sho Takase, Shun Kiyono, Sosuke Kobayashi, and Jun Suzuki. Spike no more: Stabilizing the pre-training of large language models, 2024." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.146, + 0.67, + 0.855, + 0.712 + ], + "angle": 0, + "content": "[67] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Riviere, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on gemini research and technology. arXiv preprint arXiv:2403.08295, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.146, + 0.716, + 0.854, + 0.758 + ], + "angle": 0, + "content": "[68] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017." + }, + { + "type": "ref_text", + "bbox": [ + 0.146, + 0.763, + 0.855, + 0.819 + ], + "angle": 0, + "content": "[69] Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao. Learning deep transformer models for machine translation. In Anna Korhonen, David Traum, and Lluis Márquez, editors, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1810-1822, Florence, Italy, July 2019. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.146, + 0.823, + 0.855, + 0.879 + ], + "angle": 0, + "content": "[70] Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, Tianle Li, Max W.F. Ku, Kai Wang, Alex Zhuang, Rongqi \"Richard\" Fan, Xiang Yue, and Wenhu Chen. Mmlu-pro: A more robust and challenging multi-task language understanding benchmark. *ArXiv*, abs/2406.01574, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.146, + 0.884, + 0.854, + 0.913 + ], + "angle": 0, + "content": "[71] Tianwen Wei, Jian Luan, W. Liu, Shuang Dong, and Bin Quan Wang. Cmath: Can your language model pass chinese elementary school math test? ArXiv, abs/2306.16636, 2023." 
+ }, + { + "type": "list", + "bbox": [ + 0.144, + 0.092, + 0.857, + 0.913 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "19" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.091, + 0.855, + 0.135 + ], + "angle": 0, + "content": "[72] An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al. Qwen2. 5-math technical report: Toward mathematical expert model via self-improvement. arXiv preprint arXiv:2409.12122, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.137, + 0.855, + 0.168 + ], + "angle": 0, + "content": "[73] Tian Ye, Zicheng Xu, Yuanzhi Li, and Zeyuan Allen-Zhu. Physics of language models: Part 2.1, grade-school math and the hidden reasoning process, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.17, + 0.855, + 0.213 + ], + "angle": 0, + "content": "[74] Mingjia Yin, Chuhan Wu, Yufei Wang, Hao Wang, Wei Guo, Yasheng Wang, Yong Liu, Ruiming Tang, Defu Lian, and Enhong Chen. Entropy law: The story behind data compression and llm performance. arXiv preprint arXiv:2407.06645, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.216, + 0.855, + 0.247 + ], + "angle": 0, + "content": "[75] Mengxia Yu, De Wang, Qi Shan, Colorado Reed, and Alvin Wan. The super weight in large language models. ArXiv, abs/2411.07191, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.249, + 0.855, + 0.279 + ], + "angle": 0, + "content": "[76] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? In Annual Meeting of the Association for Computational Linguistics, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.282, + 0.687, + 0.298 + ], + "angle": 0, + "content": "[77] Biao Zhang and Rico Sennrich. Root mean square layer normalization, 2019." 
+ }, + { + "type": "list", + "bbox": [ + 0.144, + 0.091, + 0.855, + 0.298 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "20" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.145, + 0.09, + 0.493, + 0.109 + ], + "angle": 0, + "content": "A Contributions and Acknowledgments" + }, + { + "type": "text", + "bbox": [ + 0.145, + 0.121, + 0.856, + 0.164 + ], + "angle": 0, + "content": "Core Contributors Yichun Yin, Wenyong Huang, Kaikai Song, Yehui Tang, Xueyu Wu, Wei Guo, Peng Guo, Yaoyuan Wang, Xiaojun Meng, Yasheng Wang, Dong Li, Can Chen, Dandan Tu, Yin Li, Fisher Yu, Ruiming Tang, Yunhe Wang" + }, + { + "type": "text", + "bbox": [ + 0.145, + 0.169, + 0.856, + 0.24 + ], + "angle": 0, + "content": "Contributors Baojun Wang, Bin Wang, Bo Wang, Boxiao Liu, Changzheng Zhang, Duyu Tang, Fei Mi, Hui Jin, Jiansheng Wei, Jiarui Qin, Jinpeng Li, Jun Zhao, Liqun Deng, Lin Li, Minghui Xu, Naifu Zhang, Nianzu Zheng, Qiang Li, Rongju Ruan, Shengjun Cheng, Tianyu Guo, Wei He, Wei Li, Weiwen Liu, Wulong Liu, Xinyi Dai, Yonghan Dong, Yu Pan, Yue Li, Yufei Wang, Yujun Li, Yunsheng Ni, Zhe Liu, Zhenhe Zhang, Zhicheng Liu" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.508, + 0.948 + ], + "angle": 0, + "content": "21" + } + ] +] \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07866/db4d652b-f5d0-4008-97fa-5ff3dca4208f_origin.pdf b/data/2025/2504_07xxx/2504.07866/db4d652b-f5d0-4008-97fa-5ff3dca4208f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..24fd4ddee9740cc86953f4791749ab0ac77416f0 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/db4d652b-f5d0-4008-97fa-5ff3dca4208f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b176b75dd53628f20f97b7133b8b90ff5aeadd23341d7c84eb9ed4060be4d3ba +size 2623414 diff --git a/data/2025/2504_07xxx/2504.07866/full.md 
b/data/2025/2504_07xxx/2504.07866/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d2c0a646653f5b5fec5d1e2aeceb19dce104f7eb --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/full.md @@ -0,0 +1,440 @@ +# PANGU ULTRA: PUSHING THE LIMITS OF DENSE LARGE LANGUAGE MODELS ON ASCEND NPUS + +Pangu Team, Huawei + +PanguTech@huawei.com + +# ABSTRACT + +We present Pangu Ultra, a Large Language Model (LLM) with 135 billion parameters and dense Transformer modules trained on Ascend Neural Processing Units (NPUs). Although the field of LLMs has witnessed unprecedented advances in model scale and capability in recent years, training such a large-scale model still involves significant optimization and system challenges. To stabilize the training process, we propose depth-scaled sandwich normalization, which effectively eliminates loss spikes during the training of deep models. We pre-train our model on 13.2 trillion diverse and high-quality tokens and further enhance its reasoning capabilities during post-training. To perform such large-scale training efficiently, we utilize 8,192 Ascend NPUs with a series of system optimizations. Evaluations on multiple diverse benchmarks indicate that Pangu Ultra significantly advances the state-of-the-art capabilities of dense LLMs such as Llama 405B and Mistral Large 2, and even achieves competitive results with DeepSeek-R1, whose sparse architecture contains many more parameters. Our exploration demonstrates that Ascend NPUs are capable of efficiently and effectively training dense models with more than 100 billion parameters. Our model and system will be available for our commercial customers. + +# 1 Introduction + +Large Language Models (LLMs) have transformed the landscape and our understanding of Artificial Intelligence. Their remarkable capabilities are enabling more and more AI applications, bringing numerous commercial opportunities. 
Unsurprisingly, teams are racing to push the scaling law to create models with more and more parameters. Although the Transformer [68] structure is a popular choice for large models, it is still debatable whether the models should be sparse or dense. With more than 100 billion parameters, sparse architectures powered by Mixture of Experts (MoE), such as DeepSeek [46, 19], have demonstrated surreal human-like language and thinking abilities [36], which makes sparse models a popular choice when pushing the limit of LLMs. + +At the same time, dense models, such as the Qwen [11, 72], Llama [25], and Gemma [67] series, are currently popular among models with fewer than 100 billion parameters thanks to their strong performance in specific skills and ease of deployment. The parameters in dense models are usually easier to optimize, while the dynamic components in sparse models usually require additional heuristics for stable training. In addition, the dense model structure at inference time makes it easier to optimize system performance due to deterministic parameter usage. In this study, we aim to further explore the potential of dense models at large scales and show that their performance can be on par with state-of-the-art MoE models on diverse tasks. + +The numbers of model parameters and layers are two crucial dimensions to release the full potential of dense models. While model parameter count is critical for model performance and plays a central role in scaling laws [38], recent studies [73, 50] suggest that model depth has a significant impact on reasoning capabilities. However, exploring the limits of these two aspects poses significant challenges. Deeper models usually introduce unstable training, manifested as spikes in training loss curves. Experimental observations suggest that those spikes can knock our model out of the ideal parameter landscape and cause irreparable damage to the training process. 
Meanwhile, training hundreds of billions of parameters in dense models requires orchestrating thousands of AI processors, which poses significant system efficiency challenges. + +For our exploration, we introduce Pangu Ultra, a dense Transformer architecture with 135 billion parameters and 94 layers. The model setup is at the forefront scale of the top-performing dense models [11, 72, 25, 67]. Regarding the challenges of training deep models, we hypothesize that the loss spikes are due to gradient fluctuations, which in turn hinder convergence rates and may lead to training divergence. Therefore, we propose two techniques, the depth-scaled sandwich norm and tiny initialization, both of which are designed to maintain stable gradient norms. Specifically, we first replace the pre-layer norm [47] with the sandwich norm [20] and scale the initialization values in the post-layer normalization based on the model's depth. This depth-based adjustment helps control the range of gradient fluctuations effectively. In addition, we scale the standard deviation of weight initialization according to the model's width and depth, leading to tiny initialization. These two techniques lead to more stable gradients throughout the training process, eliminating loss spikes during the training of Pangu Ultra and improving overall model performance. + +In practice, we pre-train Pangu Ultra on 13.2 trillion tokens of our built corpus. In the pre-training stage, we use three phases of data, each with a distinct recipe. The design principle behind the three phases is to first help the model develop knowledge and linguistic ability, then to directly equip it with reasoning ability, and finally to reinforce its capacity for actively learning to reason. The model context window is gradually extended from 4K to 128K. In the post-training stage, we begin by applying efficient supervised fine-tuning (SFT) for a cold start, utilizing a carefully curated set of instruction data. 
Following this, Pangu Ultra undergoes further optimization through Reinforcement Learning (RL). The overall training of Pangu Ultra is stable in this process. + +To handle large-scale model training of more than 100 billion parameters, we utilize a large-scale computing cluster consisting of 8,192 Ascend NPUs and employ a series of system optimizations to improve efficiency. The primary challenge is minimizing pipeline bubbles [29] at large scales, which arise due to batch size constraints [35]. We take advantage of the four typical types of parallelism on our Ascend cluster, that is, Data Parallelism (DP), Tensor Parallelism (TP) [63], Sequence Parallelism [39], and Pipeline Parallelism (PP) [30, 51]. As the training cluster scales up, the mini-batch size allocated to each DP group decreases, leading to an increased pipeline bubble ratio. To mitigate this issue, we employ additional virtual pipeline (VPP) scheduling [52] with fine-grained tuning to ensure load balancing, reducing the PP bubble ratio from $30.45\%$ to $6.8\%$ . The second challenge is to achieve high training efficiency for long sequences. Both attention mask generation and self-attention computation are time- and memory-intensive, particularly for long contexts. We utilize an NPU Fusion Attention (NFA) operator [4, 18, 17] tailored for the Ascend NPUs, which supports reset-attention-mask scenarios and eliminates the need to construct the attention mask before calling the NFA, thus improving computational efficiency and reducing memory cost. With these fine-grained system optimizations, we achieve a Model FLOPs Utilization (MFU) [14] of over $50\%$ when training Pangu Ultra on 8,192 Ascend NPUs. + +On public evaluation benchmarks, Pangu Ultra outperforms existing dense LLMs including Llama 405B and Mistral Large 2 123B on almost all major language tasks, and achieves competitive results with sparse models consisting of more than 500 billion parameters. 
These results indicate that the potential of dense models remains promising to explore. Pangu Ultra also demonstrates that the Ascend NPUs are suitable for exploring the full capabilities of large-scale dense language models. + +# 2 Model Architecture + +The basic architecture of Pangu Ultra is similar to Llama 3 [25]. It has 135 billion parameters with a hidden dimension of 12,288, a SwiGLU [60] feed-forward network (FFN) intermediate size of 28,672, and 94 layers. The attention blocks in Pangu Ultra leverage Group Query Attention (GQA) to reduce KV-cache size by incorporating 96 query heads and 8 KV heads. + +There are two crucial differences to address the fundamental challenges of training stability and convergence in large dense LLMs. We propose Depth-Scaled Sandwich-Norm to replace the layer normalization and TinyInit for parameter initialization. By integrating these techniques, Pangu Ultra achieves substantial improvements over previous dense models. + +# 2.1 Depth-Scaled Sandwich-Norm + +Large-scale dense models typically adopt deeper architectures [22], although MoE models usually scale in width [19]. However, increased depth introduces greater challenges in maintaining training stability. Given the prohibitive cost of pre-training, stable training of large dense LLMs becomes paramount. Pre-Layer Normalization (Pre-LN) has been found to make back-propagation more efficient for deep Transformers [69], leading to its widespread adoption in Transformer-based large language model (LLM) architectures [22, 11, 19]. + +However, in models employing the pre-LN structure, the fluctuating output scale of each sub-layer can easily lead to training instability [66]. To address this issue, the sandwich norm [20] applies a layer normalization to each sub-layer's output prior to the residual connection. 
While the sandwich-norm maintains the scale stability of individual sub-layer outputs, the progressive accumulation of output norms via residual connections across multiple layers may nevertheless lead to training instability. + +To mitigate this, we present the depth-scaled sandwich norm, which integrates the sandwich norm with a depth-scaled initialization scheme. The layer normalization regulates layer-wise output magnitudes through trainable gamma parameters, which are initialized with values scaled proportionally to the inverse square root of the network depth. Figure 1 illustrates the differences between the depth-scaled sandwich-norm and pre-norm architectures. The formula of the depth-scaled sandwich-norm is + +$$
\mathbf{h} \leftarrow \mathbf{h} + \operatorname{Norm}\left(\gamma_{\mathrm{attn}}, \operatorname{ATTN}(\operatorname{Norm}(\mathbf{h}))\right), \quad \gamma_{\mathrm{attn}} = \frac{c_{\mathrm{attn}}}{\sqrt{L}}, \tag{1}
$$ + +$$
\mathbf{h} \leftarrow \mathbf{h} + \operatorname{Norm}\left(\gamma_{\mathrm{mlp}}, \operatorname{MLP}(\operatorname{Norm}(\mathbf{h}))\right), \quad \gamma_{\mathrm{mlp}} = \frac{c_{\mathrm{mlp}}}{\sqrt{L}},
$$ + +where $L$ is the number of layers, and $c_{\mathrm{attn}}$ and $c_{\mathrm{mlp}}$ are set as the initial output standard deviations of the attention layer and feed-forward network (FFN) layer, respectively. For Pangu Ultra, we set $c_{\mathrm{attn}}$ to 0.283 and $c_{\mathrm{mlp}}$ to 0.432. + +![](images/4eb945e428bda4bc2842653b548c0700ebd7824c868c06942ff9fe8b6fda3cf9.jpg) +Figure 1: Structure comparison between Pre-Layer Norm (Pre-LN) and Depth-Scaled Sandwich-Norm (DSSN). DSSN applies normalization layers both before and after the attention and FFN blocks, while Pre-LN only utilizes one normalization layer. DSSN also employs a depth-scaled initialization scheme, which is not in the original sandwich norm. 
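As a concrete illustration, Eq. (1) can be sketched in a few lines of NumPy. The RMS-style `norm` and the identity stand-in for the attention/FFN sub-layers are simplifying assumptions for this sketch, not the production implementation; the TinyInit line anticipates the width-and-depth scaling described in Section 2.2.

```python
import numpy as np

L, d = 94, 12288              # Pangu Ultra depth and hidden dimension
c_attn, c_mlp = 0.283, 0.432  # initial output stds from Section 2.1

def norm(x, gamma):
    # RMS-style normalization with a trainable per-channel scale gamma
    return gamma * x / np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + 1e-8)

def dssn_sublayer(h, sublayer, gamma):
    # Eq. (1): h <- h + Norm(gamma, SubLayer(Norm(h)))
    ones = np.ones(h.shape[-1])  # inner (pre-)norm scale, initialized to 1
    return h + norm(sublayer(norm(h, ones)), gamma)

# Depth-scaled initialization of the outer (post-)norm scales: c / sqrt(L)
gamma_attn = np.full(d, c_attn / np.sqrt(L))
gamma_mlp = np.full(d, c_mlp / np.sqrt(L))

# TinyInit (Section 2.2): weight std scaled by both width and depth
tiny_std = np.sqrt(1.0 / (2 * d * L))
```

With an identity sub-layer, `dssn_sublayer(h, lambda x: x, gamma_attn)` adds a residual branch whose scale is only about `c_attn / sqrt(L)`, roughly 0.03 of a unit-RMS signal, which is the point of the depth scaling.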
+ 

![](images/8c2b76136987d8146ba87fcdd40ec48bbd7f765998e79609c7c6138eeb85aad7.jpg) + +# 2.2 Model Initialization + +Existing works [53] observe that model initialization plays a crucial role in training stability and performance. Transformer-based LLMs widely adopt small initialization [53], which initializes all the weights with a normal distribution of standard deviation $\sqrt{\frac{2}{5d}}$ , where $d$ is the hidden dimension. It is also common practice to scale the weights of residual layers at initialization by a factor of $1 / \sqrt{L}$ [57], where $L$ is the number of layers. + +Our findings suggest that scaling initialization by both model depth and width, using $\sqrt{\frac{1}{2dL}}$ , leads to faster loss convergence and improved performance on downstream tasks. We call this initialization method TinyInit. We hypothesize that TinyInit achieves more consistent parameter scales across the model, which may facilitate optimization and convergence. + +Research [66] indicates that embedding layers require different initialization strategies compared to other layers. Specifically, maintaining the standard deviation of embedding weights close to 1 may enhance training
+ +Pangu Ultra adopts a domain-aware vocabulary strategy. We perform independent frequency analyses across multiple domains including general Chinese, general English, code, and mathematics, generating distinct domain-specific vocabularies. These vocabularies are then merged and de-duplicated to form a unified vocabulary of 153,376 unique tokens, maintaining balanced representation across domains while preserving overall compression efficiency. Table 1 summarizes the detailed token distribution across different domains. + +Table 1: Token distribution in the unified vocabulary of Pangu Ultra. + +
| Domain | Number of Tokens | Percentage (%) |
| --- | --- | --- |
| English | 68,017 | 44.35 |
| Chinese | 41,053 | 26.77 |
| Other | 30,573 | 19.93 |
| Latin-based languages | 4,507 | 2.94 |
| Arabic | 2,755 | 1.80 |
| Korean | 2,733 | 1.78 |
| Mathematics | 2,139 | 1.39 |
| Japanese | 1,599 | 1.04 |
| Total | 153,376 | 100.00 |
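The merge-and-de-duplicate step of the domain-aware strategy can be sketched as follows. Whitespace tokens and the tiny corpora stand in for real per-domain BPE/SentencePiece training, so the names and sizes here are illustrative assumptions only:

```python
from collections import Counter

# Toy per-domain corpora; in practice each domain would get a full
# BPE/SentencePiece run, not a whitespace split.
domains = {
    "general_english": ["the model trains and the model predicts"],
    "code": ["def add(x, y): return x + y"],
    "math": ["x ^ 2 + y ^ 2 = z ^ 2"],
}

def domain_vocab(texts, size):
    """Stand-in for a per-domain vocabulary: the `size` most frequent tokens."""
    freq = Counter(tok for t in texts for tok in t.split())
    return [tok for tok, _ in freq.most_common(size)]

def merge_vocabs(per_domain):
    """Merge domain vocabularies and de-duplicate, preserving first occurrence."""
    merged, seen = [], set()
    for vocab in per_domain:
        for tok in vocab:
            if tok not in seen:
                seen.add(tok)
                merged.append(tok)
    return merged

unified = merge_vocabs(domain_vocab(texts, size=8) for texts in domains.values())
```

Because every domain contributes its own top tokens before the merge, low-volume domains such as math keep a guaranteed share of the unified vocabulary, which is exactly the imbalance the frequency-over-everything approach suffers from.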
+ 

# 3 Model Training

In this section, we present our training pipeline, which is similar to that of state-of-the-art language models, e.g., DeepSeek-V3 [19] and Llama 3 [22]. The training process consists of three main stages: pre-training, long context extension, and post-training. Each stage has specific training strategies and data construction methods to gradually enhance the model capabilities. + +# 3.1 Pre-training Stage + +We first introduce the data construction in the pre-training of Pangu Ultra, followed by the details of data quality assessment. We then describe the practical approach for long context extension. The detailed pre-training hyper-parameters are presented last. + +# 3.1.1 Data Construction + +The pre-training corpus of Pangu Ultra contains 13.2T high-quality and diverse tokens produced by our tokenizer, as described in Section 2.3. Table 2 shows that the pre-training process is structured into three sequential phases: the general phase, the reasoning phase, and the annealing phase. These phases are designed to progressively develop general knowledge and linguistic capabilities, enhance reasoning skills, and further refine knowledge and behavior, respectively. The amounts of data used in the three phases are 12T (comprising 7.4T and 4.6T tokens in two distinct sub-phases), 0.8T, and 0.4T tokens, respectively. + +In the initial general training phase, we utilize a corpus focused on developing broad linguistic capabilities and general knowledge. This stage primarily consists of English and Chinese data collected from a diverse range of sources, including web pages, books, encyclopedias, etc. Data from multilingual and various industrial domains is also incorporated. Based on our data quality assessment in Section 3.1.2, we prefer to use higher-quality data in the second sub-phase than in the first. + +Table 2: Data recipe of Pangu Ultra pre-training. + +
| Dataset | General | Reasoning | Annealing |
| --- | --- | --- | --- |
| General English | 54% | 14% | 21% |
| General Chinese | 13% | 6% | 20% |
| Multi-lingual | 8% | 4% | 3% |
| Instruction | 2% | 11% | 20% |
| Math | 6% | 28% | 18% |
| Code | 17% | 37% | 18% |
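Read as sampling weights, the recipe in Table 2 amounts to a per-phase categorical distribution over data sources; a minimal sketch (the phase and source names are informal labels for this illustration):

```python
import random

# Per-phase sampling weights, in percent, taken from Table 2.
RECIPE = {
    "general":   {"general_english": 54, "general_chinese": 13, "multi_lingual": 8,
                  "instruction": 2, "math": 6, "code": 17},
    "reasoning": {"general_english": 14, "general_chinese": 6, "multi_lingual": 4,
                  "instruction": 11, "math": 28, "code": 37},
    "annealing": {"general_english": 21, "general_chinese": 20, "multi_lingual": 3,
                  "instruction": 20, "math": 18, "code": 18},
}

def sample_source(phase: str, rng: random.Random) -> str:
    """Draw the source of the next training document according to the phase mix."""
    sources, weights = zip(*RECIPE[phase].items())
    return rng.choices(sources, weights=weights, k=1)[0]
```

Each column of the table sums to 100, and math plus code together pass 60% only in the reasoning phase (28% + 37% = 65%), matching the over-60% figure given for that phase.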
+ 

In the second reasoning phase, we increase the proportion of high-quality and diverse mathematical and coding data, raising it to over $60\%$ of the corpus to enhance the reasoning capabilities of Pangu Ultra. The coding data includes both pure code and mixed text-code samples. The math data also includes substantial English and Chinese text. Moreover, LLM-generated synthetic data is widely incorporated to enrich the corpus. + +The third annealing phase is designed to help the model consolidate and effectively apply the knowledge and reasoning skills acquired in the previous stages. Therefore, we place greater emphasis on instruction data, which accounts for approximately $20\%$ of the corpus. We curate in-house question banks covering a wide range of topics and construct both short and long chain-of-thought (CoT) responses. These reasoning paths are carefully refined to ensure clarity and logical coherence. + +Overall, the pre-training data for Pangu Ultra is carefully designed to ensure high quality, diversity, and minimal redundancy. We assign quality and difficulty labels to the data and adopt a curriculum-based sampling strategy for the reasoning data across all three phases, progressing from simpler examples to more complex ones throughout the training cycle. + +# 3.1.2 Data Quality Assessment + +Data quality assessment plays a crucial role in ensuring the overall quality of the corpus. The training of Pangu Ultra employs both rule-based heuristics and model-based evaluation to enhance data quality. + +For model-based quality assessment, we leverage the Pangu series as the base model. To better align quality evaluation with human value judgments, we fine-tune the model using a manually annotated dataset. The fine-tuned evaluator is then applied to a large-scale pre-training corpus exceeding 10T tokens. Data samples are scored across multiple dimensions, including cleanliness, fluency, educational value, and richness. 
These annotated scores are then used in a prioritized sampling strategy, where higher-quality samples are assigned higher sampling probabilities. + +To validate the effectiveness of our data quality assessment, we conducted an ablation study using a proxy model with 2.6 billion parameters. Empirical results show that, to achieve comparable performance, the model trained on low-scoring data required $1.6 \times$ more tokens than the one trained on high-scoring data. Therefore, high data quality is important for improving training efficiency. + +# 3.1.3 Pre-training Parameters + +Pangu Ultra is trained using the AdamW optimizer [48] with a weight decay of 0.1 and an epsilon of $1 \times 10^{-8}$ . The momentum parameters are set to $\beta_{1} = 0.9$ and $\beta_{2} = 0.95$ . The gradient clipping norm is set to 1.0. To improve the training stability and overall performance, the pre-training of Pangu Ultra is organized into the following phases: + +**0T-7.4T tokens** The sequence length is set to 4K (RoPE base $= 1 \times 10^{4}$ ). The batch size increases from 1,024 to 1,536 (at 1.2T) and 2,048 (at 1.9T). The increased batch size improves training efficiency and throughput. The learning rate follows a cosine decay from $1 \times 10^{-4}$ to $1 \times 10^{-5}$ with 4,000 warmup steps to ensure stable early training. + +**7.4T-12.0T tokens** The sequence length remains at 4K with a batch size of 2,048. The learning rate is fixed at $1 \times 10^{-5}$ in this phase. + +**12.0T-12.8T tokens** The sequence length increases to 8K (RoPE base $= 1 \times 10^{5}$ ). The batch size is reduced to 1,536. The learning rate decays from $1 \times 10^{-5}$ to $7.5 \times 10^{-6}$ using cosine scheduling. + +# 3.2 Long Context Extension + +The ability of LLMs to understand long context inputs is critical for long-thinking processes and practical applications. In the final stages of pre-training, Pangu Ultra is trained on long sequence data to support a maximum context length of 128K. 
The training consists of two progressive phases: the first phase expands the context length to 32K, and the second phase further expands it to 128K. + +Rotary Position Embedding (RoPE) [64] is the core module for supporting ultra-long input sequences. Existing open-source LLMs typically extend context length either by increasing the base frequency in RoPE [64, 32] or by adopting methods such as YaRN [55, 22, 19]. Our findings show that both methods perform similarly well if the hyper-parameters are correctly chosen, and we adopt the increased base frequency method in Pangu Ultra. To determine the base frequency in RoPE for long-context extension, we evaluate the offline performance of "Needle In A Haystack" (NIAH) with different base frequencies at the target sequence length, and select the one with the best result. This ensures a relatively low initial loss in long-context training. In practice, the selected base frequency for $32\mathrm{K}$ is $1.6\times 10^{6}$ , and for $128\mathrm{K}$ it is $2.56\times 10^{7}$ . Detailed hyper-parameters of Pangu Ultra long context training are summarized below: + +**8K to 32K phase** The sequence length is expanded to 32K (RoPE base $= 1.6 \times 10^{6}$ ). The batch size is 384 with a learning rate of $7.5 \times 10^{-6}$ , matching the final learning rate from the preceding pre-training stage. + +**32K to 128K phase** The sequence length is further expanded to $128\mathrm{K}$ (RoPE base $= 2.56 \times 10^{7}$ ). The batch size is reduced to 96. The learning rate remains $7.5 \times 10^{-6}$ . + +# 3.3 Post-training Alignment + +In the post-training stage, Pangu Ultra is aligned with human preferences through Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL). This stage focuses on constructing high-quality, diverse instruction data and designing scalable, efficient training strategies. + +# 3.3.1 Post-training Data + +In constructing post-training data, we emphasize data quality, diversity, and complexity. 
The data pool is curated from a wide range of domains and task types, including general question answering, AI-generated content (AIGC), text classification and analysis, programming, mathematics, logical reasoning, and tool usage. These tasks cover application areas such as finance, healthcare, and public services. Data sources span open-source instruction datasets, real-world industrial queries, and synthetic problems derived from the pre-training corpus. + +To promote data diversity, data samples are selected along two orthogonal dimensions, guided by the entropy law [74]: domain and task type. Hierarchical tagging models with varying levels of granularity are used to support balanced data sampling. Data quality is managed through a combination of rule-based validation and model-based validation, which helps eliminate low-quality or ambiguous samples. + +To better stimulate the reasoning capabilities of Pangu Ultra, a large portion of the post-training data, approximately six-sevenths, consists of reasoning tasks such as mathematics, coding, and logic. The post-training data covers a range of complexities, with a focus on moderately to highly challenging tasks. + +# 3.3.2 Post-training Strategy + +In the post-training stage, Pangu Ultra was first trained with SFT to establish preliminary instruction-following capabilities. Following SFT, we apply RL with outcome-based reward signals to further enhance reasoning, alignment, and instruction-following abilities of Pangu Ultra. + +We implement a latency-tolerant reinforcement learning framework optimized for the Ascend infrastructure, which will be detailed in a future report. The framework enables efficient large-scale policy optimization on Ascend. To guide the RL process, we implement a hybrid reward system that provides task-specific feedback for mathematics, coding, and general problem-solving. 
This hybrid reward system combines deterministic reward signals and model-based evaluations to facilitate stable and efficient policy optimization. + 

# 4 Training System + 

Training Pangu Ultra, with 135B parameters on 13.2 trillion tokens, requires ensuring training stability and efficiency on a large-scale computing cluster. In this section, we elaborate on the details of our training system from two important perspectives: parallelization strategies (Section 4.2) and system-level optimization techniques (Section 4.3). Overall, we achieve over $52\%$ Model FLOPs Utilization (MFU) when training Pangu Ultra on 8,192 Ascend NPUs. + 

# 4.1 Computing Setup + 

A computing cluster with 8,192 Ascend Neural Processing Units (NPUs) [5, 6] is deployed to train Pangu Ultra. Each node in the cluster houses 8 NPUs, interconnected via the Huawei Cache Coherence System (HCCS) using a full-mesh topology, and each device is equipped with 64 GB of memory. Inter-node communication is facilitated through an RDMA over Converged Ethernet (RoCE) fabric, leveraging 200 Gbps interconnects for communication between NPUs across different nodes. + 

# 4.2 Parallelism Strategies for Model Scaling + 

In order to scale model training, we leverage a combination of different parallelism strategies to distribute the model across multiple NPUs, including Data Parallelism (DP) [43], Tensor Parallelism (TP) [63], Sequence Parallelism (SP) [39], and Pipeline Parallelism (PP) [30, 51]. For Pangu Ultra, 128-way DP with ZeRO [58] is performed to reduce the memory cost of model parameters and the associated optimizer states. 8-way TP is applied to leverage the high intra-node bandwidth for efficient activation transfer, while 8-way PP is adopted to utilize inter-node connections, since it only requires transmitting activations at the partition boundaries.
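The device count implied by this DP x TP x PP layout can be sanity-checked with a few lines. The per-rank optimizer-state figure below is a rough illustration under an assumed fp32 Adam state of ~12 bytes per parameter sharded across DP ranks, not the paper's exact memory accounting:

```python
def devices_required(dp: int, tp: int, pp: int) -> int:
    """Total accelerators needed for a DP x TP x PP layout."""
    return dp * tp * pp

def optimizer_state_per_rank_gb(params_b: float, dp: int, bytes_per_param: int = 12) -> float:
    """Approximate per-rank optimizer-state memory (GB) under ZeRO-style
    sharding across DP ranks, assuming ~12 bytes/param of fp32 Adam state
    (an illustrative assumption, not the exact accounting used in training)."""
    return params_b * bytes_per_param / dp

print(devices_required(128, 8, 8))               # 8192, matching the cluster size
print(optimizer_state_per_rank_gb(135, 128))     # ~12.7 GB of optimizer state per rank
```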
However, as mentioned in existing studies [35, 30, 51, 56], pipeline parallelism encounters severe PP bubbles when the training cluster scales up, primarily due to batch size constraints [35]. For one-forward-one-backward (1F1B) PP scheduling, the bubble ratio is defined as $\frac{p - 1}{p - 1 + n}$ , where $p$ represents the number of pipeline stages and $n$ denotes the number of micro batches per DP group. The ratio represents the idle time of accelerators, as shown in Figure 2. A large-scale training cluster increases the number of DP groups, which in turn reduces the number of micro batches assigned to each DP group due to batch size constraints, leading to a significant increase in the bubble ratio. Therefore, minimizing the bubble ratio is crucial for improving system efficiency. Under such circumstances, we employ interleaved pipeline-parallel scheduling with 6-way virtual PP (VPP) stages on each device [52] and reduce the bubble ratio from $30.45\%$ to $6.8\%$ . Through careful tuning of load balancing across PP and VPP stages, we achieve approximately $43\%$ MFU on an 8,192-NPU cluster as a baseline. + 

![](images/3b34ebb39e3da6d7ff8bbd2f5b9f48782f7c2d993f79bfac7786efcf3d058b73.jpg)
Figure 2: Pipeline parallelism and the interleaved pipeline-parallel scheduling. + 

# 4.3 System Optimization + 

Building on the optimizations outlined in Section 4.2 that achieved $43\%$ MFU, additional system-level enhancements are implemented to push training efficiency further. Through a combination of kernel fusions, context parallelism via subsequence partitioning, data caching and sharing mechanisms, and other refinements, Pangu Ultra benefits from a significant improvement in training efficiency. These comprehensive optimizations enable the system to achieve over $52\%$ MFU, an improvement of 9 percentage points over the baseline configuration described in Section 4.2.
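The bubble-ratio figures quoted in Section 4.2 can be reproduced with a short calculation. Here $p = 8$ and $v = 6$ come from the text, while the micro-batch count $n = 16$ is our assumption, chosen because it is consistent with the reported ratios:

```python
def bubble_ratio(p: int, n: int, v: int = 1) -> float:
    """1F1B pipeline bubble ratio (p - 1) / (p - 1 + v * n).
    p: pipeline stages, n: micro batches per DP group, v: virtual (interleaved)
    stages per device. Interleaving shrinks each stage's per-chunk work by v,
    which is equivalent to scaling the effective micro-batch count by v."""
    return (p - 1) / (p - 1 + v * n)

p, n = 8, 16   # n is an assumption inferred from the reported numbers
print(f"1F1B:        {bubble_ratio(p, n):.1%}")        # ~30.4%
print(f"interleaved: {bubble_ratio(p, n, v=6):.1%}")   # ~6.8%
```

Note how growing the DP degree (and thus shrinking `n` at a fixed global batch size) inflates the ratio, which is exactly the scaling problem the interleaved schedule mitigates.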
+ 

![](images/67cea2b063522f19f7edd1b30826bee1c89190c8a74a7264b4d7d481914b5d6b.jpg)
(b) The MC2 implementation
Figure 3: A comparison of the default transformer computation and the MC2 method. Note that in actual training, communication and computation tasks are fused into a single kernel in MC2. + 

# 4.3.1 Kernel Fusion + 

Kernel fusion is widely adopted in LLM training to enhance efficiency. It combines multiple operations into a single kernel, reducing the number of data accesses to global memory [17]. During the training phase of Pangu Ultra, key operators are fused, resulting in significant improvements in hardware utilization and overall training efficiency. + 

MC2 - Merged Compute and Communication Tensor parallelism, when combined with sequence parallelism, introduces All-Gather (AG) and Reduce-Scatter (RS) communication operations for exchanging input and output activations across distributed devices. This approach exhibits a direct dependency between matrix multiplication (MatMul) and AG/RS communications, which fundamentally constrains the overlapping of TP communication with computational workflows. MC2 [2, 3] tackles this challenge by fusing MatMul computations with communication operations. It decomposes large computation and communication tasks into fine-grained subtasks and employs pipelined execution to maximize overlap between communication and computation. Thus, MC2 significantly reduces communication latency and improves hardware utilization (Figure 3). + 

NPU Fusion Attention Training LLMs on long sequences suffers from the quadratic memory and computational cost of self-attention as sequence length grows. To address these challenges, Flash Attention (FA) has emerged as a standard technique in LLM training owing to its superior performance [18, 17].
Pangu Ultra leverages a self-attention fusion operator, called NPU Fusion Attention (NFA) [9], which is specifically optimized for Ascend NPUs, offering system-level improvements across a wide range of self-attention computation scenarios. + 

![](images/1e0482fca83e3a0c110ebe1d09086c29da16050b788765f76847abe61a9728e5.jpg)
Figure 4: Examples of attention mask compression for the NFA operator. + 

It is worth mentioning that Pangu Ultra uses a reset attention mask strategy to prevent self-attention between different documents within a sequence. This requires calculating the corresponding attention mask for every sequence, leading to significant memory and computational overhead. To mitigate the time and memory requirements of generating attention masks, the NFA operator employs a mask compression optimization. As shown in Figure 4, NFA utilizes a $2048 \times 2048$ causal mask as a template to construct the computational mask within the fusion attention operator. In every iteration, Pangu Ultra retrieves the actual sequence lengths based on the positions of the end-of-document (eod) token, which are then provided as input to the NFA operator to accelerate the computation of self-attention. The detailed usage of NFA is provided in the Ascend documentation [9]. + 

Other Kernel Fusions for Efficiency In addition to MC2 and NPU-optimized fused attention, we also integrate a series of kernel fusion optimizations within key components such as RMSNorm [77], SwiGLU [60], and rotary positional embeddings (RoPE) [64], as well as critical processes including gradient accumulation and PP send/receive communications. These fusion operators are designed to reduce kernel launch and memory access overheads, while maintaining high numerical precision and enhancing overall training performance.
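A minimal sketch of the reset-attention-mask bookkeeping described above: deriving per-document boundaries from eod positions and building the equivalent block-diagonal causal mask. This is a host-side NumPy illustration only; the name `eod_id` is ours, and the real NFA operator consumes the length list rather than a dense mask:

```python
import numpy as np

def actual_seq_lens(tokens: np.ndarray, eod_id: int) -> list[int]:
    """Cumulative end positions of each document in a packed sequence."""
    ends = np.flatnonzero(tokens == eod_id) + 1
    if len(ends) == 0 or ends[-1] != len(tokens):
        ends = np.append(ends, len(tokens))  # trailing partial document
    return ends.tolist()

def reset_causal_mask(seq_len: int, ends: list[int]) -> np.ndarray:
    """Dense reference mask: attend iff causal AND within the same document."""
    pos = np.arange(seq_len)
    doc = np.searchsorted(ends, pos, side="right")   # document id per position
    causal = pos[:, None] >= pos[None, :]
    same_doc = doc[:, None] == doc[None, :]
    return causal & same_doc

toks = np.array([5, 7, 0, 9, 0, 3, 4])   # 0 plays the role of the eod token here
ends = actual_seq_lens(toks, eod_id=0)   # [3, 5, 7]
mask = reset_causal_mask(len(toks), ends)
print(ends, int(mask.sum()))
```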
+ 

![](images/16eacb740d6bf0b2784477cb2487f0ab1063e6a5012a50fc66bb0e95be1356a0.jpg)
(a) Original + 

![](images/44eff9e81d69a0bbd3268fc86564bba8580753fe6519428b8e8df3699bcc491e.jpg)
Causal Masking
(b) Megatron + 

![](images/ffe59b514cbf23bd67320295314345dd201247ecf63ba855135b76d9846748f8.jpg)
Reset of Attention Mask
(c) Megatron + 

![](images/8840be29e16484031a9e2441ba3bff2385b5b84697466d6f32a2ba9ddc05825e.jpg)
(d) Ours
Figure 5: Examples of the mechanism of sub-sequence partitioning for context parallelism. + 

# 4.3.2 Optimization for Long Context Training + 

Scaling long-context capabilities is becoming increasingly important for applications such as long document summarization and conversational AI. However, training on long sequences presents several challenges in terms of both time and memory complexity. To improve the efficiency of long-context training, we propose two key strategies, as outlined below. + 

Sub-Sequence Partitioning for Context Parallelism Context parallelism (CP) is a crucial approach for training on very long sequences, which divides the input sequence into segments to reduce memory consumption [44, 33]. Yet, with causal masking, simply splitting the sequence into $CP$ chunks results in a severely imbalanced workload for Ring Self-Attention (RSA) [44] (as shown in Figure 5(a)). Megatron-LM addresses this issue by splitting the sequence into $2 \times CP$ chunks, where each rank receives chunks from both the top and bottom, thus balancing the workload within a CP group (Figure 5(b)) [7]. However, this method still results in an imbalanced workload when the attention mask is reset (Figure 5(c)). Therefore, in training with 128K-long contexts, we propose a load-balanced partitioning strategy for CP training, where each rank is responsible for computing two chunks within each subsequence (Figure 5(d)).
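The balancing arithmetic behind the Megatron-style split can be checked directly: under a plain causal mask, chunk $j$ attends to $j + 1$ chunks, so pairing chunk $i$ with chunk $2 \cdot CP - 1 - i$ gives every rank the same total work. A sketch (our own simplified cost model, counting whole chunks as work units):

```python
def chunk_assignment(cp: int) -> list[tuple[int, int]]:
    """Megatron-style split into 2*cp chunks: rank i takes chunk i from the
    top of the sequence and chunk 2*cp-1-i from the bottom."""
    return [(i, 2 * cp - 1 - i) for i in range(cp)]

def causal_workload(chunks: tuple[int, int]) -> int:
    """Attention cost of a chunk under a plain causal mask is proportional to
    the number of chunks at or before it (chunk index + 1)."""
    return sum(c + 1 for c in chunks)

cp = 4
loads = [causal_workload(pair) for pair in chunk_assignment(cp)]
print(loads)   # identical load on every rank -> balanced
```

With a reset attention mask the causal pattern restarts at every document boundary, so this global pairing no longer equalizes the per-rank cost; applying the same two-chunk pairing *within each subsequence*, as in Figure 5(d), restores the balance.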
+ 

Fast Mask Generation and Data Reuse When scaling the training sequence of Pangu Ultra up to 128K, the generation of the attention mask or the calculation of the actual sequence lengths still incurs a non-negligible performance overhead. Additionally, in the training scenario with reset attention masks, each VPP stage is required to retrieve the corresponding mask or actual sequence lengths in every iteration, resulting in redundant computations and increased overhead. We address these problems by (1) using efficient NPU operators to compute the attention mask, instead of constructing it on the CPU, thus accelerating mask generation and eliminating the need for data transfer between the CPU and NPU, and (2) enabling cross-VPP-stage mask sharing, where attention masks are generated by the first stage (VPP0) and shared across different VPP stages on the same rank, thereby avoiding redundant mask computations and memory cost. + 

# 5 Results + 

In this section, we discuss the evaluation results of Pangu Ultra, including pre-training performance and post-training outcomes. In addition, we provide comprehensive ablation studies that examine the model architecture, and further discuss our observations from training Pangu Ultra. + 

# 5.1 Pre-Training Loss Curve + 

Figure 6 shows the training loss curve of Pangu Ultra during the entire pre-training. Each segment in the loss curve corresponds to one training stage, as described in Section 3.1.3. The loss curves demonstrate consistent descending trends across all training stages. In the second interval, although the descent rate moderated due to a constant learning rate, the performance metrics continued to show steady improvement. + 

![](images/4cef587854556619f00415671ee67e04874b43df92f0b322a5c7d7d9f318d9e9.jpg)
Figure 6: The training loss curve of Pangu Ultra during the pre-training stage. + 

Zero loss spike As shown in Figure 6, no loss spikes occur throughout the entire pre-training process.
While such spikes are common in LLM training [66], their absence here underscores the importance of our depth-scaled sandwich norm and TinyInit in ensuring stable training. The negative effect of loss spikes on model performance will be further elaborated in Section 5.4.1. + 

# 5.2 Pre-Training Stage + 

Benchmarks We evaluate the Pangu Ultra base model across multiple domains using open-source benchmarks, including language understanding, question answering, code generation, and math problem solving. The evaluation mainly uses English and Chinese test sets, with some additional multilingual benchmarks for broader coverage. + 

- Language understanding: We employ HellaSwag [76] and WinoGrande for contextual reasoning tasks, DROP [21], RACE [42], and the ARC [15] series for comprehensive reading comprehension evaluation, along with PIQA [12], Natural Questions [41] and TriviaQA [37] to assess knowledge retrieval.
- Question answering: The assessment includes C-Eval [31] for Chinese knowledge, MMLU [27] and its advanced variant MMLU-Pro [70] for English domain knowledge, supplemented by BigBenchHard [65] to evaluate creative problem-solving.
- Code generation and understanding: We utilize HumanEval [13] and MBPP [10] for standard code generation tasks, and CruxEval [26] for code understanding and reasoning. + 

- Mathematical reasoning: We measure skills with CMath [71] and GSM8K [16] for fundamental arithmetic and simple problems, MATH [28] for advanced mathematical reasoning, and MGSM [61] for multilingual math problem solving. + 

Baselines & Comparison Settings We compare Pangu Ultra against several strong baselines covering both dense models (Qwen2.5-72B, Llama-405B) and MoE architectures (DeepSeek-V3). For base models, the majority of our evaluations employ few-shot inputs, with a minority using zero-shot prompts. We evaluate most benchmarks with gold answers through exact matching, while employing execution-based verification for code generation tasks.
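For concreteness, exact-match scoring against gold answers can be sketched as follows; the normalization rules (lowercasing, stripping punctuation and articles) are illustrative assumptions and not the paper's exact protocol:

```python
import re
import string

def normalize(ans: str) -> str:
    """Normalize an answer string: lowercase, drop punctuation, drop English
    articles, collapse whitespace (illustrative rules, not the exact protocol)."""
    ans = ans.lower()
    ans = "".join(ch for ch in ans if ch not in string.punctuation)
    ans = re.sub(r"\b(a|an|the)\b", " ", ans)
    return " ".join(ans.split())

def exact_match(pred: str, gold: str) -> bool:
    """Score 1 iff the normalized prediction equals the normalized gold answer."""
    return normalize(pred) == normalize(gold)

print(exact_match("The Eiffel Tower.", "eiffel tower"))   # True
```

Code-generation benchmarks skip this string comparison entirely and instead execute the generated program against unit tests, which is what "execution-based verification" refers to above.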
+ 

Evaluation Results In Table 3, we compare the pre-trained base model of Pangu Ultra with other leading models. Overall, Pangu Ultra achieves state-of-the-art performance on most general English benchmarks and all Chinese benchmarks. While it trails DeepSeek V3 on code- and math-related tasks, it remains competitive in these domains. + 

A closer examination reveals that Pangu Ultra excels on Chinese benchmarks, surpassing both Qwen 2.5 72B and DeepSeek V3, the current best-performing Chinese model. In addition, when compared to Llama 3.1 405B, Pangu Ultra achieves better scores on most of the challenging benchmarks, while utilizing only about $29\%$ of the training FLOPs required by Llama 405B. These results suggest the effectiveness of our model architecture and the high quality of our training data. + 

Table 3: Comparison of Pangu Ultra and other representative models across a diverse set of benchmarks for evaluating language, coding and mathematical skills. Bold values represent the best result in each row, and underlined values indicate that Pangu Ultra is the best among the dense models.
| | Benchmark (Metric) | # Shots | Qwen2.5 72B Base | Llama-3.1 405B Base | DeepSeek V3 Base | Pangu Ultra Base |
|---|---|---|---|---|---|---|
| | Architecture | - | Dense | Dense | MoE | Dense |
| | # Activated Params | - | 72B | 405B | 37B | 135B |
| | # Total Params | - | 72B | 405B | 671B | 135B |
| English | BBH (EM) | 3-shot | 79.8 | 82.9 | 87.5 | 79.1 |
| | MMLU (EM) | 5-shot | 85.0 | 84.4 | 87.1 | 85.4 |
| | MMLU-Pro (EM) | 5-shot | 58.3 | 52.8 | 64.4 | 63.1 |
| | DROP (F1) | 3-shot | 80.6 | 86.0 | 89.0 | 61.0 |
| | ARC-Easy (EM) | 25-shot | 98.4 | 98.4 | 98.9 | 100.0 |
| | ARC-Challenge (EM) | 25-shot | 94.5 | 95.3 | 95.3 | 97.0 |
| | HellaSwag (EM) | 10-shot | 84.8 | 89.2 | 88.9 | 99.0 |
| | PIQA (EM) | 0-shot | 82.6 | 85.9 | 84.7 | 98.0 |
| | WinoGrande (EM) | 5-shot | 82.3 | 85.2 | 84.9 | 91.0 |
| | RACE-Middle (EM) | 5-shot | 68.1 | 74.2 | 67.1 | 97.0 |
| | RACE-High (EM) | 5-shot | 50.3 | 56.8 | 51.3 | 97.0 |
| | TriviaQA (EM) | 5-shot | 71.9 | 82.7 | 82.9 | 90.5 |
| | NaturalQuestions (EM) | 5-shot | 33.2 | 41.5 | 40.0 | 52.7 |
| | AGIEval (EM) | 0-shot | 75.8 | 60.6 | 79.6 | 80.4 |
| Code | HumanEval (Pass@1) | 0-shot | 53.0 | 54.9 | 65.2 | 81.1 |
| | MBPP (Pass@1) | 3-shot | 72.6 | 68.4 | 75.4 | 72 |
| | CRUXEval-I (EM) | 2-shot | 59.1 | 58.5 | 67.3 | 61.8 |
| | CRUXEval-O (EM) | 2-shot | 59.9 | 59.9 | 69.8 | 61.5 |
| Math | GSM8K (EM) | 8-shot | 88.3 | 83.5 | 89.3 | 89.3 |
| | MATH (EM) | 4-shot | 54.4 | 49.0 | 61.6 | 62.5 |
| | MGSM (EM) | 8-shot | 76.2 | 69.9 | 79.8 | 75.1 |
| | CMath (EM) | 3-shot | 84.5 | 77.3 | 90.7 | 78.2 |
| Chinese | CLUEWSC (EM) | 5-shot | 82.5 | 83.0 | 82.7 | 95.0 |
| | C-Eval (EM) | 5-shot | 89.2 | 72.5 | 90.1 | 90.3 |
| | CMMLU (EM) | 5-shot | 89.5 | 73.7 | 88.8 | 91.7 |
| | CMRC (EM) | 1-shot | 75.8 | 76.0 | 76.3 | 86.0 |
| | C3 (EM) | 0-shot | 76.7 | 79.7 | 78.6 | 99.0 |
| | CCPM (EM) | 0-shot | 88.5 | 78.6 | 92.0 | 93.0 |
+ 

# 5.3 Post-Training and Reasoning Capability + 

Benchmarks We conduct a comprehensive evaluation of Pangu Ultra's capabilities on reasoning and non-reasoning tasks: + 

- Sophisticated reasoning tasks encompass three specialized subcategories: mathematical competence measured by AIME 2024 [49] and MATH-500, coding competition performance measured by LiveCodeBench [34], and scientific reasoning measured by GPQA Diamond [59];
- General language comprehension and reasoning capabilities, represented by MMLU-Pro [24] and Arena Hard [45]. + 

Baselines & Comparison Settings We compare Pangu Ultra against strong baselines including GPT-4o-0513, the reasoning models DeepSeek-R1 and Hunyuan-T1, and the large dense models Qwen2.5-72B-Instruct and Mistral-Large 2. We use Pass@1 averaged over multiple independent runs as the evaluation metric to assess performance. + 

Evaluation Results In Table 4, we compare the evaluation results of Pangu Ultra with other baseline models. Pangu Ultra achieves state-of-the-art performance on the reasoning benchmarks including AIME 2024, MATH-500, GPQA and LiveCodeBench, while maintaining strong capabilities in general language comprehension tasks. + 

When compared to dense LLMs (Qwen and Mistral-Large 2), Pangu Ultra shows particularly significant advantages in reasoning tasks. This superior performance stems from the 0.8T tokens of reasoning-focused data used in pre-training (Section 3.1.3). The reasoning-enhanced base model substantially benefits subsequent post-training phases. + 

Table 4: Comparison of Pangu Ultra models and other representative models across benchmarks. $\dagger$ indicates results from Artificial Analysis [1].
| Model | AIME 2024 | MATH-500 | GPQA Diamond | LiveCodeBench | ArenaHard | MMLU-Pro |
|---|---|---|---|---|---|---|
| GPT-4o-0513 | 9.3 | 74.6 | 49.9 | 32.9 | 80.4 | 72.6 |
| Qwen2.5-72B | 16.0 | 83.1 | 49 | 27.6 | 81.2 | 72.0 |
| Mistral-Large 2† | 11.0 | 73.6 | 48.6 | 29.3 | - | 69.7 |
| Hunyuan-T1 | 79.8 | 96.2 | 69.3 | 64.9 | 91.9 | 87.2 |
| DeepSeek-R1 | 79.8 | 97.3 | 71.5 | 65.9 | 92.3 | 84.0 |
| Pangu Ultra | 80.8 | 97.4 | 74.2 | 66.5 | 91.5 | 84.4 |
+ 

# 5.4 Ablation Studies + 

This section presents additional ablation studies of the model architecture and analyzes key training behaviors to facilitate a deeper understanding and discussion of dense LLM training. + 

# 5.4.1 Depth-scaled Sandwich-norm + 

We conduct experiments to validate the effectiveness of depth-scaled sandwich norm compared to pre-norm architectures. Using a dense Transformer model with 13 billion parameters trained on 300 billion tokens, with identical hyperparameters for both configurations, we observe significant improvements. + 

Figure 7 shows that the depth-scaled sandwich-norm architecture stabilizes gradient norms and effectively eliminates loss spikes, leading to faster training convergence. We evaluate performance on two composite benchmarks: EN basic, consisting of multiple English benchmarks, and ZH basic, representing Chinese benchmarks. Additional evaluation using the LAMBADA [54] (English) and WPLC [23] (Chinese) next-token prediction tasks confirms the advantage of applying depth-scaled sandwich-norm. The results clearly suggest that preventing loss spikes during pre-training is crucial for optimal model performance. + 

To further ablate the effect of our depth-scaled factor in RMSNorm initialization, we compare with the plain sandwich-norm that does not have the $\sqrt{L}$ scaling factor in Eq. (1). Here, we use a proxy model containing 1.6 + 

![](images/a7350ff1b1e11fb1265c0e1d4a8e0cc37fad10ba9fa99b46a41d65427aa9f37d.jpg)
(a) Loss + 

![](images/f452483736d20db84cf41920e9224133306f09b056f6508e7ab535b3be175ddb.jpg)
(b) Gradient norm
Figure 7: Pre-training loss and gradient norm for a 13B model using Pre-LN and Depth-Scaled Sandwich-Norm (DSSN). The Pre-LN curves have significant spikes, which harm the trained model, while the DSSN curves are much smoother. + 

Table 5: Performance comparison between Pre-LN and Depth-scaled Sandwich-Norm.
| Model | Tokens (B) | EN basic | ZH basic | LAMBADA | WPLC |
|---|---|---|---|---|---|
| Pre-LN | 300 | 0.42 | 0.52 | 0.675 | 0.194 |
| Depth-scaled sandwich-norm | 300 | 0.45 | 0.54 | 0.693 | 0.224 |
+ 

billion parameters and 94 layers, which has the same depth as Pangu Ultra. Using this proxy model, we examine the effectiveness of sandwich-norm on training very deep Transformers. In Figure 8, we observe some loss spikes with the plain sandwich-norm, whereas our depth-scaled sandwich-norm trains smoothly and attains a lower loss. + 

![](images/9e86702bb026850de11bf3b69527034295140cde348309c4f15a0f509b0108b0.jpg)
Figure 8: Pre-training loss for a 94-layer 1.6B model using the original and depth-scaled sandwich-norm. The original sandwich-norm still suffers loss spikes during training. + 

# 5.4.2 Tiny Initialization + 

We conduct experiments to study the effectiveness of TinyInit proposed in Section 2.2. After being trained on 102 billion tokens, Pangu Ultra initialized with the TinyInit strategy, with standard deviation $\sqrt{\frac{1}{2dL}}$ , performs significantly better than the baseline model that utilizes traditional initialization, whose standard deviations are $\sqrt{\frac{2}{5d}}$ and $\sqrt{\frac{2}{5dL}}$ . The results are shown in Table 6. BIG-bench (aug) is a test set developed internally through data augmentation of the original BIG-bench, designed to mitigate the impact of test set leakage. + 

Table 6: Performance comparison of traditional initialization and TinyInit.
| Model | Tokens (B) | EN basic | ZH basic | LAMBADA | WPLC | C-Eval | MMLU | BIG-bench (aug) |
|---|---|---|---|---|---|---|---|---|
| Baseline | 102 | 0.444 | 0.538 | 0.694 | 0.229 | 0.476 | 0.473 | 0.357 |
| TinyInit | 102 | 0.456 | 0.537 | 0.727 | 0.257 | 0.524 | 0.502 | 0.384 |
+ 

# 5.4.3 Layer Statistics of Pangu Ultra + 

Stable activation scale Figure 9 presents the activation patterns of attention and FFN modules across Transformer layers, showing the mean, standard deviation, and top-1 activation values. The activation distributions demonstrate stability, with standard deviations maintaining consistent scales throughout pre-training while preserving a clear layer-wise pattern. Our analysis reveals the presence of "super activations", whose magnitude reaches the order of $10^{3}$ in shallow layers, a phenomenon consistent with findings in the Llama model [75]. Notably, Figure 9 illustrates that these top-1 activation values progressively decrease with layer depth, indicating that their influence on the final output becomes relatively small. + 

![](images/744bdafe62527ab3d0b64dfa4b6e24a915598b7b39cced4aa92fc003fa88a964.jpg)

![](images/d5da125dd85b2e8c82176eac66539b38d92b199714d6f6de136b35734368f818.jpg)

![](images/e81b7458f71ddbf3450f01740e5c85949a2351e6341d8edc2b5e85b601b68d53.jpg)

![](images/dede2658653408b30951dd37efe53c0b13200e71b61574d61c0bcf26b98a01a6.jpg)

![](images/5038d458e0828307fbdc502f8a6a1c6d5b0596906f56bf8fb4b3bbfdacab24d8.jpg)
(a) Down projection

![](images/c590e32f81ccd10ab1fa17b21f99304a0d063c9c1b7330cdccb19a9acce91b00.jpg)
(b) Up & Gate projection

![](images/bbbeb145f0934dc8137745378248963f7a90023907c1fb7ea1f630c66fff4a9c.jpg)
Figure 9: Activation of attention and FFN modules. Mean, standard deviation, and top-1 value of activations are included. Each line represents different training tokens from 1T, 2T, 4T to 7T. The "Std" row shows the stable activation scale across layers. The "Top 1" row shows the existence of the "super activations" in down projection and attention output projection, with magnitudes falling within a reasonable range and comparable to those observed in the LLaMA model [75].
+ 

![](images/a475262120a9476b96557c616ae8cbec27176f471f7ccb1961523ba8495df953.jpg)
(c) Attention output projection
(d) Attention QKV projection + 

Layer-wise patterns of depth-scaled sandwich norm. Figure 10 presents the distribution of the scaling parameter $\gamma$ across all sandwich-norm layers, revealing several key observations:

- All four LayerNorm $\gamma$ parameters exhibit decreasing mean and standard deviation during training, consistent with weight decay effects.
- Post-norm $\gamma$ values show layer-dependent patterns: their standard deviation increases substantially with layer depth.
- Pre-norm $\gamma$ maintains a relatively constant standard deviation across layers.

This pattern suggests an intriguing model behavior: shallow layers rely primarily on residual connections, while deeper layers progressively emphasize transformer layer outputs as the scaling factor $\gamma$ grows in magnitude. + 

# 6 Conclusion + 

We present Pangu Ultra, a dense language foundation model with 135 billion parameters trained on Ascend NPUs. To address challenges in training large-scale deep models, we propose depth-scaled sandwich-norm, enabling Pangu Ultra to achieve remarkable training stability without significant loss spikes.
After pre-training on 13.2 trillion tokens and long-context extension on 8,192 Ascend NPUs, our model further + 

![](images/13abe013a5a8aed380f4b8c00ef395e0970882df83898c64d6c7954aba6b2a0a.jpg)

![](images/b1ab1fd9180116b7ea72fe004b14fcb2125623cbd6b91a13aba18aee54ad7139.jpg)

![](images/36d5b1a0863e300a108ae4c864d93d9defdec43b21ecd3aa16ef83845a45f05a.jpg)

![](images/03d2d47939ba7873c709b29c65584a4595694ece196a9c0981e2369b26fccc7c.jpg)

![](images/54f645ba18cba7ab9dfa94079f9f6f5e8ce6ba6d7fa82fe956387c12e75a53e1.jpg)
(a) Post-norm after attention

![](images/fb0af1b40e94a5a4354f81884cac4230ddc3ea2a9f6da59e0106e6e238c74570.jpg)
(b) Post-norm after FFN

![](images/cddcf8612c33bcdf7d3401bcd56bac43ffbfc1e5fc34084f14f871f4382925f2.jpg)
Figure 10: Distribution of sandwich-norm's $\gamma$ parameter. Mean and standard deviation are included. Each line represents different training tokens from 1T, 2T, 4T to 7T. There is a clear layer-wise pattern of the two post-norms: the mean and std value of $\gamma$ increase with depth. A larger post-norm $\gamma$ indicates that deeper layers rely more on transformer outputs than on residual connections.

![](images/a1a9c9d0433ecbc43e62a5cfd8cbc6b7f7f773f05c274f2ecca5b21c1c92acef.jpg)
(c) Post-norm before attention
(d) Post-norm before FFN

enhances its reasoning capabilities through Supervised Fine-Tuning and Reinforcement Learning. Extensive experiments show that Pangu Ultra not only surpasses state-of-the-art dense LLMs like Llama 405B and Mistral Large 2 but also delivers competitive performance against larger sparse models such as DeepSeek-R1. These results highlight the efficacy of our architectural and systemic optimizations, paving the way for future advancements in scalable and efficient LLM training. In addition, our experience demonstrates that the Ascend NPUs are capable of training dense models with hundreds of billions of parameters. + 

# References + 

[1] Artificial Analysis.
https://artificialanalysis.ai/. 
+[2] Ascend mc2. https://gitee.com/qingfenxiaochong/MindSpeed/blob/master/docs/features/mc2.md. 
+[3] Ascend mc2. https://www.hiascend.com/developer/techArticles/20240613-1. 
+[4] Flash attention. https://github.com/Dao-AILab/flash-attention. 
+[5] Huawei atlas 800t a2. https://e.huawei.com/cn/products/computing/ascend/atlas-800t-a2. 
+[6] Huawei atlas 800t a2 technical specifications. https://support.huawei.com/enterprise/en/doc/EDOC1100349804/2bf2c017/technical-specifications?idPath=23710424|251366513|22892968|252309113|254184887. 
+[7] Megatron-lm. https://github.com/NVIDIA/Megatron-LM. 
+[8] Mindspeed. https://gitee.com/ascend/MindSpeed. 
+[9] Npu fusion attention. https://www.hiascend.com/document/detail/zh/Pytorch/60RC1/apiref/apilist/ptaoplist_000139.html. 
+[10] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie J. Cai, Michael Terry, Quoc V. Le, and Charles Sutton. Program synthesis with large language models. ArXiv, abs/2108.07732, 2021. 
+[11] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023. + 

[12] Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. Piqa: Reasoning about physical commonsense in natural language. In AAAI Conference on Artificial Intelligence, 2019. 
+[13] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, Jared Kaplan, Harrison Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mo Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, David W. Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William H.
Guss, Alex Nichol, Igor Babuschkin, Suchir Balaji, Shantanu Jain, Andrew Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew M. Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. ArXiv, abs/2107.03374, 2021. +[14] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways, 2022. +[15] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. ArXiv, abs/1803.05457, 2018. +[16] Karl Cobbe, Vineet Kosaraju, Mo Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. ArXiv, abs/2110.14168, 2021. +[17] Tri Dao. 
Flashattention-2: Faster attention with better parallelism and work partitioning. In The Twelfth International Conference on Learning Representations, 2024. +[18] Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022. +[19] DeepSeek-AI, Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Daya Guo, Dejian Yang, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Haowei Zhang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Li, Hui Qu, J. L. Cai, Jian Liang, Jianzhong Guo, Jiaqi Ni, Jiashi Li, Jiawei Wang, Jin Chen, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, Junxiao Song, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Lei Xu, Leyi Xia, Liang Zhao, Litong Wang, Liyue Zhang, Meng Li, Miaojun Wang, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Mingming Li, Ning Tian, Panpan Huang, Peiyi Wang, Peng Zhang, Qiancheng Wang, Qihao Zhu, Qinyu Chen, Qiushi Du, R. J. Chen, R. L. Jin, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, Runxin Xu, Ruoyu Zhang, Ruyi Chen, S. S. Li, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shaoqing Wu, Shengfeng Ye, Shengfeng Ye, Shirong Ma, Shiyu Wang, Shuang Zhou, Shuiping Yu, Shunfeng Zhou, Shuting Pan, T. Wang, Tao Yun, Tian Pei, Tianyu Sun, W. L. Xiao, Wangding Zeng, Wanjia Zhao, Wei An, Wen Liu, Wenfeng Liang, Wenjun Gao, Wenqin Yu, Wentao Zhang, X. Q. 
Li, Xiangyue Jin, Xianzu Wang, Xiao Bi, Xiaodong Liu, Xiaohan Wang, Xiaojin Shen, Xiaokang Chen, Xiaokang Zhang, Xiaosha Chen, Xiaotao Nie, Xiaowen Sun, Xiaoxiang Wang, Xin Cheng, Xin Liu, Xin Xie, Xingchao Liu, Xingkai Yu, Xinnan Song, Xinxia Shan, Xinyi Zhou, Xinyu Yang, Xinyuan Li, Xuecheng Su, Xuheng Lin, Y. K. Li, Y. Q. Wang, Y. X. Wei, Y. X. Zhu, Yang Zhang, Yanhong Xu, Yanhong Xu, Yanping Huang, Yao Li, Yao Zhao, Yaofeng Sun, Yaohui Li, Yaohui Wang, Yi Yu, Yi Zheng, Yichao Zhang, Yifan Shi, Yiliang Xiong, Ying He, Ying Tang, Yishi Piao, Yisong Wang, Yixuan Tan, Yiyang Ma, Yiyuan Liu, Yongqiang Guo, Yu Wu, Yuan Ou, Yuchen Zhu, Yuduan Wang, Yue Gong, Yuheng + +Zou, Yujia He, Yukun Zha, Yunfan Xiong, Yunxian Ma, Yuting Yan, Yuxiang Luo, Yuxiang You, Yuxuan Liu, Yuyang Zhou, Z. F. Wu, Z. Z. Ren, Zehui Ren, Zhangli Sha, Zhe Fu, Zhean Xu, Zhen Huang, Zhen Zhang, Zhenda Xie, Zhengyan Zhang, Zhewen Hao, Zhibin Gou, Zhicheng Ma, Zhigang Yan, Zhihong Shao, Zhipeng Xu, Zhiyu Wu, Zhongyu Zhang, Zhuoshu Li, Zihui Gu, Zijia Zhu, Zijun Liu, Zilin Li, Ziwei Xie, Ziyang Song, Ziyi Gao, and Zizheng Pan. Deepseek-v3 technical report, 2025. +[20] Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, and Jie Tang. Cogview: Mastering text-to-image generation via transformers. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 19822-19835. Curran Associates, Inc., 2021. +[21] Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In North American Chapter of the Association for Computational Linguistics, 2019. +[22] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. 
arXiv preprint arXiv:2407.21783, 2024. +[23] Huibin Ge, Chenxi Sun, Deyi Xiong, and Qun Liu. Chinese wplc: A chinese dataset for evaluating pretrained language models on word prediction given long-range context. In Conference on Empirical Methods in Natural Language Processing, 2021. +[24] Aryo Pradipta Gema, Joshua Ong Jun Leang, Giwon Hong, Alessio Devoto, Alberto Carlo Maria Mancino, Rohit Saxena, Xuanli He, Yu Zhao, Xiaotang Du, Mohammad Reza Ghasemi Madani, et al. Are we done with mmlu? arXiv preprint arXiv:2406.04127, 2024. +[25] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. +[26] Alex Gu, Baptiste Rozière, Hugh Leather, Armando Solar-Lezama, Gabriel Synnaeve, and Sida Wang. Cruxeval: A benchmark for code reasoning, understanding and execution. ArXiv, abs/2401.03065, 2024. +[27] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Xiaodong Song, and Jacob Steinhardt. Measuring massive multitask language understanding. ArXiv, abs/2009.03300, 2020. +[28] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Xiaodong Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. ArXiv, abs/2103.03874, 2021. +[29] Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V Le, Yonghui Wu, et al. Gpipe: Efficient training of giant neural networks using pipeline parallelism. Advances in neural information processing systems, 32, 2019. +[30] Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Xu Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, Yonghui Wu, and Zhifeng Chen. Gpipe: Efficient training of giant neural networks using pipeline parallelism. 
In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 103-112, 2019. +[31] Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Fanchao Qi, Yao Fu, Maosong Sun, and Junxian He. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. ArXiv, abs/2305.08322, 2023. +[32] Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Keming Lu, Kai Dang, Yang Fan, Yichang Zhang, An Yang, Rui Men, Fei Huang, Bo Zheng, Yibo Miao, Shanghaoran Quan, Yunlong Feng, Xingzhang Ren, Xuancheng Ren, Jingren Zhou, and Junyang Lin. Qwen2.5-coder technical report, 2024. +[33] Sam Ade Jacobs, Masahiro Tanaka, Chengming Zhang, Minjia Zhang, Shuaiwen Leon Song, Samyam Rajbhandari, and Yuxiong He. Deepspeed ulysses: System optimizations for enabling training of extreme long sequence transformer models, 2023. +[34] Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Armando Solar-Lezama, Koushik Sen, and Ion Stoica. Livecodebench: Holistic and contamination free evaluation of large language models for code. arXiv preprint arXiv:2403.07974, 2024. + +[35] Ziheng Jiang, Haibin Lin, Yinmin Zhong, Qi Huang, Yangrui Chen, Zhi Zhang, Yanghua Peng, Xiang Li, Cong Xie, Shibiao Nong, Yulu Jia, Sun He, Hongmin Chen, Zhihao Bai, Qi Hou, Shipeng Yan, Ding Zhou, Yiyao Sheng, Zhuo Jiang, Haohan Xu, Haoran Wei, Zhang Zhang, Pengfei Nie, Leqi Zou, Sida Zhao, Liang Xiang, Zherui Liu, Zhe Li, Xiaoying Jia, Jianxi Ye, Xin Jin, and Xin Liu. Megascale: Scaling large language model training to more than 10,000 gpus, 2024. +[36] Cameron R Jones and Benjamin K Bergen. Large language models pass the Turing test. arXiv preprint arXiv:2503.23674, 2025. +[37] Mandar Joshi, Eunsol Choi, Daniel S. 
Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. ArXiv, abs/1705.03551, 2017. +[38] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020. +[39] Vijay Korthikanti, Jared Casper, Sangkug Lym, Lawrence McAfee, Michael Andersch, Mohammad Shoeybi, and Bryan Catanzaro. Reducing activation recomputation in large transformer models, 2022. +[40] Taku Kudo and John Richardson. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Eduardo Blanco and Wei Lu, editors, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium, November 2018. Association for Computational Linguistics. +[41] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur P. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc V. Le, and Slav Petrov. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466, 2019. +[42] Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard H. Hovy. Race: Large-scale reading comprehension dataset from examinations. ArXiv, abs/1704.04683, 2017. +[43] Shen Li, Yanli Zhao, Rohan Varma, Omkar Salpekar, Pieter Noordhuis, Teng Li, Adam Paszke, Jeff Smith, Brian Vaughan, Pritam Damania, and Soumith Chintala. Pytorch distributed: Experiences on accelerating data parallel training. CoRR, abs/2006.15704, 2020. +[44] Shenggui Li, Fuzhao Xue, Chaitanya Baranwal, Yongbin Li, and Yang You. Sequence parallelism: Long sequence training from system perspective. 
In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2391-2404, Toronto, Canada, July 2023. Association for Computational Linguistics. +[45] Tianle Li, Wei-Lin Chiang, Evan Frick, Lisa Dunlap, Banghua Zhu, Joseph E. Gonzalez, and Ion Stoica. From live data to high-quality benchmarks: The arena-hard pipeline, April 2024. +[46] Aixin Liu, Bei Feng, Bin Wang, Bingxuan Wang, Bo Liu, Chenggang Zhao, Chengqi Deng, Chong Ruan, Damai Dai, Daya Guo, et al. Deepseek-v2: A strong, economical, and efficient mixture-of-experts language model. arXiv preprint arXiv:2405.04434, 2024. +[47] Liyuan Liu, Xiaodong Liu, Jianfeng Gao, Weizhu Chen, and Jiawei Han. Understanding the difficulty of training transformers. In EMNLP (1), pages 5747-5763. Association for Computational Linguistics, 2020. +[48] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. +[49] MAA. American Invitational Mathematics Examination - AIME 2024, 2024. https://maa.org/math-competitions/american-invitational-mathematics-examination-aime. +[50] William Merrill and Ashish Sabharwal. A little depth goes a long way: The expressive power of log-depth transformers, 2025. +[51] Deepak Narayanan, Aaron Harlap, Amar Phanishayee, Vivek Seshadri, Nikhil R. Devanur, Gregory R. Ganger, Phillip B. Gibbons, and Matei Zaharia. Pipedream: generalized pipeline parallelism for DNN training. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, SOSP 2019, Huntsville, ON, Canada, October 27-30, 2019, pages 1-15. ACM, 2019. +[52] Deepak Narayanan, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Korthikanti, Dmitri Vainbrand, Prethvi Kashinkunti, Julie Bernauer, Bryan Catanzaro, Amar Phanishayee, and Matei Zaharia. Efficient large-scale language model training on gpu clusters using megatron-lm. 
In + +Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC '21, New York, NY, USA, 2021. Association for Computing Machinery. +[53] Toan Q Nguyen and Julian Salazar. Transformers without tears: Improving the normalization of self-attention. arXiv preprint arXiv:1910.05895, 2019. +[54] Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and R. Fernández. The lambada dataset: Word prediction requiring a broad discourse context. ArXiv, abs/1606.06031, 2016. +[55] Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. Yarn: Efficient context window extension of large language models, 2023. +[56] Penghui Qi, Xinyi Wan, Guangxing Huang, and Min Lin. Zero bubble pipeline parallelism, 2023. +[57] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019. +[58] Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: memory optimizations toward training trillion parameter models. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2020, Virtual Event / Atlanta, Georgia, USA, November 9-19, 2020, page 20. IEEE/ACM, 2020. +[59] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark. In First Conference on Language Modeling, 2024. +[60] Noam Shazeer. Glu variants improve transformer. arXiv preprint arXiv:2002.05202, 2020. +[61] Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, and Jason Wei. Language models are multilingual chain-of-thought reasoners. ArXiv, abs/2210.03057, 2022. 
+[62] Yusuxke Shibata, Takuya Kida, Shuichi Fukamachi, Masayuki Takeda, Ayumi Shinohara, Takeshi Shinohara, and Setsuo Arikawa. Byte pair encoding: A text compression scheme that accelerates pattern matching. 1999. +[63] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-lm: Training multi-billion parameter language models using model parallelism, 2020. +[64] Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding, 2023. +[65] Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, and Jason Wei. Challenging big-bench tasks and whether chain-of-thought can solve them. In Annual Meeting of the Association for Computational Linguistics, 2022. +[66] Sho Takase, Shun Kiyono, Sosuke Kobayashi, and Jun Suzuki. Spike no more: Stabilizing the pre-training of large language models, 2024. +[67] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on gemini research and technology. arXiv preprint arXiv:2403.08295, 2024. +[68] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. +[69] Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao. Learning deep transformer models for machine translation. In Anna Korhonen, David Traum, and Lluís Màrquez, editors, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1810-1822, Florence, Italy, July 2019. Association for Computational Linguistics. 
+[70] Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, Tianle Li, Max W.F. Ku, Kai Wang, Alex Zhuang, Rongqi "Richard" Fan, Xiang Yue, and Wenhu Chen. Mmlu-pro: A more robust and challenging multi-task language understanding benchmark. ArXiv, abs/2406.01574, 2024. +[71] Tianwen Wei, Jian Luan, W. Liu, Shuang Dong, and Bin Quan Wang. Cmath: Can your language model pass chinese elementary school math test? ArXiv, abs/2306.16636, 2023. + +[72] An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al. Qwen2.5-math technical report: Toward mathematical expert model via self-improvement. arXiv preprint arXiv:2409.12122, 2024. +[73] Tian Ye, Zicheng Xu, Yuanzhi Li, and Zeyuan Allen-Zhu. Physics of language models: Part 2.1, grade-school math and the hidden reasoning process, 2024. +[74] Mingjia Yin, Chuhan Wu, Yufei Wang, Hao Wang, Wei Guo, Yasheng Wang, Yong Liu, Ruiming Tang, Defu Lian, and Enhong Chen. Entropy law: The story behind data compression and llm performance. arXiv preprint arXiv:2407.06645, 2024. +[75] Mengxia Yu, De Wang, Qi Shan, Colorado Reed, and Alvin Wan. The super weight in large language models. ArXiv, abs/2411.07191, 2024. +[76] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? In Annual Meeting of the Association for Computational Linguistics, 2019. +[77] Biao Zhang and Rico Sennrich. Root mean square layer normalization, 2019. 
+ +# A Contributions and Acknowledgments + +Core Contributors Yichun Yin, Wenyong Huang, Kaikai Song, Yehui Tang, Xueyu Wu, Wei Guo, Peng Guo, Yaoyuan Wang, Xiaojun Meng, Yasheng Wang, Dong Li, Can Chen, Dandan Tu, Yin Li, Fisher Yu, Ruiming Tang, Yunhe Wang + +Contributors Baojun Wang, Bin Wang, Bo Wang, Boxiao Liu, Changzheng Zhang, Duyu Tang, Fei Mi, Hui Jin, Jiansheng Wei, Jiarui Qin, Jinpeng Li, Jun Zhao, Liqun Deng, Lin Li, Minghui Xu, Naifu Zhang, Nianzu Zheng, Qiang Li, Rongju Ruan, Shengjun Cheng, Tianyu Guo, Wei He, Wei Li, Weiwen Liu, Wulong Liu, Xinyi Dai, Yonghan Dong, Yu Pan, Yue Li, Yufei Wang, Yujun Li, Yunsheng Ni, Zhe Liu, Zhenhe Zhang, Zhicheng Liu \ No newline at end of file diff --git a/data/2025/2504_07xxx/2504.07866/images/03d2d47939ba7873c709b29c65584a4595694ece196a9c0981e2369b26fccc7c.jpg b/data/2025/2504_07xxx/2504.07866/images/03d2d47939ba7873c709b29c65584a4595694ece196a9c0981e2369b26fccc7c.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b3bcd2cda9d04351a51dec59242606d800595d45 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/03d2d47939ba7873c709b29c65584a4595694ece196a9c0981e2369b26fccc7c.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53627278474b30c8e1d8ed88a8eb4e1b50ef78af0003d74b3579bd86f860efdf +size 11088 diff --git a/data/2025/2504_07xxx/2504.07866/images/13abe013a5a8aed380f4b8c00ef395e0970882df83898c64d6c7954aba6b2a0a.jpg b/data/2025/2504_07xxx/2504.07866/images/13abe013a5a8aed380f4b8c00ef395e0970882df83898c64d6c7954aba6b2a0a.jpg new file mode 100644 index 0000000000000000000000000000000000000000..5717f6a9dc8b9e18da153c1584b245c3d644fbb0 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/13abe013a5a8aed380f4b8c00ef395e0970882df83898c64d6c7954aba6b2a0a.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d10cebf40d89f3d20c25b3d6d627a31724986eaa1186e3d66d698e3f11968624 +size 12353 diff --git 
a/data/2025/2504_07xxx/2504.07866/images/16eacb740d6bf0b2784477cb2487f0ab1063e6a5012a50fc66bb0e95be1356a0.jpg b/data/2025/2504_07xxx/2504.07866/images/16eacb740d6bf0b2784477cb2487f0ab1063e6a5012a50fc66bb0e95be1356a0.jpg new file mode 100644 index 0000000000000000000000000000000000000000..03c3ed657c7c57b38bc48c6c1823eb422e6044c6 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/16eacb740d6bf0b2784477cb2487f0ab1063e6a5012a50fc66bb0e95be1356a0.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb8a7eec1da5c459b356d7e05bbe9975a10feea30b0c2debc6312658c0c2c95d +size 14122 diff --git a/data/2025/2504_07xxx/2504.07866/images/1e0482fca83e3a0c110ebe1d09086c29da16050b788765f76847abe61a9728e5.jpg b/data/2025/2504_07xxx/2504.07866/images/1e0482fca83e3a0c110ebe1d09086c29da16050b788765f76847abe61a9728e5.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a6abbe6c9e95d1e76f36d4595b99db0aa32c9b40 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/1e0482fca83e3a0c110ebe1d09086c29da16050b788765f76847abe61a9728e5.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f3069cc5d16aae5772a7a5a87cbeef8f9730f25b9efe8dcb6cc8199e797a5ca1 +size 21398 diff --git a/data/2025/2504_07xxx/2504.07866/images/36d5b1a0863e300a108ae4c864d93d9defdec43b21ecd3aa16ef83845a45f05a.jpg b/data/2025/2504_07xxx/2504.07866/images/36d5b1a0863e300a108ae4c864d93d9defdec43b21ecd3aa16ef83845a45f05a.jpg new file mode 100644 index 0000000000000000000000000000000000000000..d1bf159b3ebbbae87f4f5c5b7e30c9d5bba9dfb2 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/36d5b1a0863e300a108ae4c864d93d9defdec43b21ecd3aa16ef83845a45f05a.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c56b63c5e148fea0ae66e07e27734e75af087a6d7137136dde928dd30b607389 +size 11560 diff --git a/data/2025/2504_07xxx/2504.07866/images/381a61c48d4873ff6cc62083bf48d8b583cd52cc7d46518e58419af2a4afcd0a.jpg 
b/data/2025/2504_07xxx/2504.07866/images/381a61c48d4873ff6cc62083bf48d8b583cd52cc7d46518e58419af2a4afcd0a.jpg new file mode 100644 index 0000000000000000000000000000000000000000..8823819fe9dd1a6007c4369c1e3379139a9ba699 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/381a61c48d4873ff6cc62083bf48d8b583cd52cc7d46518e58419af2a4afcd0a.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:48e4c83100884e114c5fb902b532a53ffc5da8a5b2df5d1d02e2f1e25f4e5df0 +size 33755 diff --git a/data/2025/2504_07xxx/2504.07866/images/3b34ebb39e3da6d7ff8bbd2f5b9f48782f7c2d993f79bfac7786efcf3d058b73.jpg b/data/2025/2504_07xxx/2504.07866/images/3b34ebb39e3da6d7ff8bbd2f5b9f48782f7c2d993f79bfac7786efcf3d058b73.jpg new file mode 100644 index 0000000000000000000000000000000000000000..cc3a0cfd74f86110a1410fc4fc23c6927e146805 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/3b34ebb39e3da6d7ff8bbd2f5b9f48782f7c2d993f79bfac7786efcf3d058b73.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5625e0da14056fea82b0f5ec421cc5110ae8f3b7c886bd45baf0df950836b2c2 +size 81031 diff --git a/data/2025/2504_07xxx/2504.07866/images/44eff9e81d69a0bbd3268fc86564bba8580753fe6519428b8e8df3699bcc491e.jpg b/data/2025/2504_07xxx/2504.07866/images/44eff9e81d69a0bbd3268fc86564bba8580753fe6519428b8e8df3699bcc491e.jpg new file mode 100644 index 0000000000000000000000000000000000000000..3f10c5f0aeb9f4af8aa39eb9d2cdac170a3c2b7f --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/44eff9e81d69a0bbd3268fc86564bba8580753fe6519428b8e8df3699bcc491e.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b040416f777dcdaccd2943213a92cecfc6b5449e84849c69a9fe44dc5f76d45d +size 15553 diff --git a/data/2025/2504_07xxx/2504.07866/images/4cef587854556619f00415671ee67e04874b43df92f0b322a5c7d7d9f318d9e9.jpg b/data/2025/2504_07xxx/2504.07866/images/4cef587854556619f00415671ee67e04874b43df92f0b322a5c7d7d9f318d9e9.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..30220669848349198613d395bef1037dc9eb02ea --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/4cef587854556619f00415671ee67e04874b43df92f0b322a5c7d7d9f318d9e9.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:88657d1003e95df6629ec22c6203df328d984794ffd9d29a1832326bd83a2095 +size 49047 diff --git a/data/2025/2504_07xxx/2504.07866/images/4eb945e428bda4bc2842653b548c0700ebd7824c868c06942ff9fe8b6fda3cf9.jpg b/data/2025/2504_07xxx/2504.07866/images/4eb945e428bda4bc2842653b548c0700ebd7824c868c06942ff9fe8b6fda3cf9.jpg new file mode 100644 index 0000000000000000000000000000000000000000..641f4d865e11eaaaadfc854276479cf7e65631e5 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/4eb945e428bda4bc2842653b548c0700ebd7824c868c06942ff9fe8b6fda3cf9.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:67138941f9d3c4e3741dea7e63004fac9a48096e5a03787f62201ea52b23268c +size 10169 diff --git a/data/2025/2504_07xxx/2504.07866/images/5038d458e0828307fbdc502f8a6a1c6d5b0596906f56bf8fb4b3bbfdacab24d8.jpg b/data/2025/2504_07xxx/2504.07866/images/5038d458e0828307fbdc502f8a6a1c6d5b0596906f56bf8fb4b3bbfdacab24d8.jpg new file mode 100644 index 0000000000000000000000000000000000000000..d1445b54b433d16f7cc984a6f16f2806c7be492e --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/5038d458e0828307fbdc502f8a6a1c6d5b0596906f56bf8fb4b3bbfdacab24d8.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:57862afe2e87190a2160eedfaad20e359011edd3a8ce0616063c93e0d315a19d +size 14369 diff --git a/data/2025/2504_07xxx/2504.07866/images/54f645ba18cba7ab9dfa94079f9f6f5e8ce6ba6d7fa82fe956387c12e75a53e1.jpg b/data/2025/2504_07xxx/2504.07866/images/54f645ba18cba7ab9dfa94079f9f6f5e8ce6ba6d7fa82fe956387c12e75a53e1.jpg new file mode 100644 index 0000000000000000000000000000000000000000..3a74e723ba607c55ebfcb39aff1038adec286bec --- /dev/null +++ 
b/data/2025/2504_07xxx/2504.07866/images/54f645ba18cba7ab9dfa94079f9f6f5e8ce6ba6d7fa82fe956387c12e75a53e1.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:be43ddb413411822cfb86051ba74e42837e12a82b632a2efaf171f5e00ade43d +size 14124 diff --git a/data/2025/2504_07xxx/2504.07866/images/67cea2b063522f19f7edd1b30826bee1c89190c8a74a7264b4d7d481914b5d6b.jpg b/data/2025/2504_07xxx/2504.07866/images/67cea2b063522f19f7edd1b30826bee1c89190c8a74a7264b4d7d481914b5d6b.jpg new file mode 100644 index 0000000000000000000000000000000000000000..7dee5f80469bb3b34d63ae3ee099e877d2a20ca4 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/67cea2b063522f19f7edd1b30826bee1c89190c8a74a7264b4d7d481914b5d6b.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7cca11dc74a9344af35f607a84c0e2a2abb6efc9938a622b2f4309a14520cbe9 +size 27851 diff --git a/data/2025/2504_07xxx/2504.07866/images/688e6f49bbae37cf3b66fea8df45d115891068481814963c9c72a37797b11531.jpg b/data/2025/2504_07xxx/2504.07866/images/688e6f49bbae37cf3b66fea8df45d115891068481814963c9c72a37797b11531.jpg new file mode 100644 index 0000000000000000000000000000000000000000..e1e866969f4f33aeec72b43c8cecf21ba05c6237 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/688e6f49bbae37cf3b66fea8df45d115891068481814963c9c72a37797b11531.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:31591bc54a48b750f8ebc6a39510c536f504f694f862bd7a331835ee094b1d15 +size 172472 diff --git a/data/2025/2504_07xxx/2504.07866/images/6a81bc754358c61ca73a33d2cb633c9cb433eeb8b601d84ad3ea2a99f5e50d84.jpg b/data/2025/2504_07xxx/2504.07866/images/6a81bc754358c61ca73a33d2cb633c9cb433eeb8b601d84ad3ea2a99f5e50d84.jpg new file mode 100644 index 0000000000000000000000000000000000000000..34e8c4557379a02161e394de2d35238f0e413e84 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/6a81bc754358c61ca73a33d2cb633c9cb433eeb8b601d84ad3ea2a99f5e50d84.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:a0ae1228f594588c84158cce5018df5f1a583b8ace4361d3e592d6ab9b8d3045 +size 30816 diff --git a/data/2025/2504_07xxx/2504.07866/images/744bdafe62527ab3d0b64dfa4b6e24a915598b7b39cced4aa92fc003fa88a964.jpg b/data/2025/2504_07xxx/2504.07866/images/744bdafe62527ab3d0b64dfa4b6e24a915598b7b39cced4aa92fc003fa88a964.jpg new file mode 100644 index 0000000000000000000000000000000000000000..414f96ee90574f574158b9db8cefe4f23c9f0b00 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/744bdafe62527ab3d0b64dfa4b6e24a915598b7b39cced4aa92fc003fa88a964.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:50b74f2848cb2e88d78522065b3e9f88ab7f300bc57b2f2fa9b0ec73d93b590f +size 13588 diff --git a/data/2025/2504_07xxx/2504.07866/images/7919ae6ab8d9a21337ecd5d2e2e396908f906d778d62011a971c810fa5816360.jpg b/data/2025/2504_07xxx/2504.07866/images/7919ae6ab8d9a21337ecd5d2e2e396908f906d778d62011a971c810fa5816360.jpg new file mode 100644 index 0000000000000000000000000000000000000000..38de7d03cc6619532bf714acfd5ab9e7d881fdac --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/7919ae6ab8d9a21337ecd5d2e2e396908f906d778d62011a971c810fa5816360.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19a49492e6d69d99420ffdcfbc572baddf704c40ec3424767242b80ed029a898 +size 56362 diff --git a/data/2025/2504_07xxx/2504.07866/images/80f81f5cf4a096dc4a1fdd435b54fa30a25cab4606e09472fa9143185fab1a03.jpg b/data/2025/2504_07xxx/2504.07866/images/80f81f5cf4a096dc4a1fdd435b54fa30a25cab4606e09472fa9143185fab1a03.jpg new file mode 100644 index 0000000000000000000000000000000000000000..5d98a3a1d1a9cc130382b8f7c671af6561b15fd4 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/80f81f5cf4a096dc4a1fdd435b54fa30a25cab4606e09472fa9143185fab1a03.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5a59b7ffa9752850e360745d0da9031999d0729d9c848a501fd66e08bf8bcd2d +size 6802 diff --git 
a/data/2025/2504_07xxx/2504.07866/images/8840be29e16484031a9e2441ba3bff2385b5b84697466d6f32a2ba9ddc05825e.jpg b/data/2025/2504_07xxx/2504.07866/images/8840be29e16484031a9e2441ba3bff2385b5b84697466d6f32a2ba9ddc05825e.jpg new file mode 100644 index 0000000000000000000000000000000000000000..7b47f31dc5b4a0389a90b64f66eb44cc7fea361a --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/8840be29e16484031a9e2441ba3bff2385b5b84697466d6f32a2ba9ddc05825e.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a92c5fd8f94ca7e363624d5e2ad33f83ae1242787538a176092b53d119bad30c +size 14953 diff --git a/data/2025/2504_07xxx/2504.07866/images/8c2b76136987d8146ba87fcdd40ec48bbd7f765998e79609c7c6138eeb85aad7.jpg b/data/2025/2504_07xxx/2504.07866/images/8c2b76136987d8146ba87fcdd40ec48bbd7f765998e79609c7c6138eeb85aad7.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a9d7b923da8974b6073d268bb5316035884a490c --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/8c2b76136987d8146ba87fcdd40ec48bbd7f765998e79609c7c6138eeb85aad7.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4fe1ce203dabbc1be47a3360052da9c01b9a4b03d245deab43e31d16f0b52b63 +size 20033 diff --git a/data/2025/2504_07xxx/2504.07866/images/9e86702bb026850de11bf3b69527034295140cde348309c4f15a0f509b0108b0.jpg b/data/2025/2504_07xxx/2504.07866/images/9e86702bb026850de11bf3b69527034295140cde348309c4f15a0f509b0108b0.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a841f2bcb71b7e2dc4f6118ddc2f6471100ad154 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/9e86702bb026850de11bf3b69527034295140cde348309c4f15a0f509b0108b0.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e9d6d69cabed2794c783ecb0652081f7e24e575105abcfb59d75bd93b5832a8c +size 34484 diff --git a/data/2025/2504_07xxx/2504.07866/images/a1a9c9d0433ecbc43e62a5cfd8cbc6b7f7f773f05c274f2ecca5b21c1c92acef.jpg 
b/data/2025/2504_07xxx/2504.07866/images/a1a9c9d0433ecbc43e62a5cfd8cbc6b7f7f773f05c274f2ecca5b21c1c92acef.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b8e160a3760100b1adc258bbaf30f8f850f2d6b7 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/a1a9c9d0433ecbc43e62a5cfd8cbc6b7f7f773f05c274f2ecca5b21c1c92acef.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:78943c7662d05fdb64d850f5257a2aa21764a4797181f01165b2a5db2ffe1e85 +size 15040 diff --git a/data/2025/2504_07xxx/2504.07866/images/a475262120a9476b96557c616ae8cbec27176f471f7ccb1961523ba8495df953.jpg b/data/2025/2504_07xxx/2504.07866/images/a475262120a9476b96557c616ae8cbec27176f471f7ccb1961523ba8495df953.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4f18a2c13830bb017d5cb51465b6f181033b8fe9 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/a475262120a9476b96557c616ae8cbec27176f471f7ccb1961523ba8495df953.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3fd3cf2d830613c05a30b5078aef0087bf7502ecd885994fb205df1032b7ef91 +size 13751 diff --git a/data/2025/2504_07xxx/2504.07866/images/a7350ff1b1e11bf1265c0e1d4a8e0cc37fad10ba9fa99b46a41d65427aa9f37d.jpg b/data/2025/2504_07xxx/2504.07866/images/a7350ff1b1e11bf1265c0e1d4a8e0cc37fad10ba9fa99b46a41d65427aa9f37d.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b34101eed71cca17fc7295be2e9becacafc462cc --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/a7350ff1b1e11bf1265c0e1d4a8e0cc37fad10ba9fa99b46a41d65427aa9f37d.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7fb52a0cc258ed7f5b36a9e5b12485ba9fac03975f901782060091a2ca9f5082 +size 25022 diff --git a/data/2025/2504_07xxx/2504.07866/images/b1ab1fd9180116b7ea72fe004b14fcb2125623cbd6b91a13aba18aee54ad7139.jpg b/data/2025/2504_07xxx/2504.07866/images/b1ab1fd9180116b7ea72fe004b14fcb2125623cbd6b91a13aba18aee54ad7139.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..5f13c1e370f34a6c4c3debba4ca91ef8cdb7ab8c --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/b1ab1fd9180116b7ea72fe004b14fcb2125623cbd6b91a13aba18aee54ad7139.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:543a3219fc3b5a98d7117bf87657728863f3e82da1bc9f4cdbf0781c67e2d9ed +size 11569 diff --git a/data/2025/2504_07xxx/2504.07866/images/bbbeb145f0934dc8137745378248963f7a90023907c1fb7ea1f630c66fff4a9c.jpg b/data/2025/2504_07xxx/2504.07866/images/bbbeb145f0934dc8137745378248963f7a90023907c1fb7ea1f630c66fff4a9c.jpg new file mode 100644 index 0000000000000000000000000000000000000000..66c89ea3bbb099496856b1ad1601e1867806b5c9 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/bbbeb145f0934dc8137745378248963f7a90023907c1fb7ea1f630c66fff4a9c.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:14628cbe2cc339bd0aa1a792bec105f409f7f04aee576f5d58234c3c3bb19e39 +size 14324 diff --git a/data/2025/2504_07xxx/2504.07866/images/bdbab18d054876503d9911625ccd93b413947f5c0cf2e0dc1198f3ecd08db00e.jpg b/data/2025/2504_07xxx/2504.07866/images/bdbab18d054876503d9911625ccd93b413947f5c0cf2e0dc1198f3ecd08db00e.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f447759906c4bc74359b8e709b98d814fb720838 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/bdbab18d054876503d9911625ccd93b413947f5c0cf2e0dc1198f3ecd08db00e.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:285eb7ac6a47c7d9805e1c1f614dfd8d60db4b536f7bce36da1c31a1fcaefff1 +size 23855 diff --git a/data/2025/2504_07xxx/2504.07866/images/c590e32f81ccd10ab1fa17b21f99304a0d063c9c1b7330cdccb19a9acce91b00.jpg b/data/2025/2504_07xxx/2504.07866/images/c590e32f81ccd10ab1fa17b21f99304a0d063c9c1b7330cdccb19a9acce91b00.jpg new file mode 100644 index 0000000000000000000000000000000000000000..3640d635103d51f8b27523c55358b60b5fc154dc --- /dev/null +++ 
b/data/2025/2504_07xxx/2504.07866/images/c590e32f81ccd10ab1fa17b21f99304a0d063c9c1b7330cdccb19a9acce91b00.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1c8b01a20657151e09231bd3c1e56c14ae30c3af574c04f035788365b87d4499 +size 13163 diff --git a/data/2025/2504_07xxx/2504.07866/images/cddcf8612c33bcdf7d3401bcd56bac43ffbfc1e5fc34084f14f871f4382925f2.jpg b/data/2025/2504_07xxx/2504.07866/images/cddcf8612c33bcdf7d3401bcd56bac43ffbfc1e5fc34084f14f871f4382925f2.jpg new file mode 100644 index 0000000000000000000000000000000000000000..06af99bcb38c6c5643dbdd9216f0760d9796475a --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/cddcf8612c33bcdf7d3401bcd56bac43ffbfc1e5fc34084f14f871f4382925f2.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:34010b9dc424aa616501824402fe96420de1c28c7b655ba0eaca8d18a02b77db +size 13083 diff --git a/data/2025/2504_07xxx/2504.07866/images/d5da125dd85b2e8c82176eac66539b38d92b199714d6f6de136b35734368f818.jpg b/data/2025/2504_07xxx/2504.07866/images/d5da125dd85b2e8c82176eac66539b38d92b199714d6f6de136b35734368f818.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b02bce78206db2104c417cd8be45ebb03c4a79d7 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/d5da125dd85b2e8c82176eac66539b38d92b199714d6f6de136b35734368f818.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c2ee8d5a56532a20872a244cddf035518c171a4655d64cd153d62053c449851e +size 12406 diff --git a/data/2025/2504_07xxx/2504.07866/images/ddeb66c0c53a04620a9024e649a768af364f969ed8370e1e8f205f5824420062.jpg b/data/2025/2504_07xxx/2504.07866/images/ddeb66c0c53a04620a9024e649a768af364f969ed8370e1e8f205f5824420062.jpg new file mode 100644 index 0000000000000000000000000000000000000000..7908553b1d5b3e5e619bcd935630d7a6882630f4 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/ddeb66c0c53a04620a9024e649a768af364f969ed8370e1e8f205f5824420062.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:dfca0e345f3504928e2d655bf89784274ff42b90cde6f35b54209fc3a14f5336 +size 7401 diff --git a/data/2025/2504_07xxx/2504.07866/images/dede2658653408b30951dd37efe53c0b13200e71b61574d61c0bcf26b98a01a6.jpg b/data/2025/2504_07xxx/2504.07866/images/dede2658653408b30951dd37efe53c0b13200e71b61574d61c0bcf26b98a01a6.jpg new file mode 100644 index 0000000000000000000000000000000000000000..92183829896cde19dcf3ff945451db162dedc9d6 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/dede2658653408b30951dd37efe53c0b13200e71b61574d61c0bcf26b98a01a6.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b93ed379b514fda5e28233d1c1426cb1d8e905e74f0802911173792c5899112 +size 10722 diff --git a/data/2025/2504_07xxx/2504.07866/images/e81b7458f71ddbf3450f01740e5c85949a2351e6341d8edc2b5e85b601b68d53.jpg b/data/2025/2504_07xxx/2504.07866/images/e81b7458f71ddbf3450f01740e5c85949a2351e6341d8edc2b5e85b601b68d53.jpg new file mode 100644 index 0000000000000000000000000000000000000000..ca04993ebc48e2aca171691ae219835e880ed490 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/e81b7458f71ddbf3450f01740e5c85949a2351e6341d8edc2b5e85b601b68d53.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c9fda0abfc56fd4aac176d738013d7a68b993329db2fa5211d97ca18c250ba31 +size 13660 diff --git a/data/2025/2504_07xxx/2504.07866/images/eb74fadcfd096c8aaeae1875a83d62eb78cc6b37c897972dd151607c90ddf109.jpg b/data/2025/2504_07xxx/2504.07866/images/eb74fadcfd096c8aaeae1875a83d62eb78cc6b37c897972dd151607c90ddf109.jpg new file mode 100644 index 0000000000000000000000000000000000000000..90c9fcc1dc9f851f4c4fc20d425822dbd6d817ed --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/eb74fadcfd096c8aaeae1875a83d62eb78cc6b37c897972dd151607c90ddf109.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:faf3388726b927fb988924dd24caaf47c3260a2e291f696e1dad02dd046f6c00 +size 26852 diff --git 
a/data/2025/2504_07xxx/2504.07866/images/f452483736d20db84cf41920e9224133306f09b056f6508e7ab535b3be175ddb.jpg b/data/2025/2504_07xxx/2504.07866/images/f452483736d20db84cf41920e9224133306f09b056f6508e7ab535b3be175ddb.jpg new file mode 100644 index 0000000000000000000000000000000000000000..c7125670ca879d0c9f3e1df626bfb9d9be4a84ae --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/f452483736d20db84cf41920e9224133306f09b056f6508e7ab535b3be175ddb.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:99e318fe513e6aed4dd5d59db51b8e83f110aa15f7b4ec2f67ebdb38e68854a7 +size 31987 diff --git a/data/2025/2504_07xxx/2504.07866/images/fb0af1b40e94a5a4354f81884cac4230ddc3ea2a9f6da59e0106e6e238c74570.jpg b/data/2025/2504_07xxx/2504.07866/images/fb0af1b40e94a5a4354f81884cac4230ddc3ea2a9f6da59e0106e6e238c74570.jpg new file mode 100644 index 0000000000000000000000000000000000000000..79de09ed377ad5185a9cf569a22ba4a8a5bdf434 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/fb0af1b40e94a5a4354f81884cac4230ddc3ea2a9f6da59e0106e6e238c74570.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f7c81b2633b37550fdaca3c95696cacb97198212393813ba15e410342493d051 +size 14450 diff --git a/data/2025/2504_07xxx/2504.07866/images/ffe59b514cbf23bd67320295314345dd201247ecf63ba855135b76d9846748f8.jpg b/data/2025/2504_07xxx/2504.07866/images/ffe59b514cbf23bd67320295314345dd201247ecf63ba855135b76d9846748f8.jpg new file mode 100644 index 0000000000000000000000000000000000000000..af8869cbe19df9a104c81e22922cdf532583a637 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/images/ffe59b514cbf23bd67320295314345dd201247ecf63ba855135b76d9846748f8.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cccdaab88b16a2a771e5117df730201c889aaf24a0ad1447c04294851024c491 +size 14853 diff --git a/data/2025/2504_07xxx/2504.07866/layout.json b/data/2025/2504_07xxx/2504.07866/layout.json new file mode 100644 index 
0000000000000000000000000000000000000000..620c8d1910533ec1e986738428a01a1311d5f4e2 --- /dev/null +++ b/data/2025/2504_07xxx/2504.07866/layout.json @@ -0,0 +1,11892 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 98, + 81, + 512, + 119 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 98, + 81, + 512, + 119 + ], + "spans": [ + { + "bbox": [ + 98, + 81, + 512, + 119 + ], + "type": "text", + "content": "PANGU ULTRA: PUSHING THE LIMITS OF DENSE LARGE LANGUAGE MODELS ON ASCEND NPUS" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 261, + 142, + 350, + 153 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 261, + 142, + 350, + 153 + ], + "spans": [ + { + "bbox": [ + 261, + 142, + 350, + 153 + ], + "type": "text", + "content": "Pangu Team, Huawei" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 252, + 163, + 357, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 163, + 357, + 175 + ], + "spans": [ + { + "bbox": [ + 252, + 163, + 357, + 175 + ], + "type": "text", + "content": "PanguTech@huawei.com" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 275, + 209, + 335, + 220 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 275, + 209, + 335, + 220 + ], + "spans": [ + { + "bbox": [ + 275, + 209, + 335, + 220 + ], + "type": "text", + "content": "ABSTRACT" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 122, + 232, + 487, + 397 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 122, + 232, + 487, + 397 + ], + "spans": [ + { + "bbox": [ + 122, + 232, + 487, + 397 + ], + "type": "text", + "content": "We present Pangu Ultra, a Large Language Model (LLM) with 135 billion parameters and dense Transformer modules trained on Ascend Neural Processing Units (NPUs). 
Although the field of LLMs has been witnessing unprecedented advances in pushing the scale and capability of LLMs in recent years, training such a large-scale model still involves significant optimization and system challenges. To stabilize the training process, we propose depth-scaled sandwich normalization, which effectively eliminates loss spikes during the training process of deep models. We pre-train our model on 13.2 trillion diverse and high-quality tokens and further enhance its reasoning capabilities during post-training. To perform such large-scale training efficiently, we utilize 8,192 Ascend NPUs with a series of system optimizations. Evaluations on multiple diverse benchmarks indicate that Pangu Ultra significantly advances the state-of-the-art capabilities of dense LLMs such as Llama 405B and Mistral Large 2, and even achieves competitive results with DeepSeek-R1, whose sparse model structure contains many more parameters. Our exploration demonstrates that Ascend NPUs are capable of efficiently and effectively training dense models with more than 100 billion parameters. Our model and system will be available for our commercial customers." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 87, + 412, + 174, + 425 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 412, + 174, + 425 + ], + "spans": [ + { + "bbox": [ + 87, + 412, + 174, + 425 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 86, + 437, + 523, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 437, + 523, + 525 + ], + "spans": [ + { + "bbox": [ + 86, + 437, + 523, + 525 + ], + "type": "text", + "content": "Large Language Models (LLMs) have transformed the landscape and our understanding of Artificial Intelligence. Their remarkable capabilities are enabling more and more AI applications, bringing numerous commercial opportunities. 
Unsurprisingly, teams are racing to push the scaling law to create models with more and more parameters. Although the Transformer [68] structure is a popular choice for large models, it is still debatable whether the models should be sparse or dense. With more than 100 billion parameters, sparse architectures powered by Mixture of Experts (MoE), such as DeepSeek [46, 19], have demonstrated surreal human-like language and thinking abilities [36], which makes sparse models a popular choice when pushing the limit of LLMs." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 86, + 530, + 523, + 618 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 530, + 523, + 618 + ], + "spans": [ + { + "bbox": [ + 86, + 530, + 523, + 618 + ], + "type": "text", + "content": "At the same time, dense models, such as the Qwen [11, 72], Llama [25], and Gemma [67] series, are currently popular among models with fewer than 100 billion parameters thanks to their strong performance in specific skills and ease of deployment. The parameters in dense models are usually easier to optimize, while the dynamic components in sparse models usually need to turn to additional heuristics for stable training. In addition, the dense model structures at inference time make it easier to optimize system performance due to deterministic parameter usage. In this study, we aim to further explore the potential of dense models at large scales and show the performance of dense models can be on par with state-of-the-art MoE models on diverse tasks." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 86, + 623, + 523, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 623, + 523, + 723 + ], + "spans": [ + { + "bbox": [ + 86, + 623, + 523, + 723 + ], + "type": "text", + "content": "The numbers of model parameters and layers are two crucial dimensions to release the full potential of dense models. 
While model parameter count is critical for model performance and plays a central role in scaling laws [38], recent studies [73, 50] suggest that model depth has a significant impact on reasoning capabilities. However, exploring the limits of those two aspects poses significant challenges. Deeper models usually introduce unstable training, manifested as spikes in training loss curves. Experimental observations suggest that those spikes can knock our model out of the ideal parameter landscape and cause irreparable damage to the training process. Meanwhile, training hundreds of billions of parameters in dense models requires orchestrating thousands of AI processors, which poses significant system efficiency challenges." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 89, + 31, + 141, + 51 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 31, + 141, + 51 + ], + "spans": [ + { + "bbox": [ + 89, + 31, + 141, + 51 + ], + "type": "text", + "content": "Pangu" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 441, + 44, + 522, + 54 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 441, + 44, + 522, + 54 + ], + "spans": [ + { + "bbox": [ + 441, + 44, + 522, + 54 + ], + "type": "text", + "content": "TECHNICAL REPORT" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 14, + 219, + 37, + 568 + ], + "type": "aside_text", + "angle": 270, + "lines": [ + { + "bbox": [ + 14, + 219, + 37, + 568 + ], + "spans": [ + { + "bbox": [ + 14, + 219, + 37, + 568 + ], + "type": "text", + "content": "arXiv:2504.07866v2 [cs.CL] 11 Apr 2025" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "text", + "content": "1" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 
612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 88, + 72, + 523, + 193 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 72, + 523, + 193 + ], + "spans": [ + { + "bbox": [ + 88, + 72, + 523, + 193 + ], + "type": "text", + "content": "For our exploration, we introduce Pangu Ultra, a dense Transformer architecture with 135 billion parameters and 94 layers. The model setup is at the forefront scale of the top performing dense models [11, 72, 25, 67]. Regarding challenges of training deep models, we hypothesize that the loss spikes are due to gradient fluctuations, which in turn hinder convergence rates and may lead to training divergence. Therefore, we propose two techniques, the depth-scaled sandwich norm and tiny initialization, both of which are designed to maintain stable gradient norms. Specifically, we first replace pre-layer norm [47] with the sandwich norm [20] and scaled initialization values in the post-layer normalization based on the model's depth. This depth-based adjustment helps control the range of gradient fluctuations effectively. In addition, we scale the standard deviation of weight initialization according to the model's width and depth, leading to tiny initialization. These two techniques lead to more stable gradients throughout the training process, eliminating loss spikes during the training of Pangu Ultra, and improving overall model performance." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 88, + 198, + 523, + 285 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 198, + 523, + 285 + ], + "spans": [ + { + "bbox": [ + 88, + 198, + 523, + 285 + ], + "type": "text", + "content": "In practice, we pre-train Pangu Ultra on 13.2 trillion tokens of our built corpus. In the pre-training stage, we use three phases of data corpora, each with a distinct data recipe. 
The design principles behind the three phases are first to help the model develop knowledge and linguistic competence, then to directly equip it with reasoning ability, and finally to boost its ability to actively learn to reason. The model context window is gradually extended from 4K to 128K. In the post-training stage, we begin with applying efficient supervised fine-tuning (SFT) for a cold start, utilizing a carefully curated set of instruction data. Following this, Pangu Ultra undergoes further optimization through Reinforcement Learning (RL). The overall training of Pangu Ultra is stable in this process." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 88, + 290, + 523, + 455 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 290, + 523, + 455 + ], + "spans": [ + { + "bbox": [ + 88, + 290, + 523, + 455 + ], + "type": "text", + "content": "To handle large-scale model training of more than 100 billion parameters, we utilize a large-scale computing cluster consisting of 8,192 Ascend NPUs and employ a series of system optimizations to improve the system efficiency. The primary challenge is minimizing pipeline bubbles [29] at large scales, which arise due to batch size constraints [35]. We take advantage of the four typical types of parallelism on our Ascend cluster, that is, Data Parallelism (DP), Tensor Parallelism (TP) [63], Sequence Parallelism [39] and Pipeline Parallelism (PP) [30, 51]. As the training cluster scales up, the mini-batch size allocated to each DP decreases, leading to an increased pipeline bubble ratio. 
To mitigate this issue, we employ additional virtual pipeline (VPP) scheduling [52] with fine-grained tuning to ensure load balancing and reduce the PP bubble ratio from " + }, + { + "bbox": [ + 88, + 290, + 523, + 455 + ], + "type": "inline_equation", + "content": "30.45\\%" + }, + { + "bbox": [ + 88, + 290, + 523, + 455 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 88, + 290, + 523, + 455 + ], + "type": "inline_equation", + "content": "6.8\\%" + }, + { + "bbox": [ + 88, + 290, + 523, + 455 + ], + "type": "text", + "content": ". The second challenge is to achieve high training efficiency for long sequences. Both attention mask generation and self-attention computation are time- and memory-intensive, particularly for long contexts. We utilize an NPU Fusion Attention (NFA) operator [4, 18, 17] tailored for the Ascend NPUs, which supports reset attention mask scenarios and eliminates the need to construct the attention mask before calling the NFA, thus improving computational efficiency and reducing memory cost. With several fine-grained system optimizations, we achieve a Model FLOPs Utilization (MFU) [14] of over " + }, + { + "bbox": [ + 88, + 290, + 523, + 455 + ], + "type": "inline_equation", + "content": "50\\%" + }, + { + "bbox": [ + 88, + 290, + 523, + 455 + ], + "type": "text", + "content": " when training Pangu Ultra on 8,192 Ascend NPUs." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 88, + 460, + 523, + 515 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 460, + 523, + 515 + ], + "spans": [ + { + "bbox": [ + 88, + 460, + 523, + 515 + ], + "type": "text", + "content": "On public evaluation benchmarks, Pangu Ultra outperforms existing dense LLMs including Llama 405B and Mistral Large 2 123B on almost all major language tasks, and achieves competitive results with sparse models consisting of more than 500 billion parameters. 
These results indicate that the potential of dense models remains promising to explore. Pangu Ultra also demonstrates that the Ascend NPUs are suitable for exploring the full capabilities of large-scale dense language models." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 88, + 533, + 208, + 545 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 533, + 208, + 545 + ], + "spans": [ + { + "bbox": [ + 88, + 533, + 208, + 545 + ], + "type": "text", + "content": "2 Model Architecture" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 88, + 559, + 523, + 603 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 559, + 523, + 603 + ], + "spans": [ + { + "bbox": [ + 88, + 559, + 523, + 603 + ], + "type": "text", + "content": "The basic architecture of Pangu Ultra is similar to Llama 3 [25]. It has 135 billion parameters with a hidden dimension of 12,288, a SwiGLU [60] feed-forward network (FFN) intermediate size of 28,672, and 94 layers. The attention blocks in Pangu Ultra leverage Group Query Attention (GQA) to reduce KV-cache size by incorporating 96 query heads and 8 KV heads." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 88, + 608, + 523, + 652 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 608, + 523, + 652 + ], + "spans": [ + { + "bbox": [ + 88, + 608, + 523, + 652 + ], + "type": "text", + "content": "There are two crucial differences to address the fundamental challenges of training stability and convergence in large dense LLMs. We propose Depth-Scaled Sandwich-Norm to replace the layer normalization and TinyInit for parameter initialization. By integrating these techniques, Pangu Ultra achieves substantial improvements over previous dense models."
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 88, + 667, + 241, + 679 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 667, + 241, + 679 + ], + "spans": [ + { + "bbox": [ + 88, + 667, + 241, + 679 + ], + "type": "text", + "content": "2.1 Depth-Scaled Sandwich-Norm" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 88, + 689, + 523, + 722 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 689, + 523, + 722 + ], + "spans": [ + { + "bbox": [ + 88, + 689, + 523, + 722 + ], + "type": "text", + "content": "Large-scale dense models typically adopt deeper architectures [22], although MoE models usually scale in width [19]. However, increased depth introduces greater challenges in maintaining training stability. Given the prohibitive cost of pre-training, stable training of large dense LLMs becomes paramount. Pre-Layer" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 742, + 308, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 742, + 308, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 742, + 308, + 750 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 86, + 72, + 523, + 106 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 72, + 523, + 106 + ], + "spans": [ + { + "bbox": [ + 86, + 72, + 523, + 106 + ], + "type": "text", + "content": "Normalization (Pre-LN) has been found to make back-propagation more efficient for deep Transformers [69], leading to its widespread adoption in Transformer-based large language model (LLM) architectures [22, 11, 19]." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 86, + 110, + 523, + 167 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 110, + 523, + 167 + ], + "spans": [ + { + "bbox": [ + 86, + 110, + 523, + 167 + ], + "type": "text", + "content": "However, in models employing the pre-LN structure, the fluctuating output scale of each sub-layer can easily lead to training instability [66]. To address this issue, sandwich-norm [20] applies a layer normalization to each sub-layer's output prior to the residual connection. While the sandwich-norm maintains the scale stability of individual sub-layer outputs, the progressive accumulation of output norms via residual connections across multiple layers may nevertheless lead to training instability." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 86, + 171, + 523, + 228 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 171, + 523, + 228 + ], + "spans": [ + { + "bbox": [ + 86, + 171, + 523, + 228 + ], + "type": "text", + "content": "To mitigate this, we present the depth-scaled sandwich norm, which integrates the sandwich norm with a depth-scaled initialization scheme. The layer normalization regulates layer-wise output magnitudes through trainable gamma parameters, which are initialized with values scaled proportionally to the inverse square root of the network depth. Figure 1 illustrates the differences between the depth-scaled sandwich-norm and pre-norm architectures. 
The formula of depth-scaled sandwich-norm is" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 188, + 240, + 523, + 267 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 188, + 240, + 523, + 267 + ], + "spans": [ + { + "bbox": [ + 188, + 240, + 523, + 267 + ], + "type": "interline_equation", + "content": "\\mathbf {h} \\leftarrow \\mathbf {h} + \\operatorname {N o r m} \\left(\\gamma_ {\\text {a t t n}}, \\operatorname {A T T N} (\\operatorname {N o r m} (\\mathbf {h}))\\right), \\quad \\gamma_ {\\text {a t t n}} = \\frac {c _ {\\text {a t t n}}}{\\sqrt {L}}, \\tag {1}", + "image_path": "ddeb66c0c53a04620a9024e649a768af364f969ed8370e1e8f205f5824420062.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 190, + 264, + 416, + 288 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 264, + 416, + 288 + ], + "spans": [ + { + "bbox": [ + 190, + 264, + 416, + 288 + ], + "type": "interline_equation", + "content": "\\mathbf {h} \\leftarrow \\mathbf {h} + \\operatorname {N o r m} \\left(\\gamma_ {\\mathrm {m l p}}, \\operatorname {M L P} (\\operatorname {N o r m} (\\mathbf {h}))\\right), \\quad \\gamma_ {\\mathrm {m l p}} = \\frac {c _ {\\mathrm {m l p}}}{\\sqrt {L}},", + "image_path": "80f81f5cf4a096dc4a1fdd435b54fa30a25cab4606e09472fa9143185fab1a03.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 86, + 297, + 523, + 331 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 297, + 523, + 331 + ], + "spans": [ + { + "bbox": [ + 86, + 297, + 523, + 331 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 86, + 297, + 523, + 331 + ], + "type": "inline_equation", + "content": "L" + }, + { + "bbox": [ + 86, + 297, + 523, + 331 + ], + "type": "text", + "content": " is the number of layers, " + }, + { + "bbox": [ + 86, + 297, + 523, + 331 + ], + "type": "inline_equation", + "content": "c_{\\mathrm{attn}}" + }, + { + "bbox": [ + 86, + 297, + 523, + 331 + ], + 
"type": "text", + "content": " and " + }, + { + "bbox": [ + 86, + 297, + 523, + 331 + ], + "type": "inline_equation", + "content": "c_{\\mathrm{mlp}}" + }, + { + "bbox": [ + 86, + 297, + 523, + 331 + ], + "type": "text", + "content": " are set as the initial output standard deviations of the attention layer and feed-forward network (FFN) layer, respectively. For Pangu Ultra, we set " + }, + { + "bbox": [ + 86, + 297, + 523, + 331 + ], + "type": "inline_equation", + "content": "c_{\\mathrm{attn}}" + }, + { + "bbox": [ + 86, + 297, + 523, + 331 + ], + "type": "text", + "content": " to 0.283 and " + }, + { + "bbox": [ + 86, + 297, + 523, + 331 + ], + "type": "inline_equation", + "content": "c_{\\mathrm{mlp}}" + }, + { + "bbox": [ + 86, + 297, + 523, + 331 + ], + "type": "text", + "content": " to 0.432." + } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 150, + 347, + 245, + 485 + ], + "blocks": [ + { + "bbox": [ + 150, + 347, + 245, + 485 + ], + "lines": [ + { + "bbox": [ + 150, + 347, + 245, + 485 + ], + "spans": [ + { + "bbox": [ + 150, + 347, + 245, + 485 + ], + "type": "image", + "image_path": "4eb945e428bda4bc2842653b548c0700ebd7824c868c06942ff9fe8b6fda3cf9.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 86, + 494, + 525, + 539 + ], + "lines": [ + { + "bbox": [ + 86, + 494, + 525, + 539 + ], + "spans": [ + { + "bbox": [ + 86, + 494, + 525, + 539 + ], + "type": "text", + "content": "Figure 1: Structure comparison between Pre-Layer Norm (Pre-LN) and Depth-Scaled Sandwich-Norm (DSSN). DSSN applies normalization layers to both before and after the attention and FFN block, while Pre-LN only utilizes one normalization layer. DSSN also employs a depth-scaled initialization schema, which is not in the original sandwich norm." 
+ } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 288, + 347, + 474, + 484 + ], + "blocks": [ + { + "bbox": [ + 288, + 347, + 474, + 484 + ], + "lines": [ + { + "bbox": [ + 288, + 347, + 474, + 484 + ], + "spans": [ + { + "bbox": [ + 288, + 347, + 474, + 484 + ], + "type": "image", + "image_path": "8c2b76136987d8146ba87fcdd40ec48bbd7f765998e79609c7c6138eeb85aad7.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "bbox": [ + 87, + 563, + 197, + 574 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 563, + 197, + 574 + ], + "spans": [ + { + "bbox": [ + 87, + 563, + 197, + 574 + ], + "type": "text", + "content": "2.2 Model Initialization" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 86, + 583, + 523, + 640 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 583, + 523, + 640 + ], + "spans": [ + { + "bbox": [ + 86, + 583, + 523, + 640 + ], + "type": "text", + "content": "Existing works [53] observe that model initialization plays a crucial role in training stability and performance. Transformer-based LLMs widely adopt small initialization [53], which initializes all the weights with a normal distribution of standard deviation " + }, + { + "bbox": [ + 86, + 583, + 523, + 640 + ], + "type": "inline_equation", + "content": "\\sqrt{\\frac{2}{5d}}" + }, + { + "bbox": [ + 86, + 583, + 523, + 640 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 86, + 583, + 523, + 640 + ], + "type": "inline_equation", + "content": "d" + }, + { + "bbox": [ + 86, + 583, + 523, + 640 + ], + "type": "text", + "content": " is the hidden dimension. 
It's also common practice to scale the weights of residual layers at initialization by a factor of " + }, + { + "bbox": [ + 86, + 583, + 523, + 640 + ], + "type": "inline_equation", + "content": "1 / \\sqrt{L}" + }, + { + "bbox": [ + 86, + 583, + 523, + 640 + ], + "type": "text", + "content": " [57], where " + }, + { + "bbox": [ + 86, + 583, + 523, + 640 + ], + "type": "inline_equation", + "content": "L" + }, + { + "bbox": [ + 86, + 583, + 523, + 640 + ], + "type": "text", + "content": " is the number of layers." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 86, + 644, + 523, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 644, + 523, + 696 + ], + "spans": [ + { + "bbox": [ + 86, + 644, + 523, + 696 + ], + "type": "text", + "content": "Our findings suggest that scaling initialization by both model depth and width, using " + }, + { + "bbox": [ + 86, + 644, + 523, + 696 + ], + "type": "inline_equation", + "content": "\\sqrt{\\frac{1}{2dL}}" + }, + { + "bbox": [ + 86, + 644, + 523, + 696 + ], + "type": "text", + "content": ", leads to faster loss convergence and improved performance on downstream tasks. We call this initialization method TinyInit. We hypothesize that TinyInit achieves more consistent parameter scales across the model, which may facilitate optimization and convergence." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 86, + 700, + 523, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 700, + 523, + 723 + ], + "spans": [ + { + "bbox": [ + 86, + 700, + 523, + 723 + ], + "type": "text", + "content": "Research [66] indicates that embedding layers require different initialization strategies compared to other layers. 
Specifically, maintaining the standard deviation of embedding weights close to 1 may enhance training" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 86, + 72, + 523, + 97 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 72, + 523, + 97 + ], + "spans": [ + { + "bbox": [ + 86, + 72, + 523, + 97 + ], + "type": "text", + "content": "stability. Our experimental results indicate that initializing with a standard deviation of 0.5 achieves good model performance." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 87, + 108, + 155, + 119 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 108, + 155, + 119 + ], + "spans": [ + { + "bbox": [ + 87, + 108, + 155, + 119 + ], + "type": "text", + "content": "2.3 Tokenizer" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 86, + 129, + 523, + 196 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 129, + 523, + 196 + ], + "spans": [ + { + "bbox": [ + 86, + 129, + 523, + 196 + ], + "type": "text", + "content": "The design of the tokenizer significantly impacts model performance. An optimal vocabulary balances domain coverage (handling diverse tasks such as text, math, and code) with efficiency (encoding data with fewer tokens). Common methods such as Byte-Pair Encoding (BPE) [62] and SentencePiece [40] build vocabularies by directly computing word frequencies across the entire training dataset. 
However, this approach suffers from domain imbalance, as common domains such as general text dominate the vocabulary, while specialized domains such as math and code remain underrepresented due to their limited data volume." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 86, + 199, + 523, + 256 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 199, + 523, + 256 + ], + "spans": [ + { + "bbox": [ + 86, + 199, + 523, + 256 + ], + "type": "text", + "content": "Pangu Ultra adopts a domain-aware vocabulary strategy. We perform independent frequency analyses across multiple domains including general Chinese, general English, code, and mathematics, generating distinct domain-specific vocabularies. These vocabularies are then merged and de-duplicated to form a unified vocabulary of 153,376 unique tokens, maintaining balanced representation across domains while preserving overall compression efficiency. Table 1 summarizes the detailed token distribution across different domains." + } + ] + } + ], + "index": 3 + }, + { + "type": "table", + "bbox": [ + 181, + 284, + 430, + 402 + ], + "blocks": [ + { + "bbox": [ + 167, + 272, + 443, + 283 + ], + "lines": [ + { + "bbox": [ + 167, + 272, + 443, + 283 + ], + "spans": [ + { + "bbox": [ + 167, + 272, + 443, + 283 + ], + "type": "text", + "content": "Table 1: Token distribution in the unified vocabulary of Pangu Ultra." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 181, + 284, + 430, + 402 + ], + "lines": [ + { + "bbox": [ + 181, + 284, + 430, + 402 + ], + "spans": [ + { + "bbox": [ + 181, + 284, + 430, + 402 + ], + "type": "table", + "html": "
DomainNumber of TokensPercentage (%)
English68,01744.35
Chinese41,05326.77
Other30,57319.93
Latin-based languages4,5072.94
Arabic2,7551.80
Korean2,7331.78
Mathematics2,1391.39
Japanese1,5991.04
Total153,376100.00
", + "image_path": "381a61c48d4873ff6cc62083bf48d8b583cd52cc7d46518e58419af2a4afcd0a.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 87, + 430, + 188, + 443 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 430, + 188, + 443 + ], + "spans": [ + { + "bbox": [ + 87, + 430, + 188, + 443 + ], + "type": "text", + "content": "3 Model Training" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 86, + 454, + 525, + 499 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 454, + 525, + 499 + ], + "spans": [ + { + "bbox": [ + 86, + 454, + 525, + 499 + ], + "type": "text", + "content": "In this section, we present our training pipeline, which is similar to training state-of-the-art language models, e.g., DeepSeek-V3 [19] and Llama 3 [22]. The training process consists of three main stages: pre-training, long context extension, and post-training. Each stage has specific training strategies and data construction methods to gradually enhance the model capabilities." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 87, + 512, + 191, + 525 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 512, + 191, + 525 + ], + "spans": [ + { + "bbox": [ + 87, + 512, + 191, + 525 + ], + "type": "text", + "content": "3.1 Pre-training Stage" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 86, + 532, + 523, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 532, + 523, + 567 + ], + "spans": [ + { + "bbox": [ + 86, + 532, + 523, + 567 + ], + "type": "text", + "content": "We first introduce the data construction in the pre-training of Pangu Ultra, followed by the details of data verification. Then we elaborate the practical approach for the long context extension. The detailed pre-training hyper-parameters are finally presented." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 87, + 578, + 199, + 588 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 578, + 199, + 588 + ], + "spans": [ + { + "bbox": [ + 87, + 578, + 199, + 588 + ], + "type": "text", + "content": "3.1.1 Data Construction" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 86, + 596, + 523, + 662 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 596, + 523, + 662 + ], + "spans": [ + { + "bbox": [ + 86, + 596, + 523, + 662 + ], + "type": "text", + "content": "The pre-training corpus of Pangu Ultra contains 13.2T high-quality and diverse tokens produced by our tokenizer, as stated in Section 2.3. Table 2 shows that the pre-training process is structured into three sequential phases: the general phase, the reasoning phase, and the annealing phase. These phases are designed to progressively develop general knowledge and linguistic capabilities, enhance reasoning skills, and further refine knowledge and behavior, respectively. The amounts of data used in the three phases are 12T (comprising 7.4T and 4.6T tokens in two distinct sub-phases), 0.8T, and 0.4T tokens, respectively." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 86, + 667, + 523, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 667, + 523, + 723 + ], + "spans": [ + { + "bbox": [ + 86, + 667, + 523, + 723 + ], + "type": "text", + "content": "In the initial general training phase, we utilize a corpus focused on developing broad linguistic capabilities and general knowledge. This stage primarily consists of English and Chinese data collected from a diverse range of sources, including web pages, books, encyclopedias, etc. Data from multilingual sources and various industrial domains is also incorporated. Based on our data quality assessment in Section 3.1.2, we prefer to use higher-quality data in the second sub-phase than in the first." 
+ } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 194, + 89, + 414, + 171 + ], + "blocks": [ + { + "bbox": [ + 206, + 78, + 402, + 89 + ], + "lines": [ + { + "bbox": [ + 206, + 78, + 402, + 89 + ], + "spans": [ + { + "bbox": [ + 206, + 78, + 402, + 89 + ], + "type": "text", + "content": "Table 2: Data recipe of Pangu Ultra pre-training." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 194, + 89, + 414, + 171 + ], + "lines": [ + { + "bbox": [ + 194, + 89, + 414, + 171 + ], + "spans": [ + { + "bbox": [ + 194, + 89, + 414, + 171 + ], + "type": "table", + "html": "
DatasetGeneralReasoningAnnealing
General English54%14%21%
General Chinese13%6%20%
Multi-lingual8%4%3%
Instruction2%11%20%
Math6%28%18%
Code17%37%18%
", + "image_path": "eb74fadcfd096c8aaeae1875a83d62eb78cc6b37c897972dd151607c90ddf109.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 86, + 198, + 523, + 243 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 198, + 523, + 243 + ], + "spans": [ + { + "bbox": [ + 86, + 198, + 523, + 243 + ], + "type": "text", + "content": "In the second reasoning phase, we increase the proportion of high-quality and diverse mathematical and coding data—raising it to over " + }, + { + "bbox": [ + 86, + 198, + 523, + 243 + ], + "type": "inline_equation", + "content": "60\\%" + }, + { + "bbox": [ + 86, + 198, + 523, + 243 + ], + "type": "text", + "content": " of the corpus to enhance the reasoning capabilities of Pangu Ultra. The coding data includes both pure code and mixed text-code samples. The math data also involves a lot of English and Chinese texts. Moreover, LLM-generated synthetic data is widely incorporated to enrich the corpus." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 86, + 247, + 523, + 302 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 247, + 523, + 302 + ], + "spans": [ + { + "bbox": [ + 86, + 247, + 523, + 302 + ], + "type": "text", + "content": "The third annealing phrase is designed to help the model consolidate and effectively apply the knowledge and reasoning skills acquired in the previous stages. Therefore, we place greater emphasis on instruction data, which accounts for approximately " + }, + { + "bbox": [ + 86, + 247, + 523, + 302 + ], + "type": "inline_equation", + "content": "20\\%" + }, + { + "bbox": [ + 86, + 247, + 523, + 302 + ], + "type": "text", + "content": " of the corpus. We curate in-house question banks covering a wide range of topics and construct both short and long chain-of-thought (CoT) responses. These reasoning paths are carefully refined to ensure clarity and logical coherence." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 86, + 307, + 523, + 352 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 307, + 523, + 352 + ], + "spans": [ + { + "bbox": [ + 86, + 307, + 523, + 352 + ], + "type": "text", + "content": "Overall, the pre-training data for Pangu Ultra is carefully designed to ensure high quality, diversity, and minimal redundancy. We assign quality and difficulty labels to the data and adopt a curriculum-based sampling strategy for the reasoning data across all three phases—progressing from simpler examples to more complex ones throughout the training cycle." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 87, + 369, + 227, + 382 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 369, + 227, + 382 + ], + "spans": [ + { + "bbox": [ + 87, + 369, + 227, + 382 + ], + "type": "text", + "content": "3.1.2 Data Quality Assessment" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 86, + 392, + 523, + 415 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 392, + 523, + 415 + ], + "spans": [ + { + "bbox": [ + 86, + 392, + 523, + 415 + ], + "type": "text", + "content": "Data quality assessment plays a crucial role in enhancing the overall quality of the data. Training Pangu Ultra employs both rule-based heuristics and model-based evaluation to enhance data quality." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 86, + 418, + 523, + 485 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 418, + 523, + 485 + ], + "spans": [ + { + "bbox": [ + 86, + 418, + 523, + 485 + ], + "type": "text", + "content": "For model-based quality assessment, we leverage the Pangu series as the base model. To better align quality evaluation with human value judgments, we fine-tune the model using a manually annotated dataset. The fine-tuned evaluator is then applied to a large-scale pre-training corpus exceeding 10T tokens. 
Data samples are scored across multiple dimensions, including cleanliness, fluency, educational value, and richness. These annotated scores are then used in a prioritized sampling strategy, where higher-quality samples are assigned higher sampling probabilities." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 86, + 490, + 523, + 534 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 490, + 523, + 534 + ], + "spans": [ + { + "bbox": [ + 86, + 490, + 523, + 534 + ], + "type": "text", + "content": "To validate the effectiveness of our data quality assessment, we conducted an ablation study using a proxy model with 2.6 billion parameters. Empirical results show that, to achieve comparable performance, the model trained on low-scoring data required " + }, + { + "bbox": [ + 86, + 490, + 523, + 534 + ], + "type": "inline_equation", + "content": "1.6 \\times" + }, + { + "bbox": [ + 86, + 490, + 523, + 534 + ], + "type": "text", + "content": " more tokens than the one trained on high-scoring data. Therefore, high data quality is important for improving training efficiency." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 86, + 552, + 224, + 564 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 552, + 224, + 564 + ], + "spans": [ + { + "bbox": [ + 86, + 552, + 224, + 564 + ], + "type": "text", + "content": "3.1.3 Pre-training Parameters" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 86, + 574, + 523, + 619 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 574, + 523, + 619 + ], + "spans": [ + { + "bbox": [ + 86, + 574, + 523, + 619 + ], + "type": "text", + "content": "Pangu Ultra is trained using the AdamW optimizer [48] with a weight decay of 0.1 and an epsilon of " + }, + { + "bbox": [ + 86, + 574, + 523, + 619 + ], + "type": "inline_equation", + "content": "1 \\times 10^{-8}" + }, + { + "bbox": [ + 86, + 574, + 523, + 619 + ], + "type": "text", + "content": ". 
The momentum parameters are set to " + }, + { + "bbox": [ + 86, + 574, + 523, + 619 + ], + "type": "inline_equation", + "content": "\\beta_{1} = 0.9" + }, + { + "bbox": [ + 86, + 574, + 523, + 619 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 86, + 574, + 523, + 619 + ], + "type": "inline_equation", + "content": "\\beta_{2} = 0.95" + }, + { + "bbox": [ + 86, + 574, + 523, + 619 + ], + "type": "text", + "content": ". The gradient clipping norm is set to 1.0. To improve the training stability and overall performance, the pre-training of Pangu Ultra is organized into the following phases:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 86, + 623, + 523, + 667 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 623, + 523, + 667 + ], + "spans": [ + { + "bbox": [ + 86, + 623, + 523, + 667 + ], + "type": "text", + "content": "0T-7.4T tokens The sequence length is set to 4K (RoPE base " + }, + { + "bbox": [ + 86, + 623, + 523, + 667 + ], + "type": "inline_equation", + "content": "= 1 \\times 10^{4}" + }, + { + "bbox": [ + 86, + 623, + 523, + 667 + ], + "type": "text", + "content": "). The batch size increases from 1,024 to 1,536 (at 1.2T) and 2,048 (at 1.9T). The increased batch size improves training efficiency and throughput. The learning rate follows a cosine decay from " + }, + { + "bbox": [ + 86, + 623, + 523, + 667 + ], + "type": "inline_equation", + "content": "1 \\times 10^{-4}" + }, + { + "bbox": [ + 86, + 623, + 523, + 667 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 86, + 623, + 523, + 667 + ], + "type": "inline_equation", + "content": "1 \\times 10^{-5}" + }, + { + "bbox": [ + 86, + 623, + 523, + 667 + ], + "type": "text", + "content": " with 4,000 warmup steps to ensure stable early training." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 86, + 672, + 522, + 694 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 672, + 522, + 694 + ], + "spans": [ + { + "bbox": [ + 86, + 672, + 522, + 694 + ], + "type": "text", + "content": "7.4T-12.0T tokens The sequence length remains at 4K with a batch size of 2,048. The learning rate is fixed at " + }, + { + "bbox": [ + 86, + 672, + 522, + 694 + ], + "type": "inline_equation", + "content": "1 \\times 10^{-5}" + }, + { + "bbox": [ + 86, + 672, + 522, + 694 + ], + "type": "text", + "content": " in this phase." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 86, + 700, + 522, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 700, + 522, + 723 + ], + "spans": [ + { + "bbox": [ + 86, + 700, + 522, + 723 + ], + "type": "text", + "content": "12.0T-12.8T tokens The sequence length increases to 8K (RoPE base " + }, + { + "bbox": [ + 86, + 700, + 522, + 723 + ], + "type": "inline_equation", + "content": "= 1 \\times 10^{5}" + }, + { + "bbox": [ + 86, + 700, + 522, + 723 + ], + "type": "text", + "content": "). The batch size is reduced to 1,536. The learning rate decays from " + }, + { + "bbox": [ + 86, + 700, + 522, + 723 + ], + "type": "inline_equation", + "content": "1 \\times 10^{-5}" + }, + { + "bbox": [ + 86, + 700, + 522, + 723 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 86, + 700, + 522, + 723 + ], + "type": "inline_equation", + "content": "7.5 \\times 10^{-6}" + }, + { + "bbox": [ + 86, + 700, + 522, + 723 + ], + "type": "text", + "content": " using cosine scheduling." 
+ } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 87, + 72, + 217, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 72, + 217, + 85 + ], + "spans": [ + { + "bbox": [ + 87, + 72, + 217, + 85 + ], + "type": "text", + "content": "3.2 Long Context Extension" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 86, + 97, + 523, + 141 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 97, + 523, + 141 + ], + "spans": [ + { + "bbox": [ + 86, + 97, + 523, + 141 + ], + "type": "text", + "content": "The ability of LLMs to understand long context inputs is critical for long-thinking processes and practical applications. In the final stages of pre-training, Pangu Ultra is trained on long sequence data to support a maximum context length of 128K. The training consists of two progressive phases: the first phase expands the context length to 32K, and the second phase further expands it to 128K." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 86, + 146, + 523, + 245 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 146, + 523, + 245 + ], + "spans": [ + { + "bbox": [ + 86, + 146, + 523, + 245 + ], + "type": "text", + "content": "Rotary Position Embedding (RoPE) [64] is the core module for supporting ultra-long input sequences. Existing open-source LLMs typically extend context length either by increasing the base frequency in RoPE [64, 32] or by adopting methods such as YaRN [55, 22, 19]. 
Our findings show that both methods perform similarly well if the hyper-parameters are correctly chosen, and we adopt the increased base frequency method in Pangu Ultra. To determine the base frequency in RoPE for long-context extension, we evaluate the offline performance of \"Needle In A Haystack\" (NIAH) with different base frequencies at the target sequence length, and select the one with the best result. This ensures a relatively low initial loss in long-context training. In practice, the selected base frequency for " + }, + { + "bbox": [ + 86, + 146, + 523, + 245 + ], + "type": "inline_equation", + "content": "32\\mathrm{K}" + }, + { + "bbox": [ + 86, + 146, + 523, + 245 + ], + "type": "text", + "content": " is " + }, + { + "bbox": [ + 86, + 146, + 523, + 245 + ], + "type": "inline_equation", + "content": "1.6\\times 10^{6}" + }, + { + "bbox": [ + 86, + 146, + 523, + 245 + ], + "type": "text", + "content": ", and for " + }, + { + "bbox": [ + 86, + 146, + 523, + 245 + ], + "type": "inline_equation", + "content": "128\\mathrm{K}" + }, + { + "bbox": [ + 86, + 146, + 523, + 245 + ], + "type": "text", + "content": " is " + }, + { + "bbox": [ + 86, + 146, + 523, + 245 + ], + "type": "inline_equation", + "content": "2.56\\times 10^{7}" + }, + { + "bbox": [ + 86, + 146, + 523, + 245 + ], + "type": "text", + "content": ". Detailed hyper-parameters of Pangu Ultra long context training are summarized below:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 86, + 249, + 523, + 274 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 249, + 523, + 274 + ], + "spans": [ + { + "bbox": [ + 86, + 249, + 523, + 274 + ], + "type": "text", + "content": "8K to 32K phase The sequence length is expanded to 32K (RoPE base " + }, + { + "bbox": [ + 86, + 249, + 523, + 274 + ], + "type": "inline_equation", + "content": "= 1.6 \\times 10^{6}" + }, + { + "bbox": [ + 86, + 249, + 523, + 274 + ], + "type": "text", + "content": "). 
The batch size is 384 with a learning rate of " + }, + { + "bbox": [ + 86, + 249, + 523, + 274 + ], + "type": "inline_equation", + "content": "7.5 \\times 10^{-6}" + }, + { + "bbox": [ + 86, + 249, + 523, + 274 + ], + "type": "text", + "content": ", matching the final learning rate from the previous pre-training stage." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 86, + 277, + 523, + 301 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 277, + 523, + 301 + ], + "spans": [ + { + "bbox": [ + 86, + 277, + 523, + 301 + ], + "type": "text", + "content": "32K to 128K phase The sequence length is further expanded to " + }, + { + "bbox": [ + 86, + 277, + 523, + 301 + ], + "type": "inline_equation", + "content": "128\\mathrm{K}" + }, + { + "bbox": [ + 86, + 277, + 523, + 301 + ], + "type": "text", + "content": " (RoPE base " + }, + { + "bbox": [ + 86, + 277, + 523, + 301 + ], + "type": "inline_equation", + "content": "= 2.56 \\times 10^{7}" + }, + { + "bbox": [ + 86, + 277, + 523, + 301 + ], + "type": "text", + "content": "). The batch size is reduced to 96. The learning rate remains " + }, + { + "bbox": [ + 86, + 277, + 523, + 301 + ], + "type": "inline_equation", + "content": "7.5 \\times 10^{-6}" + }, + { + "bbox": [ + 86, + 277, + 523, + 301 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 87, + 324, + 216, + 337 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 324, + 216, + 337 + ], + "spans": [ + { + "bbox": [ + 87, + 324, + 216, + 337 + ], + "type": "text", + "content": "3.3 Post-training Alignment" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 86, + 349, + 523, + 384 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 349, + 523, + 384 + ], + "spans": [ + { + "bbox": [ + 86, + 349, + 523, + 384 + ], + "type": "text", + "content": "In the post-training stage, Pangu Ultra is aligned with human preferences through Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL). This stage focuses on constructing high-quality, diverse instruction data and designing scalable, efficient training strategies." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 87, + 406, + 200, + 418 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 406, + 200, + 418 + ], + "spans": [ + { + "bbox": [ + 87, + 406, + 200, + 418 + ], + "type": "text", + "content": "3.3.1 Post-training Data" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 86, + 429, + 523, + 496 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 429, + 523, + 496 + ], + "spans": [ + { + "bbox": [ + 86, + 429, + 523, + 496 + ], + "type": "text", + "content": "In constructing post-training data, we emphasize the data quality, diversity, and complexity. The data pool is curated from a wide range of domains and task types, including general question answering, AI-generated content (AIGC), text classification and analysis, programming, mathematics, logical reasoning, and tool usage. These tasks cover application areas such as finance, healthcare, and public services. Data sources span open-source instruction datasets, real-world industrial queries, and synthetic problems derived from the pre-training corpus." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 86, + 499, + 523, + 545 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 499, + 523, + 545 + ], + "spans": [ + { + "bbox": [ + 86, + 499, + 523, + 545 + ], + "type": "text", + "content": "To promote data diversity, data samples are selected along two orthogonal dimensions, guided by the entropy law [74]: domain and task type. Hierarchical tagging models with varying levels of granularity are used to support balanced data sampling. Data quality is managed through a combination of rule-based validation and model-based validation, which helps eliminate low-quality or ambiguous samples." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 86, + 548, + 523, + 584 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 548, + 523, + 584 + ], + "spans": [ + { + "bbox": [ + 86, + 548, + 523, + 584 + ], + "type": "text", + "content": "To better stimulate the reasoning capabilities of Pangu Ultra, a large portion of the post-training data, approximately six-sevenths, consists of reasoning tasks such as mathematics, coding, and logic. The post-training data covers a range of complexities, with a focus on moderately to highly challenging tasks." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 87, + 605, + 215, + 618 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 605, + 215, + 618 + ], + "spans": [ + { + "bbox": [ + 87, + 605, + 215, + 618 + ], + "type": "text", + "content": "3.3.2 Post-training Strategy" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 86, + 628, + 523, + 663 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 628, + 523, + 663 + ], + "spans": [ + { + "bbox": [ + 86, + 628, + 523, + 663 + ], + "type": "text", + "content": "In the post-training stage, Pangu Ultra was first trained with SFT to establish preliminary instruction-following capabilities. 
Following SFT, we apply RL with outcome-based reward signals to further enhance the reasoning, alignment, and instruction-following abilities of Pangu Ultra." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 86, + 667, + 523, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 667, + 523, + 723 + ], + "spans": [ + { + "bbox": [ + 86, + 667, + 523, + 723 + ], + "type": "text", + "content": "We implement a latency-tolerant reinforcement learning framework optimized for the Ascend infrastructure, which will be detailed in a future report. The framework enables efficient large-scale policy optimization on Ascend. To guide the RL process, we implement a hybrid reward system that provides task-specific feedback for mathematics, coding, and general problem-solving. This hybrid reward system combines deterministic reward signals and model-based evaluations to facilitate stable and efficient policy optimization." + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "bbox": [ + 87, + 71, + 192, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 71, + 192, + 85 + ], + "spans": [ + { + "bbox": [ + 87, + 71, + 192, + 85 + ], + "type": "text", + "content": "4 Training System" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 86, + 97, + 525, + 154 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 97, + 525, + 154 + ], + "spans": [ + { + "bbox": [ + 86, + 97, + 525, + 154 + ], + "type": "text", + "content": "Training our Pangu Ultra with 135B parameters on 13.2 trillion tokens requires ensuring training 
stability and efficiency in a large-scale computing cluster. In this section, we elaborate on the details of our training system from two important perspectives: parallelization strategies and system-level optimization techniques, in Section 4.2 and Section 4.3, respectively. Overall, we achieve over " + }, + { + "bbox": [ + 86, + 97, + 525, + 154 + ], + "type": "inline_equation", + "content": "52\\%" + }, + { + "bbox": [ + 86, + 97, + 525, + 154 + ], + "type": "text", + "content": " Model FLOPs Utilization (MFU) when training Pangu Ultra on 8,192 Ascend NPUs." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 86, + 167, + 188, + 179 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 167, + 188, + 179 + ], + "spans": [ + { + "bbox": [ + 86, + 167, + 188, + 179 + ], + "type": "text", + "content": "4.1 Computing Setup" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 86, + 188, + 525, + 244 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 188, + 525, + 244 + ], + "spans": [ + { + "bbox": [ + 86, + 188, + 525, + 244 + ], + "type": "text", + "content": "A computing cluster with 8,192 Ascend Neural Processing Units (NPUs) [5, 6] is deployed to train Pangu Ultra. Each node in the cluster houses 8 NPUs, interconnected via the Huawei Cache Coherence System (HCCS) using a full-mesh topology, and each device is equipped with 64GB of memory. Inter-node communication is facilitated through RDMA over Converged Ethernet (RoCE) fabric, leveraging 200 Gbps interconnects for communication between NPUs across different nodes." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 86, + 258, + 284, + 271 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 258, + 284, + 271 + ], + "spans": [ + { + "bbox": [ + 86, + 258, + 284, + 271 + ], + "type": "text", + "content": "4.2 Parallelism Strategies for Model Scaling" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 86, + 279, + 523, + 459 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 279, + 523, + 459 + ], + "spans": [ + { + "bbox": [ + 86, + 279, + 523, + 459 + ], + "type": "text", + "content": "In order to scale model training" + }, + { + "bbox": [ + 86, + 279, + 523, + 459 + ], + "type": "inline_equation", + "content": "^1" + }, + { + "bbox": [ + 86, + 279, + 523, + 459 + ], + "type": "text", + "content": ", we leverage a combination of different parallelism strategies to distribute the model across multiple NPUs, including Data Parallelism (DP) [43], Tensor Parallelism (TP) [63], Sequence Parallelism (SP) [39], and Pipeline Parallelism (PP) [30, 51]. For Pangu Ultra, 128-way DP with ZeRO [58] is employed to reduce the memory cost of model parameters and the associated optimizer states. 8-way TP is applied to leverage the high intra-node bandwidth for efficient activation transfer, while 8-way PP is adopted to utilize inter-node connections, since it only requires transmitting activations at the partition boundaries. However, as mentioned in existing studies [35, 30, 51, 56], pipeline parallelism encounters severe PP bubbles when the training cluster scales up, primarily due to batch size constraints [35]. 
For one-forward-one-backward (1F1B) PP scheduling, the bubble ratio is defined as " + }, + { + "bbox": [ + 86, + 279, + 523, + 459 + ], + "type": "inline_equation", + "content": "\\frac{p - 1}{p - 1 + n}" + }, + { + "bbox": [ + 86, + 279, + 523, + 459 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 86, + 279, + 523, + 459 + ], + "type": "inline_equation", + "content": "p" + }, + { + "bbox": [ + 86, + 279, + 523, + 459 + ], + "type": "text", + "content": " represents the number of pipeline stages and " + }, + { + "bbox": [ + 86, + 279, + 523, + 459 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 86, + 279, + 523, + 459 + ], + "type": "text", + "content": " denotes the number of micro batches for every DP. The ratio represents the idle time of accelerators, as shown in Figure 2. A large-scale training cluster increases the number of DPs, which in turn reduces the number of micro batches assigned to each DP due to batch size constraints, leading to a significant increase in the bubble ratio. Therefore, minimizing bubble ratio is crucial for improving system efficiency. Under such circumstances, we employ interleaved pipeline-parallel scheduling with 6-way virtual PP stages on each device [52] and manage to reduce it from " + }, + { + "bbox": [ + 86, + 279, + 523, + 459 + ], + "type": "inline_equation", + "content": "30.45\\%" + }, + { + "bbox": [ + 86, + 279, + 523, + 459 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 86, + 279, + 523, + 459 + ], + "type": "inline_equation", + "content": "6.8\\%" + }, + { + "bbox": [ + 86, + 279, + 523, + 459 + ], + "type": "text", + "content": ". 
Through careful tuning of load balancing across PP and VPP stages, we are able to achieve approximately " + }, + { + "bbox": [ + 86, + 279, + 523, + 459 + ], + "type": "inline_equation", + "content": "43\\%" + }, + { + "bbox": [ + 86, + 279, + 523, + 459 + ], + "type": "text", + "content": " MFU on an 8,192 NPU cluster as a baseline." + } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 130, + 472, + 476, + 658 + ], + "blocks": [ + { + "bbox": [ + 130, + 472, + 476, + 658 + ], + "lines": [ + { + "bbox": [ + 130, + 472, + 476, + 658 + ], + "spans": [ + { + "bbox": [ + 130, + 472, + 476, + 658 + ], + "type": "image", + "image_path": "3b34ebb39e3da6d7ff8bbd2f5b9f48782f7c2d993f79bfac7786efcf3d058b73.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 146, + 667, + 463, + 681 + ], + "lines": [ + { + "bbox": [ + 146, + 667, + 463, + 681 + ], + "spans": [ + { + "bbox": [ + 146, + 667, + 463, + 681 + ], + "type": "text", + "content": "Figure 2: Pipeline parallelism and the interleaved pipeline-parallel scheduling." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 86, + 700, + 523, + 723 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 700, + 523, + 723 + ], + "spans": [ + { + "bbox": [ + 86, + 700, + 523, + 723 + ], + "type": "text", + "content": "1The training of Pangu Ultra is supported by the MindSpeed [8] and Megatron [7, 63] frameworks, which provide comprehensive parallel strategies and system optimization methods." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "bbox": [ + 87, + 72, + 202, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 72, + 202, + 85 + ], + "spans": [ + { + "bbox": [ + 87, + 72, + 202, + 85 + ], + "type": "text", + "content": "4.3 System Optimization" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 86, + 93, + 525, + 160 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 93, + 525, + 160 + ], + "spans": [ + { + "bbox": [ + 86, + 93, + 525, + 160 + ], + "type": "text", + "content": "Based on the optimizations outlined in Section 4.2 that achieved " + }, + { + "bbox": [ + 86, + 93, + 525, + 160 + ], + "type": "inline_equation", + "content": "43\\%" + }, + { + "bbox": [ + 86, + 93, + 525, + 160 + ], + "type": "text", + "content": " MFU, additional system-level enhancements are implemented to push training efficiency to new heights. Through a combination of kernel fusions, context parallelism via subsequence partitioning, data caching and sharing mechanisms, and other refinements, Pangu Ultra benefits from a significant improvement in training efficiency. 
These comprehensive optimizations enable the system to achieve over " + }, + { + "bbox": [ + 86, + 93, + 525, + 160 + ], + "type": "inline_equation", + "content": "52\\%" + }, + { + "bbox": [ + 86, + 93, + 525, + 160 + ], + "type": "text", + "content": " MFU, representing a " + }, + { + "bbox": [ + 86, + 93, + 525, + 160 + ], + "type": "inline_equation", + "content": "9\\%" + }, + { + "bbox": [ + 86, + 93, + 525, + 160 + ], + "type": "text", + "content": " absolute improvement compared to the baseline configuration mentioned in Section 4.2." + } + ] + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 155, + 176, + 455, + 266 + ], + "blocks": [ + { + "bbox": [ + 155, + 176, + 455, + 266 + ], + "lines": [ + { + "bbox": [ + 155, + 176, + 455, + 266 + ], + "spans": [ + { + "bbox": [ + 155, + 176, + 455, + 266 + ], + "type": "image", + "image_path": "67cea2b063522f19f7edd1b30826bee1c89190c8a74a7264b4d7d481914b5d6b.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 267, + 266, + 378, + 277 + ], + "lines": [ + { + "bbox": [ + 267, + 266, + 378, + 277 + ], + "spans": [ + { + "bbox": [ + 267, + 266, + 378, + 277 + ], + "type": "text", + "content": "(b) The MC2 implementation" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 86, + 288, + 522, + 312 + ], + "lines": [ + { + "bbox": [ + 86, + 288, + 522, + 312 + ], + "spans": [ + { + "bbox": [ + 86, + 288, + 522, + 312 + ], + "type": "text", + "content": "Figure 3: A comparison of the default transformer computation and the MC2 method. Note that in actual training, communication and computation tasks are fused into a single kernel in MC2."
+ } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 87, + 335, + 180, + 346 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 335, + 180, + 346 + ], + "spans": [ + { + "bbox": [ + 87, + 335, + 180, + 346 + ], + "type": "text", + "content": "4.3.1 Kernel Fusion" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 86, + 356, + 523, + 400 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 356, + 523, + 400 + ], + "spans": [ + { + "bbox": [ + 86, + 356, + 523, + 400 + ], + "type": "text", + "content": "Kernel fusion is widely adopted in LLM training to enhance efficiency. It combines multiple operations into a single kernel, reducing the number of data accesses to global memory [17]. During the training phase of Pangu Ultra, key operators are fused, resulting in significant improvements in hardware utilization and overall training efficiency." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 86, + 405, + 523, + 504 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 405, + 523, + 504 + ], + "spans": [ + { + "bbox": [ + 86, + 405, + 523, + 504 + ], + "type": "text", + "content": "MC2 - Merged Compute and Communication Tensor parallelism, when combined with sequence parallelism, introduces All-Gather (AG) and Reduce-Scatter (RS) communication operations for exchanging input and output activations across distributed devices. This approach exhibits a direct dependency between matrix multiplication (MatMul) and AG/RS communications, which fundamentally constrains the overlapping of TP communication with computational workflows. MC2 is implemented [2, 3] to tackle this challenge by fusing MatMul computations with communication operations. It decomposes large computation and communication tasks into fine-grained subtasks and employs pipelined execution to maximize overlap between communication and computation.
Thus, MC2 significantly reduces communication latency and improves hardware utilization (Figure 3)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 86, + 509, + 525, + 575 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 509, + 525, + 575 + ], + "spans": [ + { + "bbox": [ + 86, + 509, + 525, + 575 + ], + "type": "text", + "content": "NPU Fusion Attention Training LLMs with long sequence length suffers from quadratic memory and computational requirements in self-attention mechanisms as sequence length grows. To address these challenges, Flash Attention (FA) has emerged as a standard technique in LLM training owing to its superior performance [18, 17]. Pangu Ultra leverages a self-attention fusion operator, called NPU Fusion Attention (NFA)[9], which is specifically optimized for Ascend NPUs, offering system-level improvements across a wide range of self-attention computation scenarios." + } + ] + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 222, + 590, + 394, + 702 + ], + "blocks": [ + { + "bbox": [ + 222, + 590, + 394, + 702 + ], + "lines": [ + { + "bbox": [ + 222, + 590, + 394, + 702 + ], + "spans": [ + { + "bbox": [ + 222, + 590, + 394, + 702 + ], + "type": "image", + "image_path": "1e0482fca83e3a0c110ebe1d09086c29da16050b788765f76847abe61a9728e5.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 157, + 708, + 453, + 720 + ], + "lines": [ + { + "bbox": [ + 157, + 708, + 453, + 720 + ], + "spans": [ + { + "bbox": [ + 157, + 708, + 453, + 720 + ], + "type": "text", + "content": "Figure 4: Examples of attention mask compression for the NFA operator." 
+ } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 86, + 72, + 523, + 172 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 72, + 523, + 172 + ], + "spans": [ + { + "bbox": [ + 86, + 72, + 523, + 172 + ], + "type": "text", + "content": "It is worth mentioning that Pangu Ultra uses a reset attention mask strategy to prevent self-attention between different documents within a sequence. This requires calculating the corresponding attention mask for every sequence, leading to significant memory and computational overhead. To mitigate the time and memory requirements of generating attention masks, the NFA operator employs a mask compression optimization. As shown in Figure 4, NFA utilizes a " + }, + { + "bbox": [ + 86, + 72, + 523, + 172 + ], + "type": "inline_equation", + "content": "2048 \\times 2048" + }, + { + "bbox": [ + 86, + 72, + 523, + 172 + ], + "type": "text", + "content": " causal mask as a template to construct the computational mask within the fusion attention operator. For every iteration, Pangu Ultra retrieves the actual sequence length based on the position of the end-of-document (eod) token, which is then provided as input to the NFA operator to accelerate the computation of self-attention. The detailed usage of NFA is provided in the Ascend documentation [9]." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 86, + 176, + 523, + 232 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 176, + 523, + 232 + ], + "spans": [ + { + "bbox": [ + 86, + 176, + 523, + 232 + ], + "type": "text", + "content": "Other Kernel Fusions for Efficiency In addition to MC2 and NPU-optimized fused attention, we also integrate a series of kernel fusion optimizations within key components such as RMSNorm [77], SwiGLU [60], and rotary positional embeddings (RoPE) [64], as well as critical processes including gradient accumulation and PP send/receive communications. These fusion operators are designed to reduce kernel launch and memory access overheads, while maintaining high numerical precision and enhancing overall training performance." + } + ] + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 88, + 277, + 187, + 357 + ], + "blocks": [ + { + "bbox": [ + 88, + 277, + 187, + 357 + ], + "lines": [ + { + "bbox": [ + 88, + 277, + 187, + 357 + ], + "spans": [ + { + "bbox": [ + 88, + 277, + 187, + 357 + ], + "type": "image", + "image_path": "16eacb740d6bf0b2784477cb2487f0ab1063e6a5012a50fc66bb0e95be1356a0.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 361, + 147, + 373 + ], + "lines": [ + { + "bbox": [ + 104, + 361, + 147, + 373 + ], + "spans": [ + { + "bbox": [ + 104, + 361, + 147, + 373 + ], + "type": "text", + "content": "(a) Original" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 198, + 277, + 299, + 357 + ], + "blocks": [ + { + "bbox": [ + 155, + 260, + 217, + 270 + ], + "lines": [ + { + "bbox": [ + 155, + 260, + 217, + 270 + ], + "spans": [ + { + "bbox": [ + 155, + 260, + 217, + 270 + ], + "type": "text", + "content": "Causal Masking" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 198, + 277, + 299, + 357 + ], + 
"lines": [ + { + "bbox": [ + 198, + 277, + 299, + 357 + ], + "spans": [ + { + "bbox": [ + 198, + 277, + 299, + 357 + ], + "type": "image", + "image_path": "44eff9e81d69a0bbd3268fc86564bba8580753fe6519428b8e8df3699bcc491e.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 216, + 361, + 265, + 373 + ], + "lines": [ + { + "bbox": [ + 216, + 361, + 265, + 373 + ], + "spans": [ + { + "bbox": [ + 216, + 361, + 265, + 373 + ], + "type": "text", + "content": "(b) Megatron" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 311, + 277, + 412, + 357 + ], + "blocks": [ + { + "bbox": [ + 365, + 260, + 468, + 269 + ], + "lines": [ + { + "bbox": [ + 365, + 260, + 468, + 269 + ], + "spans": [ + { + "bbox": [ + 365, + 260, + 468, + 269 + ], + "type": "text", + "content": "Reset of Attention Mask" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 311, + 277, + 412, + 357 + ], + "lines": [ + { + "bbox": [ + 311, + 277, + 412, + 357 + ], + "spans": [ + { + "bbox": [ + 311, + 277, + 412, + 357 + ], + "type": "image", + "image_path": "ffe59b514cbf23bd67320295314345dd201247ecf63ba855135b76d9846748f8.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 332, + 361, + 380, + 373 + ], + "lines": [ + { + "bbox": [ + 332, + 361, + 380, + 373 + ], + "spans": [ + { + "bbox": [ + 332, + 361, + 380, + 373 + ], + "type": "text", + "content": "(c) Megatron" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 422, + 278, + 519, + 358 + ], + "blocks": [ + { + "bbox": [ + 422, + 278, + 519, + 358 + ], + "lines": [ + { + "bbox": [ + 422, + 278, + 519, + 358 + ], + "spans": [ + { + "bbox": [ + 422, + 278, + 519, + 358 + ], + "type": "image", + "image_path": 
"8840be29e16484031a9e2441ba3bff2385b5b84697466d6f32a2ba9ddc05825e.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 450, + 361, + 481, + 373 + ], + "lines": [ + { + "bbox": [ + 450, + 361, + 481, + 373 + ], + "spans": [ + { + "bbox": [ + 450, + 361, + 481, + 373 + ], + "type": "text", + "content": "(d) Ours" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 121, + 384, + 487, + 396 + ], + "lines": [ + { + "bbox": [ + 121, + 384, + 487, + 396 + ], + "spans": [ + { + "bbox": [ + 121, + 384, + 487, + 396 + ], + "type": "text", + "content": "Figure 5: Examples of the mechanism of sub-sequence partitioning for context parallelism." + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "bbox": [ + 86, + 445, + 292, + 456 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 445, + 292, + 456 + ], + "spans": [ + { + "bbox": [ + 86, + 445, + 292, + 456 + ], + "type": "text", + "content": "4.3.2 Optimization for Long Context Training" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 86, + 470, + 522, + 515 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 470, + 522, + 515 + ], + "spans": [ + { + "bbox": [ + 86, + 470, + 522, + 515 + ], + "type": "text", + "content": "Scaling long-context capabilities is becoming increasingly important for applications such as long document summarization and conversational AI. However, training on long sequences presents several challenges in terms of both time and memory complexity. To improve the efficiency of long-context training, we propose two key strategies, as outlined below." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 86, + 520, + 522, + 619 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 520, + 522, + 619 + ], + "spans": [ + { + "bbox": [ + 86, + 520, + 522, + 619 + ], + "type": "text", + "content": "Sub-Sequence Partitioning for Context Parallelism Context parallelism (CP) is a crucial approach for training very long sequences that divides the input sequence into segments to reduce memory consumption [44, 33]. Yet, with causal masking, simply splitting the sequence into " + }, + { + "bbox": [ + 86, + 520, + 522, + 619 + ], + "type": "inline_equation", + "content": "CP" + }, + { + "bbox": [ + 86, + 520, + 522, + 619 + ], + "type": "text", + "content": " chunks results in a severely imbalanced workload for Ring Self-Attention (RSA) [44] (as shown in Figure 5(a)). Megatron-LM addresses this issue by splitting the sequence into " + }, + { + "bbox": [ + 86, + 520, + 522, + 619 + ], + "type": "inline_equation", + "content": "2 \\times CP" + }, + { + "bbox": [ + 86, + 520, + 522, + 619 + ], + "type": "text", + "content": " chunks, where each rank receives chunks from both the top and bottom, thus balancing the workload within a CP group (Figure 5(b)) [7]. However, this method still results in an imbalanced workload when the attention mask is reset (Figure 5(c)). Therefore, in training with 128k-long contexts, we propose a load-balanced partitioning strategy for CP training, where each rank is responsible for computing two chunks within each subsequence (Figure 5(d))."
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 86, + 623, + 522, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 623, + 522, + 723 + ], + "spans": [ + { + "bbox": [ + 86, + 623, + 522, + 723 + ], + "type": "text", + "content": "Fast Mask Generation and Data Reuse When scaling the training sequence of Pangu Ultra up to 128k, the generation of the attention mask or the calculation of the actual sequence length still incurs a non-negligible performance overhead. Additionally, in the training scenario with reset attention masks, each VPP stage is required to retrieve the corresponding mask or actual sequence length in every iteration, resulting in redundant computations and increased overhead. We optimize these problems by (1) using efficient NPU operators to compute the attention mask, instead of constructing it on the CPU, thus accelerating mask generation and eliminating the need for data transfer between the CPU and NPU, and (2) enabling cross-VPP stage mask sharing, where attention masks are generated by the first stage (VPP0) and shared across different VPP stages on the same rank, thereby avoiding redundant mask computations and memory cost." 
+ } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 87, + 71, + 146, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 71, + 146, + 83 + ], + "spans": [ + { + "bbox": [ + 87, + 71, + 146, + 83 + ], + "type": "text", + "content": "5 Results" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 86, + 96, + 525, + 129 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 96, + 525, + 129 + ], + "spans": [ + { + "bbox": [ + 86, + 96, + 525, + 129 + ], + "type": "text", + "content": "In this section, we discuss the evaluation results of Pangu Ultra, including pre-training performance and post-training outcomes. In addition, we provide comprehensive ablation studies that examine the model architecture and further discuss observations from training Pangu Ultra." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 86, + 143, + 258, + 156 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 143, + 258, + 156 + ], + "spans": [ + { + "bbox": [ + 86, + 143, + 258, + 156 + ], + "type": "text", + "content": "5.1 Pre-Training Loss Curve" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 86, + 163, + 523, + 218 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 163, + 523, + 218 + ], + "spans": [ + { + "bbox": [ + 86, + 163, + 523, + 218 + ], + "type": "text", + "content": "Figure 6 shows the training loss curve of Pangu Ultra during the entire pre-training. Each segment in the loss curve corresponds to one training stage, as described in Section 3.1.3.
In the second interval, although the descent rate moderated due to the constant learning rate, performance metrics continued to improve steadily. The loss curves demonstrate consistent descending trends across all training stages." + } + ] + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 157, + 233, + 453, + 441 + ], + "blocks": [ + { + "bbox": [ + 157, + 233, + 453, + 441 + ], + "lines": [ + { + "bbox": [ + 157, + 233, + 453, + 441 + ], + "spans": [ + { + "bbox": [ + 157, + 233, + 453, + 441 + ], + "type": "image", + "image_path": "4cef587854556619f00415671ee67e04874b43df92f0b322a5c7d7d9f318d9e9.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 148, + 451, + 461, + 464 + ], + "lines": [ + { + "bbox": [ + 148, + 451, + 461, + 464 + ], + "spans": [ + { + "bbox": [ + 148, + 451, + 461, + 464 + ], + "type": "text", + "content": "Figure 6: The training loss curve of Pangu Ultra during the pre-training stage." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 86, + 491, + 523, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 491, + 523, + 536 + ], + "spans": [ + { + "bbox": [ + 86, + 491, + 523, + 536 + ], + "type": "text", + "content": "Zero loss spike As shown in Figure 6, no loss spikes occur throughout the entire pre-training process. While such spikes are common in LLM training [66], their absence here underscores the importance of our depth-scaled sandwich norm and TinyInit in ensuring stable training. The negative effect of loss spikes on model performance will be further elaborated in Section 5.4.1."
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 87, + 548, + 194, + 561 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 548, + 194, + 561 + ], + "spans": [ + { + "bbox": [ + 87, + 548, + 194, + 561 + ], + "type": "text", + "content": "5.2 Pre-Training Stage" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 86, + 570, + 523, + 615 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 570, + 523, + 615 + ], + "spans": [ + { + "bbox": [ + 86, + 570, + 523, + 615 + ], + "type": "text", + "content": "Benchmarks We evaluate Pangu Ultra base model across multiple domains using open-source benchmarks, including language understanding, question answering, code generation, and math problem solving. The evaluation mainly uses English and Chinese test sets, with some additional multilingual benchmarks for broader coverage." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 86, + 624, + 523, + 723 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 86, + 624, + 523, + 658 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 624, + 523, + 658 + ], + "spans": [ + { + "bbox": [ + 86, + 624, + 523, + 658 + ], + "type": "text", + "content": "- Language understanding: We employ Hellaswag [76] and Winogrande for contextual reasoning tasks, DROP [21], RACE [42], and ARC [15] series for comprehensive reading comprehension evaluation, along with PIQA [12], Natural Questions [41] and TriviaQA [37] to assess knowledge retrieval." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 86, + 662, + 523, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 662, + 523, + 696 + ], + "spans": [ + { + "bbox": [ + 86, + 662, + 523, + 696 + ], + "type": "text", + "content": "- Question answering: The assessment includes C-Eval [31] for Chinese knowledge, MMLU [27] and its advanced variant MMLU-Pro [70] for English domain knowledge, supplemented by BigBenchHard [65] to evaluate creative problem-solving." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 86, + 700, + 523, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 700, + 523, + 723 + ], + "spans": [ + { + "bbox": [ + 86, + 700, + 523, + 723 + ], + "type": "text", + "content": "- Code generation and understanding: We utilize HumanEval [13] and MBPP [10] for standard code generation tasks, and CruxEval [26] for code understanding and reasoning." + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 312, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 312, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 312, + 750 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 87, + 72, + 523, + 106 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 72, + 523, + 106 + ], + "spans": [ + { + "bbox": [ + 87, + 72, + 523, + 106 + ], + "type": "text", + "content": "- Mathematical Reasoning: We measure skills with " + }, + { + "bbox": [ + 87, + 72, + 523, + 106 + ], + "type": "inline_equation", + "content": "CMath" + }, + { + "bbox": [ + 87, + 72, + 523, + 106 + ], + "type": "text", + "content": " [71] and " + }, + { + "bbox": [ + 87, + 72, + 523, + 106 + ], + "type": "inline_equation", + "content": "GSM8K" + }, + { + "bbox": [ + 87, +
72, + 523, + 106 + ], + "type": "text", + "content": " [16] for fundamental arithmetic and simple problems, " + }, + { + "bbox": [ + 87, + 72, + 523, + 106 + ], + "type": "inline_equation", + "content": "MATH" + }, + { + "bbox": [ + 87, + 72, + 523, + 106 + ], + "type": "text", + "content": " [28] for advanced mathematical reasoning, and " + }, + { + "bbox": [ + 87, + 72, + 523, + 106 + ], + "type": "inline_equation", + "content": "MGSM" + }, + { + "bbox": [ + 87, + 72, + 523, + 106 + ], + "type": "text", + "content": " [61] for multilingual math problem solving." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 86, + 120, + 523, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 120, + 523, + 175 + ], + "spans": [ + { + "bbox": [ + 86, + 120, + 523, + 175 + ], + "type": "text", + "content": "Baselines & Comparison Settings We compare Pangu Ultra against several strong baselines covering both dense models (Qwen2.5-72B, Llama-405B) and MoE architectures (DeepSeek-V3). For base models, the majority of our evaluations employ few-shot inputs, with a minority using zero-shot prompts. We evaluate most benchmarks with gold answers through exact matching, while employing execution-based verification for code generation tasks." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 86, + 190, + 523, + 235 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 190, + 523, + 235 + ], + "spans": [ + { + "bbox": [ + 86, + 190, + 523, + 235 + ], + "type": "text", + "content": "Evaluation Results In Table 3, we compare the pre-trained base model of Pangu Ultra with other leading models. Overall, Pangu Ultra achieves state-of-the-art performance on most general English benchmarks and all Chinese benchmarks. While it trails DeepSeek V3 on code and math-related tasks, it performs competitively in these domains."
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 86, + 239, + 523, + 296 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 239, + 523, + 296 + ], + "spans": [ + { + "bbox": [ + 86, + 239, + 523, + 296 + ], + "type": "text", + "content": "A closer examination reveals that Pangu Ultra excels on Chinese benchmarks, surpassing both Qwen 2.5 72B and DeepSeek V3, the current best-performing Chinese model. In addition, when compared to Llama 3.1 405B, Pangu Ultra achieves better scores on most of the challenging benchmarks, while utilizing only about " + }, + { + "bbox": [ + 86, + 239, + 523, + 296 + ], + "type": "inline_equation", + "content": "29\\%" + }, + { + "bbox": [ + 86, + 239, + 523, + 296 + ], + "type": "text", + "content": " of the training FLOPs required by Llama 405B. These results suggest the effectiveness of our model architecture and the high quality of our training data." + } + ] + } + ], + "index": 3 + }, + { + "type": "table", + "bbox": [ + 127, + 350, + 480, + 712 + ], + "blocks": [ + { + "bbox": [ + 86, + 315, + 523, + 348 + ], + "lines": [ + { + "bbox": [ + 86, + 315, + 523, + 348 + ], + "spans": [ + { + "bbox": [ + 86, + 315, + 523, + 348 + ], + "type": "text", + "content": "Table 3: Comparison of Pangu Ultra and other representative models across a diverse set of benchmarks for evaluating language, coding and mathematical skills. Bold values represent the best results in each row, and underlined values indicate that Pangu Ultra is the best among dense models." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 127, + 350, + 480, + 712 + ], + "lines": [ + { + "bbox": [ + 127, + 350, + 480, + 712 + ], + "spans": [ + { + "bbox": [ + 127, + 350, + 480, + 712 + ], + "type": "table", + "html": "
Benchmark (Metric)# ShotsQwen2.5 72B BaseLlama-3.1 405B BaseDeepSeek V3 BasePangu Ultra Base
Architecture-DenseDenseMoEDense
# Activated Params-72B405B37B135B
# Total Params-72B405B671B135B
EnglishBBH (EM)3-shot79.882.987.579.1
MMLU (EM)5-shot85.084.487.185.4
MMLU-Pro (EM)5-shot58.352.864.463.1
DROP (F1)3-shot80.686.089.061.0
ARC-Easy (EM)25-shot98.498.498.9100.0
ARC-Challenge (EM)25-shot94.595.395.397.0
HellaSwag (EM)10-shot84.889.288.999.0
PIQA (EM)0-shot82.685.984.798.0
WinoGrande (EM)5-shot82.385.284.991.0
RACE-Middle (EM)5-shot68.174.267.197.0
RACE-High (EM)5-shot50.356.851.397.0
TriviaQA (EM)5-shot71.982.782.990.5
NaturalQuestions (EM)5-shot33.241.540.052.7
AGIEval (EM)0-shot75.860.679.680.4
CodeHumanEval (Pass@1)0-shot53.054.965.281.1
MBPP (Pass@1)3-shot72.668.475.472
CRUXEval-I (EM)2-shot59.158.567.361.8
CRUXEval-O (EM)2-shot59.959.969.861.5
MathGSM8K (EM)8-shot88.383.589.389.3
MATH (EM)4-shot54.449.061.662.5
MGSM (EM)8-shot76.269.979.875.1
CMath (EM)3-shot84.577.390.778.2
ChineseCLUEWSC (EM)5-shot82.583.082.795.0
C-Eval (EM)5-shot89.272.590.190.3
CMMLU (EM)5-shot89.573.788.891.7
CMRC (EM)1-shot75.876.076.386.0
C3 (EM)0-shot76.779.778.699.0
CCPM (EM)0-shot88.578.692.093.0
", + "image_path": "688e6f49bbae37cf3b66fea8df45d115891068481814963c9c72a37797b11531.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 87, + 72, + 284, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 72, + 284, + 85 + ], + "spans": [ + { + "bbox": [ + 87, + 72, + 284, + 85 + ], + "type": "text", + "content": "5.3 Post-Training and Reasoning Capability" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 86, + 92, + 523, + 117 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 92, + 523, + 117 + ], + "spans": [ + { + "bbox": [ + 86, + 92, + 523, + 117 + ], + "type": "text", + "content": "Benchmarks We conduct a comprehensive evaluation of Pangu Ultra's capabilities on reasoning and non-reasoning tasks:" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 86, + 125, + 524, + 185 + ], + "type": "list", + "angle": 0, + "index": 4, + "blocks": [ + { + "bbox": [ + 86, + 125, + 524, + 159 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 125, + 524, + 159 + ], + "spans": [ + { + "bbox": [ + 86, + 125, + 524, + 159 + ], + "type": "text", + "content": "- Sophisticated reasoning tasks encompass three specialized subcategories: mathematical competence measured by AIME 2024 [49] and MATH-500, the coding competition benchmark LiveCodeBench [34], and the scientific reasoning task GPQA Diamond [59];" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 86, +
162, + 523, + 185 + ], + "spans": [ + { + "bbox": [ + 86, + 162, + 523, + 185 + ], + "type": "text", + "content": "- General language comprehension and reasoning capabilities, represented by MMLU-Pro [24] and Arena Hard [45]." + } + ] + } + ], + "index": 3 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 86, + 196, + 523, + 241 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 196, + 523, + 241 + ], + "spans": [ + { + "bbox": [ + 86, + 196, + 523, + 241 + ], + "type": "text", + "content": "Baselines & Comparison Settings We compare Pangu Ultra against strong baselines including GPT-4o-0513, the reasoning models DeepSeek-R1 and Hunyuan-T1, and the large dense models Qwen2.5-72B-Instruct and Mistral-Large 2. We use Pass@1 averaged over multiple independent runs as the evaluation metric." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 86, + 252, + 522, + 298 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 252, + 522, + 298 + ], + "spans": [ + { + "bbox": [ + 86, + 252, + 522, + 298 + ], + "type": "text", + "content": "Evaluation Results In Table 4, we compare the evaluation results of Pangu Ultra with other baseline models. Pangu Ultra achieves state-of-the-art performance on the reasoning benchmarks including AIME 2024, MATH-500, GPQA and LiveCodeBench, while maintaining strong capabilities in general language comprehension tasks." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 86, + 301, + 523, + 348 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 301, + 523, + 348 + ], + "spans": [ + { + "bbox": [ + 86, + 301, + 523, + 348 + ], + "type": "text", + "content": "When compared to dense LLMs (Qwen and Mistral-Large 2), Pangu Ultra shows particularly significant advantages in reasoning tasks. This superior performance stems from the 0.8T tokens of reasoning-focused data used in pre-training (Section 3.1.3).
The reasoning-enhanced base model substantially benefits subsequent post-training phases." + } + ] + } + ], + "index": 7 + }, + { + "type": "table", + "bbox": [ + 89, + 387, + 520, + 495 + ], + "blocks": [ + { + "bbox": [ + 86, + 363, + 522, + 386 + ], + "lines": [ + { + "bbox": [ + 86, + 363, + 522, + 386 + ], + "spans": [ + { + "bbox": [ + 86, + 363, + 522, + 386 + ], + "type": "text", + "content": "Table 4: Comparison of Pangu Ultra models and other representative models across benchmarks. " + }, + { + "bbox": [ + 86, + 363, + 522, + 386 + ], + "type": "inline_equation", + "content": "\\dagger" + }, + { + "bbox": [ + 86, + 363, + 522, + 386 + ], + "type": "text", + "content": " indicates results from Artificial Analysis [1]." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 89, + 387, + 520, + 495 + ], + "lines": [ + { + "bbox": [ + 89, + 387, + 520, + 495 + ], + "spans": [ + { + "bbox": [ + 89, + 387, + 520, + 495 + ], + "type": "table", + "html": "
ModelAIME 2024MATH-500GPQA DiamondLiveCode BenchArenaHardMMLU-pro
GPT-4o-05139.374.649.932.980.472.6
Qwen2.5-72B16.083.14927.681.272.0
Mistral-Large 2†11.073.648.629.3-69.7
Hunyuan-T179.896.269.364.991.987.2
DeepSeek-R179.897.371.565.992.384.0
Pangu Ultra80.897.474.266.591.584.4
", + "image_path": "7919ae6ab8d9a21337ecd5d2e2e396908f906d778d62011a971c810fa5816360.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 87, + 515, + 185, + 528 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 515, + 185, + 528 + ], + "spans": [ + { + "bbox": [ + 87, + 515, + 185, + 528 + ], + "type": "text", + "content": "5.4 Ablation Studies" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 86, + 536, + 523, + 559 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 536, + 523, + 559 + ], + "spans": [ + { + "bbox": [ + 86, + 536, + 523, + 559 + ], + "type": "text", + "content": "This section presents additional ablation studies of the model architecture and analyzes key training behaviors to facilitate a deeper understanding and discussion of dense LLM training." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 87, + 570, + 247, + 582 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 570, + 247, + 582 + ], + "spans": [ + { + "bbox": [ + 87, + 570, + 247, + 582 + ], + "type": "text", + "content": "5.4.1 Depth-scaled Sandwich-norm" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 86, + 589, + 522, + 624 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 589, + 522, + 624 + ], + "spans": [ + { + "bbox": [ + 86, + 589, + 522, + 624 + ], + "type": "text", + "content": "We conducted experiments to validate the effectiveness of depth-scaled sandwich norm compared to pre-norm architectures. Using a dense Transformer model with 13 billion parameters trained on 300 billion tokens with identical hyperparameters for both configurations, we observe significant improvements." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 86, + 627, + 523, + 694 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 627, + 523, + 694 + ], + "spans": [ + { + "bbox": [ + 86, + 627, + 523, + 694 + ], + "type": "text", + "content": "Figure 7 shows the depth-scaled sandwich-norm architecture stabilizes gradient norms and effectively eliminates loss spikes, leading to faster training convergence. We evaluated performance on two composite benchmarks: EN basic, consisting of multiple English benchmarks, and ZH basic, representing Chinese benchmarks. Additional evaluation using LAMBADA [54] (English) and WPLC [23] (Chinese) next-token prediction tasks confirmed the advantage of applying depth-scaled sandwich-norm. The results clearly suggest that preventing loss spikes during pre-training is crucial for optimal model performance." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 86, + 697, + 523, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 697, + 523, + 723 + ], + "spans": [ + { + "bbox": [ + 86, + 697, + 523, + 723 + ], + "type": "text", + "content": "To further ablate the effect of our depth-scaled factor in RMSNorm initialization, we compare with the plain sandwich-norm that does not have the " + }, + { + "bbox": [ + 86, + 697, + 523, + 723 + ], + "type": "inline_equation", + "content": "\\sqrt{L}" + }, + { + "bbox": [ + 86, + 697, + 523, + 723 + ], + "type": "text", + "content": " scaling factor in Eq. (1). 
Here, we use a proxy model containing 1.6" + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 312, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 312, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 312, + 750 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 93, + 81, + 294, + 210 + ], + "blocks": [ + { + "bbox": [ + 93, + 81, + 294, + 210 + ], + "lines": [ + { + "bbox": [ + 93, + 81, + 294, + 210 + ], + "spans": [ + { + "bbox": [ + 93, + 81, + 294, + 210 + ], + "type": "image", + "image_path": "a7350ff1b1e11bf1265c0e1d4a8e0cc37fad10ba9fa99b46a41d65427aa9f37d.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 178, + 216, + 209, + 226 + ], + "lines": [ + { + "bbox": [ + 178, + 216, + 209, + 226 + ], + "spans": [ + { + "bbox": [ + 178, + 216, + 209, + 226 + ], + "type": "text", + "content": "(a) Loss" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 312, + 81, + 518, + 211 + ], + "blocks": [ + { + "bbox": [ + 312, + 81, + 518, + 211 + ], + "lines": [ + { + "bbox": [ + 312, + 81, + 518, + 211 + ], + "spans": [ + { + "bbox": [ + 312, + 81, + 518, + 211 + ], + "type": "image", + "image_path": "f452483736d20db84cf41920e9224133306f09b056f6508e7ab535b3be175ddb.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 381, + 216, + 449, + 226 + ], + "lines": [ + { + "bbox": [ + 381, + 216, + 449, + 226 + ], + "spans": [ + { + "bbox": [ + 381, + 216, + 449, + 226 + ], + "type": "text", + "content": "(b) Gradient norm" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 86, + 232, + 525, + 266 + ], + "lines": [ + { + 
"bbox": [ + 86, + 232, + 525, + 266 + ], + "spans": [ + { + "bbox": [ + 86, + 232, + 525, + 266 + ], + "type": "text", + "content": "Figure 7: Pre-training loss and gradient norm for a 13B model using Pre-LN and Depth-Scaled Sandwich-Norm (DSSN). The curves with Pre-LN has significant spikes, which harm the trained model, while the curves of DSSN are much smoother." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 111, + 296, + 499, + 339 + ], + "blocks": [ + { + "bbox": [ + 129, + 284, + 479, + 295 + ], + "lines": [ + { + "bbox": [ + 129, + 284, + 479, + 295 + ], + "spans": [ + { + "bbox": [ + 129, + 284, + 479, + 295 + ], + "type": "text", + "content": "Table 5: Performance comparison between Pre-LN and Depth-scaled Sandwich-Norm." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 111, + 296, + 499, + 339 + ], + "lines": [ + { + "bbox": [ + 111, + 296, + 499, + 339 + ], + "spans": [ + { + "bbox": [ + 111, + 296, + 499, + 339 + ], + "type": "table", + "html": "
ModelTokens (B)EN basicZH basicLAMBADAWPLC
Pre-LN3000.420.520.6750.194
Depth-scaled sandwich-norm3000.450.540.6930.224
", + "image_path": "bdbab18d054876503d9911625ccd93b413947f5c0cf2e0dc1198f3ecd08db00e.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + }, + { + "bbox": [ + 86, + 357, + 523, + 402 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 357, + 523, + 402 + ], + "spans": [ + { + "bbox": [ + 86, + 357, + 523, + 402 + ], + "type": "text", + "content": "billion parameters and 94 layers, which has the same depth with Pangu Ultra. By using this proxy model, we examine the effectiveness of sandwich-norm on training very deep Transformers. In Figure 8, we can observe some loss spikes with the plain sandwich-norm, but our depth-scaled sandwich-norm can be trained smoothly, and attains lower loss." + } + ] + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 178, + 414, + 434, + 582 + ], + "blocks": [ + { + "bbox": [ + 178, + 414, + 434, + 582 + ], + "lines": [ + { + "bbox": [ + 178, + 414, + 434, + 582 + ], + "spans": [ + { + "bbox": [ + 178, + 414, + 434, + 582 + ], + "type": "image", + "image_path": "9e86702bb026850de11bf3b69527034295140cde348309c4f15a0f509b0108b0.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 86, + 590, + 522, + 615 + ], + "lines": [ + { + "bbox": [ + 86, + 590, + 522, + 615 + ], + "spans": [ + { + "bbox": [ + 86, + 590, + 522, + 615 + ], + "type": "text", + "content": "Figure 8: Pre-training loss for a 94-layer 1.6B model using original and depth-scaled sandwich-norm. The original sandwich-norm still suffers loss spikes during training." 
+ } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "bbox": [ + 87, + 633, + 197, + 645 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 633, + 197, + 645 + ], + "spans": [ + { + "bbox": [ + 87, + 633, + 197, + 645 + ], + "type": "text", + "content": "5.4.2 Tiny Initialization" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 86, + 652, + 522, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 652, + 522, + 723 + ], + "spans": [ + { + "bbox": [ + 86, + 652, + 522, + 723 + ], + "type": "text", + "content": "We conduct experiments to study the effectiveness of TinyInit proposed in Section 2.2. After being trained on 102 billion tokens, Pangu Ultra initialized with TinyInit strategy, with standard deviation " + }, + { + "bbox": [ + 86, + 652, + 522, + 723 + ], + "type": "inline_equation", + "content": "\\sqrt{\\frac{1}{2dL}}" + }, + { + "bbox": [ + 86, + 652, + 522, + 723 + ], + "type": "text", + "content": ", performs significantly better than the baseline model that utilizes traditional initialization, whose standard deviations are " + }, + { + "bbox": [ + 86, + 652, + 522, + 723 + ], + "type": "inline_equation", + "content": "\\sqrt{\\frac{2}{5d}}" + }, + { + "bbox": [ + 86, + 652, + 522, + 723 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 86, + 652, + 522, + 723 + ], + "type": "inline_equation", + "content": "\\sqrt{\\frac{2}{5dL}}" + }, + { + "bbox": [ + 86, + 652, + 522, + 723 + ], + "type": "text", + "content": ". The results are shown in Table 6. BIG-bench (aug) is a test set developed internally through data augmentation of the original BIG-bench, designed to mitigate the impact of test set leakage." 
+ } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 88, + 89, + 532, + 133 + ], + "blocks": [ + { + "bbox": [ + 154, + 77, + 455, + 88 + ], + "lines": [ + { + "bbox": [ + 154, + 77, + 455, + 88 + ], + "spans": [ + { + "bbox": [ + 154, + 77, + 455, + 88 + ], + "type": "text", + "content": "Table 6: Performance comparison of traditional initialization and TinyInit." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 88, + 89, + 532, + 133 + ], + "lines": [ + { + "bbox": [ + 88, + 89, + 532, + 133 + ], + "spans": [ + { + "bbox": [ + 88, + 89, + 532, + 133 + ], + "type": "table", + "html": "
ModelTokens (B)EN basicZH basicLAMBADAWPLCC-EvalMMLUBIG-bench (aug)
Baseline1020.4440.5380.6940.2290.4760.4730.357
TinyInit1020.4560.5370.7270.2570.5240.5020.384
", + "image_path": "6a81bc754358c61ca73a33d2cb633c9cb433eeb8b601d84ad3ea2a99f5e50d84.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 86, + 151, + 251, + 163 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 151, + 251, + 163 + ], + "spans": [ + { + "bbox": [ + 86, + 151, + 251, + 163 + ], + "type": "text", + "content": "5.4.3 Layer Statistics of Pangu Ultra" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 86, + 170, + 523, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 170, + 523, + 248 + ], + "spans": [ + { + "bbox": [ + 86, + 170, + 523, + 248 + ], + "type": "text", + "content": "Stable activation scale Figure 9 presents the activation patterns of attention and FFN modules across Transformer layers, showing the mean, standard deviation, and top-1 activation values. The activation distributions demonstrate stability, with standard deviations maintaining consistent scales throughout pretraining while preserving a clear layer-wise pattern. Our analysis reveals the presence of \"super activations\", whose magnitude reaches " + }, + { + "bbox": [ + 86, + 170, + 523, + 248 + ], + "type": "inline_equation", + "content": "10^{3}" + }, + { + "bbox": [ + 86, + 170, + 523, + 248 + ], + "type": "text", + "content": " magnitude in shallow layers, a phenomenon consistent with findings in the Llama model [75]. Notably, Figure 9 illustrates that these top-1 activation values progressively decrease with layer depth, indicating that their influence becomes relatively small on the final output." 
+ } + ] + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 87, + 256, + 200, + 362 + ], + "blocks": [ + { + "bbox": [ + 87, + 256, + 200, + 362 + ], + "lines": [ + { + "bbox": [ + 87, + 256, + 200, + 362 + ], + "spans": [ + { + "bbox": [ + 87, + 256, + 200, + 362 + ], + "type": "image", + "image_path": "744bdafe62527ab3d0b64dfa4b6e24a915598b7b39cced4aa92fc003fa88a964.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 201, + 256, + 306, + 362 + ], + "blocks": [ + { + "bbox": [ + 201, + 256, + 306, + 362 + ], + "lines": [ + { + "bbox": [ + 201, + 256, + 306, + 362 + ], + "spans": [ + { + "bbox": [ + 201, + 256, + 306, + 362 + ], + "type": "image", + "image_path": "d5da125dd85b2e8c82176eac66539b38d92b199714d6f6de136b35734368f818.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 306, + 256, + 414, + 362 + ], + "blocks": [ + { + "bbox": [ + 306, + 256, + 414, + 362 + ], + "lines": [ + { + "bbox": [ + 306, + 256, + 414, + 362 + ], + "spans": [ + { + "bbox": [ + 306, + 256, + 414, + 362 + ], + "type": "image", + "image_path": "e81b7458f71ddbf3450f01740e5c85949a2351e6341d8edc2b5e85b601b68d53.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 414, + 256, + 522, + 362 + ], + "blocks": [ + { + "bbox": [ + 414, + 256, + 522, + 362 + ], + "lines": [ + { + "bbox": [ + 414, + 256, + 522, + 362 + ], + "spans": [ + { + "bbox": [ + 414, + 256, + 522, + 362 + ], + "type": "image", + "image_path": "dede2658653408b30951dd37efe53c0b13200e71b61574d61c0bcf26b98a01a6.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 86, + 362, + 200, + 468 + ], + "blocks": [ + { + "bbox": [ + 86, + 362, + 200, + 468 + ], + "lines": [ + { + "bbox": [ + 
86, + 362, + 200, + 468 + ], + "spans": [ + { + "bbox": [ + 86, + 362, + 200, + 468 + ], + "type": "image", + "image_path": "5038d458e0828307fbdc502f8a6a1c6d5b0596906f56bf8fb4b3bbfdacab24d8.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 116, + 471, + 190, + 481 + ], + "lines": [ + { + "bbox": [ + 116, + 471, + 190, + 481 + ], + "spans": [ + { + "bbox": [ + 116, + 471, + 190, + 481 + ], + "type": "text", + "content": "(a) Down projection" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 200, + 362, + 307, + 468 + ], + "blocks": [ + { + "bbox": [ + 200, + 362, + 307, + 468 + ], + "lines": [ + { + "bbox": [ + 200, + 362, + 307, + 468 + ], + "spans": [ + { + "bbox": [ + 200, + 362, + 307, + 468 + ], + "type": "image", + "image_path": "c590e32f81ccd10ab1fa17b21f99304a0d063c9c1b7330cdccb19a9acce91b00.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 214, + 471, + 298, + 481 + ], + "lines": [ + { + "bbox": [ + 214, + 471, + 298, + 481 + ], + "spans": [ + { + "bbox": [ + 214, + 471, + 298, + 481 + ], + "type": "text", + "content": "(b) Up & Gate projection" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 307, + 362, + 414, + 468 + ], + "blocks": [ + { + "bbox": [ + 307, + 362, + 414, + 468 + ], + "lines": [ + { + "bbox": [ + 307, + 362, + 414, + 468 + ], + "spans": [ + { + "bbox": [ + 307, + 362, + 414, + 468 + ], + "type": "image", + "image_path": "bbbeb145f0934dc8137745378248963f7a90023907c1fb7ea1f630c66fff4a9c.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 86, + 487, + 522, + 542 + ], + "lines": [ + { + "bbox": [ + 86, + 487, + 522, + 542 + ], + "spans": [ + { + "bbox": [ + 86, + 487, + 522, + 542 + ], + "type": "text", + "content": "Figure 9: Activation 
of attention and FFN modules. Mean, standard deviation, and top-1 value of activations are included. Each line represents different training tokens from 1T, 2T, 4T to 7T. The \"Std\" row shows the stable activation scale across layers. The \"Top 1\" row shows the existence of the \"super activations\" in down projection and attention output projection, with magnitudes falling within a reasonable range and comparable to those observed in the LLaMA model [75]." + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 416, + 362, + 522, + 468 + ], + "blocks": [ + { + "bbox": [ + 315, + 471, + 416, + 481 + ], + "lines": [ + { + "bbox": [ + 315, + 471, + 416, + 481 + ], + "spans": [ + { + "bbox": [ + 315, + 471, + 416, + 481 + ], + "type": "text", + "content": "(c) Attention output projection" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 416, + 362, + 522, + 468 + ], + "lines": [ + { + "bbox": [ + 416, + 362, + 522, + 468 + ], + "spans": [ + { + "bbox": [ + 416, + 362, + 522, + 468 + ], + "type": "image", + "image_path": "a475262120a9476b96557c616ae8cbec27176f471f7ccb1961523ba8495df953.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 424, + 471, + 522, + 481 + ], + "lines": [ + { + "bbox": [ + 424, + 471, + 522, + 481 + ], + "spans": [ + { + "bbox": [ + 424, + 471, + 522, + 481 + ], + "type": "text", + "content": "(d) Attention QKV projection" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_caption" + } + ], + "index": 14 + }, + { + "bbox": [ + 86, + 560, + 522, + 639 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 560, + 522, + 639 + ], + "spans": [ + { + "bbox": [ + 86, + 560, + 522, + 639 + ], + "type": "text", + "content": "Layer-wise patterns of depth-scaled sandwich norm. 
Figure 10 presents the distribution of scaling parameters " + }, + { + "bbox": [ + 86, + 560, + 522, + 639 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 86, + 560, + 522, + 639 + ], + "type": "text", + "content": " across all sandwich-norm layers, revealing several key observations: All four LayerNorm " + }, + { + "bbox": [ + 86, + 560, + 522, + 639 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 86, + 560, + 522, + 639 + ], + "type": "text", + "content": " parameters exhibit decreasing mean/standard deviation during training, consistent with weight decay effects. Post-norm " + }, + { + "bbox": [ + 86, + 560, + 522, + 639 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 86, + 560, + 522, + 639 + ], + "type": "text", + "content": " values show layer-dependent patterns: The standard deviation of post-norm " + }, + { + "bbox": [ + 86, + 560, + 522, + 639 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 86, + 560, + 522, + 639 + ], + "type": "text", + "content": " increases substantially with layer depth. Pre-norm " + }, + { + "bbox": [ + 86, + 560, + 522, + 639 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 86, + 560, + 522, + 639 + ], + "type": "text", + "content": " maintains relatively constant standard deviation across layers. This pattern suggests an intriguing model behavior: shallow layers rely primarily on residual connections, while deeper layers progressively emphasize transformer layer outputs as the scaling factor " + }, + { + "bbox": [ + 86, + 560, + 522, + 639 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 86, + 560, + 522, + 639 + ], + "type": "text", + "content": " grows in magnitude." 
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 86, + 654, + 165, + 665 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 654, + 165, + 665 + ], + "spans": [ + { + "bbox": [ + 86, + 654, + 165, + 665 + ], + "type": "text", + "content": "6 Conclusion" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 86, + 677, + 523, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 677, + 523, + 723 + ], + "spans": [ + { + "bbox": [ + 86, + 677, + 523, + 723 + ], + "type": "text", + "content": "We present Pangu Ultra, a dense language foundation model with 135 billion parameters trained on Ascend NPUs. To address challenges in training large-scale deep models, we propose depth-scaled sandwich-norm, enabling Pangu Ultra to achieve remarkable training stability without significant loss spikes. After being pre-trained on 13.2 trillion tokens and long context extension on 8,192 Ascend NPUs, our model further" + } + ] + } + ], + "index": 19 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 86, + 68, + 200, + 175 + ], + "blocks": [ + { + "bbox": [ + 86, + 68, + 200, + 175 + ], + "lines": [ + { + "bbox": [ + 86, + 68, + 200, + 175 + ], + "spans": [ + { + "bbox": [ + 86, + 68, + 200, + 175 + ], + "type": "image", + "image_path": "13abe013a5a8aed380f4b8c00ef395e0970882df83898c64d6c7954aba6b2a0a.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 201, + 68, + 309, + 175 + ], + "blocks": [ + { + "bbox": [ + 201, + 68, + 309, + 175 + ], + "lines": [ + { + "bbox": [ + 
201, + 68, + 309, + 175 + ], + "spans": [ + { + "bbox": [ + 201, + 68, + 309, + 175 + ], + "type": "image", + "image_path": "b1ab1fd9180116b7ea72fe004b14fcb2125623cbd6b91a13aba18aee54ad7139.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 309, + 68, + 415, + 175 + ], + "blocks": [ + { + "bbox": [ + 309, + 68, + 415, + 175 + ], + "lines": [ + { + "bbox": [ + 309, + 68, + 415, + 175 + ], + "spans": [ + { + "bbox": [ + 309, + 68, + 415, + 175 + ], + "type": "image", + "image_path": "36d5b1a0863e300a108ae4c864d93d9defdec43b21ecd3aa16ef83845a45f05a.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 415, + 68, + 524, + 175 + ], + "blocks": [ + { + "bbox": [ + 415, + 68, + 524, + 175 + ], + "lines": [ + { + "bbox": [ + 415, + 68, + 524, + 175 + ], + "spans": [ + { + "bbox": [ + 415, + 68, + 524, + 175 + ], + "type": "image", + "image_path": "03d2d47939ba7873c709b29c65584a4595694ece196a9c0981e2369b26fccc7c.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 88, + 176, + 200, + 281 + ], + "blocks": [ + { + "bbox": [ + 88, + 176, + 200, + 281 + ], + "lines": [ + { + "bbox": [ + 88, + 176, + 200, + 281 + ], + "spans": [ + { + "bbox": [ + 88, + 176, + 200, + 281 + ], + "type": "image", + "image_path": "54f645ba18cba7ab9dfa94079f9f6f5e8ce6ba6d7fa82fe956387c12e75a53e1.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 97, + 284, + 200, + 294 + ], + "lines": [ + { + "bbox": [ + 97, + 284, + 200, + 294 + ], + "spans": [ + { + "bbox": [ + 97, + 284, + 200, + 294 + ], + "type": "text", + "content": "(a) Post-norm after attention" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 201, + 176, + 309, + 
281 + ], + "blocks": [ + { + "bbox": [ + 201, + 176, + 309, + 281 + ], + "lines": [ + { + "bbox": [ + 201, + 176, + 309, + 281 + ], + "spans": [ + { + "bbox": [ + 201, + 176, + 309, + 281 + ], + "type": "image", + "image_path": "fb0af1b40e94a5a4354f81884cac4230ddc3ea2a9f6da59e0106e6e238c74570.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 219, + 284, + 300, + 294 + ], + "lines": [ + { + "bbox": [ + 219, + 284, + 300, + 294 + ], + "spans": [ + { + "bbox": [ + 219, + 284, + 300, + 294 + ], + "type": "text", + "content": "(b) Post-norm after FFN" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 309, + 175, + 416, + 281 + ], + "blocks": [ + { + "bbox": [ + 309, + 175, + 416, + 281 + ], + "lines": [ + { + "bbox": [ + 309, + 175, + 416, + 281 + ], + "spans": [ + { + "bbox": [ + 309, + 175, + 416, + 281 + ], + "type": "image", + "image_path": "cddcf8612c33bcdf7d3401bcd56bac43ffbfc1e5fc34084f14f871f4382925f2.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 86, + 300, + 523, + 346 + ], + "lines": [ + { + "bbox": [ + 86, + 300, + 523, + 346 + ], + "spans": [ + { + "bbox": [ + 86, + 300, + 523, + 346 + ], + "type": "text", + "content": "Figure 10: Distribution of sandwich-norm's " + }, + { + "bbox": [ + 86, + 300, + 523, + 346 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 86, + 300, + 523, + 346 + ], + "type": "text", + "content": " parameter. Mean and standard deviation are included. Each line represents different training tokens from 1T, 2T, 4T to 7T. There is a clear layer-wise pattern of the two post-norms: the mean and std value of " + }, + { + "bbox": [ + 86, + 300, + 523, + 346 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 86, + 300, + 523, + 346 + ], + "type": "text", + "content": " increase with depth. 
Larger post-norm " + }, + { + "bbox": [ + 86, + 300, + 523, + 346 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 86, + 300, + 523, + 346 + ], + "type": "text", + "content": " indicates deeper layers emphasize more on transformer outputs instead of residual connections." + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 413, + 176, + 524, + 281 + ], + "blocks": [ + { + "bbox": [ + 315, + 284, + 416, + 294 + ], + "lines": [ + { + "bbox": [ + 315, + 284, + 416, + 294 + ], + "spans": [ + { + "bbox": [ + 315, + 284, + 416, + 294 + ], + "type": "text", + "content": "(c) Post-norm before attention" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 413, + 176, + 524, + 281 + ], + "lines": [ + { + "bbox": [ + 413, + 176, + 524, + 281 + ], + "spans": [ + { + "bbox": [ + 413, + 176, + 524, + 281 + ], + "type": "image", + "image_path": "a1a9c9d0433ecbc43e62a5cfd8cbc6b7f7f773f05c274f2ecca5b21c1c92acef.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 432, + 284, + 520, + 294 + ], + "lines": [ + { + "bbox": [ + 432, + 284, + 520, + 294 + ], + "spans": [ + { + "bbox": [ + 432, + 284, + 520, + 294 + ], + "type": "text", + "content": "(d) Post-norm before FFN" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "bbox": [ + 86, + 366, + 522, + 433 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 86, + 366, + 522, + 433 + ], + "spans": [ + { + "bbox": [ + 86, + 366, + 522, + 433 + ], + "type": "text", + "content": "enhances its reasoning capabilities through Supervised Fine-Tuning and Reinforcement Learning. 
Extensive experiments lead to the observation that Pangu Ultra not only surpasses state-of-the-art dense LLMs like Llama 405B and Mistral Large 2 but also delivers competitive performance against larger sparse models such as DeepSeek-R1. These results highlight the efficacy of our architectural and systemic optimizations, paving the way for future advancements in scalable and efficient LLM training. In addition, our experience demonstrates that the Ascend NPUs are capable of training dense models with hundreds of billions of parameters." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 87, + 449, + 146, + 460 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 449, + 146, + 460 + ], + "spans": [ + { + "bbox": [ + 87, + 449, + 146, + 460 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 88, + 468, + 523, + 723 + ], + "type": "list", + "angle": 0, + "index": 26, + "blocks": [ + { + "bbox": [ + 93, + 468, + 348, + 481 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 468, + 348, + 481 + ], + "spans": [ + { + "bbox": [ + 93, + 468, + 348, + 481 + ], + "type": "text", + "content": "[1] Artificial analysis. https://artificialanalysis.ai/." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 93, + 484, + 523, + 506 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 484, + 523, + 506 + ], + "spans": [ + { + "bbox": [ + 93, + 484, + 523, + 506 + ], + "type": "text", + "content": "[2] Ascend mc2. https://gitee.com/qingfenxiaochong/MindSpeed/blob/master/docs/features/mc2.md." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 93, + 510, + 473, + 523 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 510, + 473, + 523 + ], + "spans": [ + { + "bbox": [ + 93, + 510, + 473, + 523 + ], + "type": "text", + "content": "[3] Ascend mc2. https://www.hiascend.com/developer/techArticles/20240613-1." 
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 93, + 525, + 410, + 537 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 525, + 410, + 537 + ], + "spans": [ + { + "bbox": [ + 93, + 525, + 410, + 537 + ], + "type": "text", + "content": "[4] Flash attention. https://github.com/Dao-AILab/flash-attention." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 93, + 540, + 523, + 563 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 540, + 523, + 563 + ], + "spans": [ + { + "bbox": [ + 93, + 540, + 523, + 563 + ], + "type": "text", + "content": "[5] Huawei atlas 800t a2. https://e.huawei.com/cn/products/computing/ascend/ atlas-800t-a2." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 93, + 567, + 523, + 600 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 567, + 523, + 600 + ], + "spans": [ + { + "bbox": [ + 93, + 567, + 523, + 600 + ], + "type": "text", + "content": "[6] Huawei atlas 800t a2 technical specifications. https://support.huawei.com/enterprise/en/doc/EDOC1100349804/2bf2c017/technical-specifications?idPath=23710424|251366513|22892968|252309113|254184887." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 93, + 605, + 367, + 617 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 605, + 367, + 617 + ], + "spans": [ + { + "bbox": [ + 93, + 605, + 367, + 617 + ], + "type": "text", + "content": "[7] Megatron-lm. https://github.com/NVIDIA/Megatron-LM." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 93, + 620, + 342, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 620, + 342, + 632 + ], + "spans": [ + { + "bbox": [ + 93, + 620, + 342, + 632 + ], + "type": "text", + "content": "[8] Mindspeed. https://citee.com/ascend/MindSpeed." 
+ } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 93, + 636, + 523, + 659 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 93, + 636, + 523, + 659 + ], + "spans": [ + { + "bbox": [ + 93, + 636, + 523, + 659 + ], + "type": "text", + "content": "[9] Npu fusion attention. https://www.hiasmend.com/document/detail/zh/Pytorch/60RC1/apiref/apilist/ptaoplist_000139.html." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 88, + 662, + 523, + 696 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 662, + 523, + 696 + ], + "spans": [ + { + "bbox": [ + 88, + 662, + 523, + 696 + ], + "type": "text", + "content": "[10] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie J. Cai, Michael Terry, Quoc V. Le, and Charles Sutton. Program synthesis with large language models. ArXiv, abs/2108.07732, 2021." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 88, + 700, + 523, + 723 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 700, + 523, + 723 + ], + "spans": [ + { + "bbox": [ + 88, + 700, + 523, + 723 + ], + "type": "text", + "content": "[11] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023." 
+ } + ] + } + ], + "index": 25 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 27 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 87, + 72, + 524, + 723 + ], + "type": "list", + "angle": 0, + "index": 8, + "blocks": [ + { + "bbox": [ + 87, + 72, + 523, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 72, + 523, + 95 + ], + "spans": [ + { + "bbox": [ + 87, + 72, + 523, + 95 + ], + "type": "text", + "content": "[12] Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. Piqa: Reasoning about physical commonsense in natural language. In AAAI Conference on Artificial Intelligence, 2019." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 87, + 98, + 524, + 208 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 98, + 524, + 208 + ], + "spans": [ + { + "bbox": [ + 87, + 98, + 524, + 208 + ], + "type": "text", + "content": "[13] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, Jared Kaplan, Harrison Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mo Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, David W. Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William H. Guss, Alex Nichol, Igor Babuschkin, Suchir Balaji, Shantanu Jain, Andrew Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew M. 
Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. ArXiv, abs/2107.03374, 2021." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 87, + 211, + 523, + 343 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 211, + 523, + 343 + ], + "spans": [ + { + "bbox": [ + 87, + 211, + 523, + 343 + ], + "type": "text", + "content": "[14] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways, 2022." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 88, + 346, + 523, + 380 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 346, + 523, + 380 + ], + "spans": [ + { + "bbox": [ + 88, + 346, + 523, + 380 + ], + "type": "text", + "content": "[15] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 
Think you have solved question answering? try arc, the ai2 reasoning challenge. ArXiv, abs/1803.05457, 2018." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 88, + 382, + 523, + 416 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 382, + 523, + 416 + ], + "spans": [ + { + "bbox": [ + 88, + 382, + 523, + 416 + ], + "type": "text", + "content": "[16] Karl Cobbe, Vineet Kosaraju, Mo Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. ArXiv, abs/2110.14168, 2021." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 88, + 418, + 523, + 442 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 418, + 523, + 442 + ], + "spans": [ + { + "bbox": [ + 88, + 418, + 523, + 442 + ], + "type": "text", + "content": "[17] Tri Dao. Flashattention-2: Faster attention with better parallelism and work partitioning. In The Twelfth International Conference on Learning Representations, 2024." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 88, + 445, + 523, + 489 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 445, + 523, + 489 + ], + "spans": [ + { + "bbox": [ + 88, + 445, + 523, + 489 + ], + "type": "text", + "content": "[18] Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 87, + 492, + 523, + 723 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 492, + 523, + 723 + ], + "spans": [ + { + "bbox": [ + 87, + 492, + 523, + 723 + ], + "type": "text", + "content": "[19] DeepSeek-AI, Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Daya Guo, Dejian Yang, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Haowei Zhang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Li, Hui Qu, J. L. Cai, Jian Liang, Jianzhong Guo, Jiaqi Ni, Jiashi Li, Jiawei Wang, Jin Chen, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, Junxiao Song, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Lei Xu, Leyi Xia, Liang Zhao, Litong Wang, Liyue Zhang, Meng Li, Miaojun Wang, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Mingming Li, Ning Tian, Panpan Huang, Peiyi Wang, Peng Zhang, Qiancheng Wang, Qihao Zhu, Qinyu Chen, Qiushi Du, R. J. Chen, R. L. Jin, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, Runxin Xu, Ruoyu Zhang, Ruyi Chen, S. S. Li, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shaoqing Wu, Shengfeng Ye, Shengfeng Ye, Shirong Ma, Shiyu Wang, Shuang Zhou, Shuiping Yu, Shunfeng Zhou, Shuting Pan, T. Wang, Tao Yun, Tian Pei, Tianyu Sun, W. L. Xiao, Wangding Zeng, Wanjia Zhao, Wei An, Wen Liu, Wenfeng Liang, Wenjun Gao, Wenqin Yu, Wentao Zhang, X. Q. Li, Xiangyue Jin, Xianzu Wang, Xiao Bi, Xiaodong Liu, Xiaohan Wang, Xiaojin Shen, Xiaokang Chen, Xiaokang Zhang, Xiaosha Chen, Xiaotao Nie, Xiaowen Sun, Xiaoxiang Wang, Xin Cheng, Xin Liu, Xin Xie, Xingchao Liu, Xingkai Yu, Xinnan Song, Xinxia Shan, Xinyi Zhou, Xinyu Yang, Xinyuan Li, Xuecheng Su, Xuheng Lin, Y. K. Li, Y. Q. Wang, Y. X. Wei, Y. X. 
Zhu, Yang Zhang, Yanhong Xu, Yanhong Xu, Yanping Huang, Yao Li, Yao Zhao, Yaofeng Sun, Yaohui Li, Yaohui Wang, Yi Yu, Yi Zheng, Yichao Zhang, Yifan Shi, Yiliang Xiong, Ying He, Ying Tang, Yishi Piao, Yisong Wang, Yixuan Tan, Yiyang Ma, Yiyuan Liu, Yongqiang Guo, Yu Wu, Yuan Ou, Yuchen Zhu, Yuduan Wang, Yue Gong, Yuheng" + } + ] + } + ], + "index": 7 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "bbox": [ + 88, + 72, + 524, + 723 + ], + "type": "list", + "angle": 0, + "index": 16, + "blocks": [ + { + "bbox": [ + 108, + 72, + 523, + 129 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 72, + 523, + 129 + ], + "spans": [ + { + "bbox": [ + 108, + 72, + 523, + 129 + ], + "type": "text", + "content": "Zou, Yujia He, Yukun Zha, Yunfan Xiong, Yunxian Ma, Yuting Yan, Yuxiang Luo, Yuxiang You, Yuxuan Liu, Yuyang Zhou, Z. F. Wu, Z. Z. Ren, Zehui Ren, Zhangli Sha, Zhe Fu, Zhean Xu, Zhen Huang, Zhen Zhang, Zhenda Xie, Zhengyan Zhang, Zhewen Hao, Zhibin Gou, Zhicheng Ma, Zhigang Yan, Zhihong Shao, Zhipeng Xu, Zhiyu Wu, Zhongyu Zhang, Zhuoshu Li, Zihui Gu, Zijia Zhu, Zijun Liu, Zilin Li, Ziwei Xie, Ziyang Song, Ziyi Gao, and Zizheng Pan. Deepseek-v3 technical report, 2025." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 88, + 132, + 524, + 176 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 132, + 524, + 176 + ], + "spans": [ + { + "bbox": [ + 88, + 132, + 524, + 176 + ], + "type": "text", + "content": "[20] Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, and Jie Tang. Cogview: Mastering text-to-image generation via transformers. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 19822-19835. Curran Associates, Inc., 2021." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 88, + 179, + 523, + 214 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 179, + 523, + 214 + ], + "spans": [ + { + "bbox": [ + 88, + 179, + 523, + 214 + ], + "type": "text", + "content": "[21] Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In North American Chapter of the Association for Computational Linguistics, 2019." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 88, + 217, + 522, + 251 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 217, + 522, + 251 + ], + "spans": [ + { + "bbox": [ + 88, + 217, + 522, + 251 + ], + "type": "text", + "content": "[22] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 88, + 255, + 522, + 289 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 255, + 522, + 289 + ], + "spans": [ + { + "bbox": [ + 88, + 255, + 522, + 289 + ], + "type": "text", + "content": "[23] Huibin Ge, Chenxi Sun, Deyi Xiong, and Qun Liu. Chinese wplc: A chinese dataset for evaluating pretrained language models on word prediction given long-range context. In Conference on Empirical Methods in Natural Language Processing, 2021." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 88, + 293, + 523, + 326 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 293, + 523, + 326 + ], + "spans": [ + { + "bbox": [ + 88, + 293, + 523, + 326 + ], + "type": "text", + "content": "[24] Aryo Pradipta Gema, Joshua Ong Jun Leang, Giwon Hong, Alessio Devoto, Alberto Carlo Maria Mancino, Rohit Saxena, Xuanli He, Yu Zhao, Xiaotang Du, Mohammad Reza Ghasemi Madani, et al. Are we done with mmlu? arXiv preprint arXiv:2406.04127, 2024." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 88, + 330, + 523, + 363 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 330, + 523, + 363 + ], + "spans": [ + { + "bbox": [ + 88, + 330, + 523, + 363 + ], + "type": "text", + "content": "[25] Aaron Grattafori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 88, + 368, + 523, + 390 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 368, + 523, + 390 + ], + "spans": [ + { + "bbox": [ + 88, + 368, + 523, + 390 + ], + "type": "text", + "content": "[26] Alex Gu, Baptiste Rozière, Hugh Leather, Armando Solar-Lezama, Gabriel Synnaeve, and Sida Wang. Cruxeval: A benchmark for code reasoning, understanding and execution. 
ArXiv, abs/2401.03065, 2024." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 88, + 394, + 523, + 417 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 394, + 523, + 417 + ], + "spans": [ + { + "bbox": [ + 88, + 394, + 523, + 417 + ], + "type": "text", + "content": "[27] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Xiaodong Song, and Jacob Steinhardt. Measuring massive multitask language understanding. ArXiv, abs/2009.03300, 2020." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 88, + 420, + 523, + 453 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 420, + 523, + 453 + ], + "spans": [ + { + "bbox": [ + 88, + 420, + 523, + 453 + ], + "type": "text", + "content": "[28] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Xiaodong Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. *ArXiv*, abs/2103.03874, 2021." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 88, + 457, + 523, + 492 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 457, + 523, + 492 + ], + "spans": [ + { + "bbox": [ + 88, + 457, + 523, + 492 + ], + "type": "text", + "content": "[29] Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan First, Dehao Chen, Mia Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V Le, Yonghui Wu, et al. Gpipe: Efficient training of giant neural networks using pipeline parallelism. Advances in neural information processing systems, 32, 2019." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 88, + 495, + 523, + 551 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 495, + 523, + 551 + ], + "spans": [ + { + "bbox": [ + 88, + 495, + 523, + 551 + ], + "type": "text", + "content": "[30] Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan First, Dehao Chen, Mia Xu Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, Yonghui Wu, and Zhifeng Chen. 
Gpipe: Efficient training of giant neural networks using pipeline parallelism. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 103-112, 2019." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 88, + 555, + 523, + 598 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 555, + 523, + 598 + ], + "spans": [ + { + "bbox": [ + 88, + 555, + 523, + 598 + ], + "type": "text", + "content": "[31] Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Fanchao Qi, Yao Fu, Maosong Sun, and Junxian He. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. ArXiv, abs/2305.08322, 2023." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 88, + 602, + 523, + 647 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 602, + 523, + 647 + ], + "spans": [ + { + "bbox": [ + 88, + 602, + 523, + 647 + ], + "type": "text", + "content": "[32] Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Keming Lu, Kai Dang, Yang Fan, Yichang Zhang, An Yang, Rui Men, Fei Huang, Bo Zheng, Yibo Miao, Shanghaoran Quan, Yunlong Feng, Xingzhang Ren, Xuancheng Ren, Jingren Zhou, and Junyang Lin. Qwen2.5-coder technical report, 2024." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 88, + 651, + 523, + 685 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 651, + 523, + 685 + ], + "spans": [ + { + "bbox": [ + 88, + 651, + 523, + 685 + ], + "type": "text", + "content": "[33] Sam Ade Jacobs, Masahiro Tanaka, Chengming Zhang, Minjia Zhang, Shuaiwen Leon Song, Samyam Rajbhandari, and Yuxiong He. Deepspeed ulysses: System optimizations for enabling training of extreme long sequence transformer models, 2023." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 88, + 689, + 523, + 723 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 689, + 523, + 723 + ], + "spans": [ + { + "bbox": [ + 88, + 689, + 523, + 723 + ], + "type": "text", + "content": "[34] Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Armando Solar-Lezama, Koushik Sen, and Ion Stoica. Livecodebench: Holistic and contamination free evaluation of large language models for code. arXiv preprint arXiv:2403.07974, 2024." + } + ] + } + ], + "index": 15 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "bbox": [ + 88, + 72, + 523, + 723 + ], + "type": "list", + "angle": 0, + "index": 18, + "blocks": [ + { + "bbox": [ + 88, + 72, + 523, + 129 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 72, + 523, + 129 + ], + "spans": [ + { + "bbox": [ + 88, + 72, + 523, + 129 + ], + "type": "text", + "content": "[35] Ziheng Jiang, Haibin Lin, Yinmin Zhong, Qi Huang, Yangrui Chen, Zhi Zhang, Yanghua Peng, Xiang Li, Cong Xie, Shibiao Nong, Yulu Jia, Sun He, Hongmin Chen, Zhihao Bai, Qi Hou, Shipeng Yan, Ding Zhou, Yiyao Sheng, Zhuo Jiang, Haohan Xu, Haoran Wei, Zhang Zhang, Pengfei Nie, Leqi Zou, Sida Zhao, Liang Xiang, Zherui Liu, Zhe Li, Xiaoying Jia, Jianxi Ye, Xin Jin, and Xin Liu. Megascale: Scaling large language model training to more than 10,000 gpus, 2024." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 88, + 131, + 523, + 153 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 131, + 523, + 153 + ], + "spans": [ + { + "bbox": [ + 88, + 131, + 523, + 153 + ], + "type": "text", + "content": "[36] Cameron R Jones and Benjamin K Bergen. Large language models pass the Turing test. arXiv preprint arXiv:2503.23674, 2025." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 88, + 157, + 523, + 180 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 157, + 523, + 180 + ], + "spans": [ + { + "bbox": [ + 88, + 157, + 523, + 180 + ], + "type": "text", + "content": "[37] Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. ArXiv, abs/1705.03551, 2017." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 88, + 183, + 523, + 217 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 183, + 523, + 217 + ], + "spans": [ + { + "bbox": [ + 88, + 183, + 523, + 217 + ], + "type": "text", + "content": "[38] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 88, + 220, + 523, + 243 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 220, + 523, + 243 + ], + "spans": [ + { + "bbox": [ + 88, + 220, + 523, + 243 + ], + "type": "text", + "content": "[39] Vijay Korthikanti, Jared Casper, Sangkug Lym, Lawrence McAfee, Michael Andersch, Mohammad Shoeybi, and Bryan Catanzaro. Reducing activation recomputation in large transformer models, 2022." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 88, + 246, + 523, + 291 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 246, + 523, + 291 + ], + "spans": [ + { + "bbox": [ + 88, + 246, + 523, + 291 + ], + "type": "text", + "content": "[40] Taku Kudo and John Richardson. SentencePiece: A simple and language independent subword tokenizer and tokenizer for neural text processing. In Eduardo Blanco and Wei Lu, editors, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium, November 2018. Association for Computational Linguistics." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 88, + 294, + 523, + 350 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 294, + 523, + 350 + ], + "spans": [ + { + "bbox": [ + 88, + 294, + 523, + 350 + ], + "type": "text", + "content": "[41] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur P. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc V. Le, and Slav Petrov. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466, 2019." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 88, + 353, + 523, + 376 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 353, + 523, + 376 + ], + "spans": [ + { + "bbox": [ + 88, + 353, + 523, + 376 + ], + "type": "text", + "content": "[42] Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard H. Hovy. Race: Large-scale reading comprehension dataset from examinations. ArXiv, abs/1704.04683, 2017." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 88, + 378, + 523, + 413 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 378, + 523, + 413 + ], + "spans": [ + { + "bbox": [ + 88, + 378, + 523, + 413 + ], + "type": "text", + "content": "[43] Shen Li, Yanli Zhao, Rohan Varma, Omkar Salpekar, Pieter Noordhuis, Teng Li, Adam Paszke, Jeff Smith, Brian Vaughan, Pritam Damania, and Soumith Chintala. Pytorch distributed: Experiences on accelerating data parallel training. CoRR, abs/2006.15704, 2020." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 88, + 415, + 523, + 460 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 415, + 523, + 460 + ], + "spans": [ + { + "bbox": [ + 88, + 415, + 523, + 460 + ], + "type": "text", + "content": "[44] Shenggui Li, Fuzhao Xue, Chaitanya Baranwal, Yongbin Li, and Yang You. Sequence parallelism: Long sequence training from system perspective. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2391-2404, Toronto, Canada, July 2023. Association for Computational Linguistics." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 88, + 463, + 523, + 486 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 463, + 523, + 486 + ], + "spans": [ + { + "bbox": [ + 88, + 463, + 523, + 486 + ], + "type": "text", + "content": "[45] Tianle Li, Wei-Lin Chiang, Evan Frick, Lisa Dunlap, Banghua Zhu, Joseph E. Gonzalez, and Ion Stoica. From live data to high-quality benchmarks: The arena-hard pipeline, April 2024." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 88, + 489, + 523, + 523 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 489, + 523, + 523 + ], + "spans": [ + { + "bbox": [ + 88, + 489, + 523, + 523 + ], + "type": "text", + "content": "[46] Aixin Liu, Bei Feng, Bin Wang, Bingxuan Wang, Bo Liu, Chenggang Zhao, Chengqi Dengr, Chong Ruan, Damai Dai, Daya Guo, et al. Deepseek-v2: A strong, economical, and efficient mixture-of-experts language model. arXiv preprint arXiv:2405.04434, 2024." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 88, + 526, + 523, + 559 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 526, + 523, + 559 + ], + "spans": [ + { + "bbox": [ + 88, + 526, + 523, + 559 + ], + "type": "text", + "content": "[47] Liyuan Liu, Xiaodong Liu, Jianfeng Gao, Weizhu Chen, and Jiawei Han. Understanding the difficulty of training transformers. In EMNLP (1), pages 5747-5763. Association for Computational Linguistics, 2020." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 88, + 563, + 523, + 586 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 563, + 523, + 586 + ], + "spans": [ + { + "bbox": [ + 88, + 563, + 523, + 586 + ], + "type": "text", + "content": "[48] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 88, + 589, + 523, + 612 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 589, + 523, + 612 + ], + "spans": [ + { + "bbox": [ + 88, + 589, + 523, + 612 + ], + "type": "text", + "content": "[49] MAA. Codeforces. American Invitational Mathematics Examination - AIME 2024, 2024. https://maa.org/math-competitions/american-invitational-mathematics-examination-aime." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 88, + 615, + 523, + 637 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 615, + 523, + 637 + ], + "spans": [ + { + "bbox": [ + 88, + 615, + 523, + 637 + ], + "type": "text", + "content": "[50] William Merrill and Ashish Sabharwal. A little depth goes a long way: The expressive power of log-depth transformers, 2025." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 88, + 641, + 523, + 686 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 641, + 523, + 686 + ], + "spans": [ + { + "bbox": [ + 88, + 641, + 523, + 686 + ], + "type": "text", + "content": "[51] Deepak Narayanan, Aaron Harlap, Amar Phanishayee, Vivek Seshadri, Nikhil R. Devanur, Gregory R. Ganger, Phillip B. Gibbons, and Matei Zaharia. Pipedream: generalized pipeline parallelism for DNN training. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, SOSP 2019, Huntsville, ON, Canada, October 27-30, 2019, pages 1-15. ACM, 2019." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 88, + 689, + 523, + 723 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 689, + 523, + 723 + ], + "spans": [ + { + "bbox": [ + 88, + 689, + 523, + 723 + ], + "type": "text", + "content": "[52] Deepak Narayanan, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Korthikanti, Dmitri Vainbrand, Prethvi Kashinkunti, Julie Bernauer, Bryan Catanzaro, Amar Phanishayee, and Matei Zaharia. Efficient large-scale language model training ongpu clusters using megatron-lm. 
In" + } + ] + } + ], + "index": 17 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "18" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "bbox": [ + 88, + 72, + 524, + 723 + ], + "type": "list", + "angle": 0, + "index": 20, + "blocks": [ + { + "bbox": [ + 108, + 72, + 523, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 72, + 523, + 95 + ], + "spans": [ + { + "bbox": [ + 108, + 72, + 523, + 95 + ], + "type": "text", + "content": "Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC '21, New York, NY, USA, 2021. Association for Computing Machinery." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 88, + 98, + 524, + 121 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 98, + 524, + 121 + ], + "spans": [ + { + "bbox": [ + 88, + 98, + 524, + 121 + ], + "type": "text", + "content": "[53] Toan Q Nguyen and Julian Salazar. Transformers without tears: Improving the normalization of self-attention. arXiv preprint arXiv:1910.05895, 2019." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 88, + 125, + 522, + 158 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 125, + 522, + 158 + ], + "spans": [ + { + "bbox": [ + 88, + 125, + 522, + 158 + ], + "type": "text", + "content": "[54] Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and R. Fernández. The lambada dataset: Word prediction requiring a broad discourse context. ArXiv, abs/1606.06031, 2016." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 88, + 162, + 522, + 185 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 162, + 522, + 185 + ], + "spans": [ + { + "bbox": [ + 88, + 162, + 522, + 185 + ], + "type": "text", + "content": "[55] Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. Yarn: Efficient context window extension of large language models, 2023." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 89, + 188, + 502, + 200 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 188, + 502, + 200 + ], + "spans": [ + { + "bbox": [ + 89, + 188, + 502, + 200 + ], + "type": "text", + "content": "[56] Penghui Qi, Xinyi Wan, Guangxing Huang, and Min Lin. Zero bubble pipeline parallelism, 2023." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 89, + 203, + 522, + 225 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 203, + 522, + 225 + ], + "spans": [ + { + "bbox": [ + 89, + 203, + 522, + 225 + ], + "type": "text", + "content": "[57] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 88, + 229, + 523, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 229, + 523, + 274 + ], + "spans": [ + { + "bbox": [ + 88, + 229, + 523, + 274 + ], + "type": "text", + "content": "[58] Samyam Rajbhandari, Jeff Rasley, Olatunj Ruwase, and Yuxiong He. Zero: memory optimizations toward training trillion parameter models. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2020, Virtual Event / Atlanta, Georgia, USA, November 9-19, 2020, page 20. IEEE/ACM, 2020." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 89, + 277, + 523, + 311 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 277, + 523, + 311 + ], + "spans": [ + { + "bbox": [ + 89, + 277, + 523, + 311 + ], + "type": "text", + "content": "[59] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark. In First Conference on Language Modeling, 2024." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 89, + 314, + 474, + 326 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 314, + 474, + 326 + ], + "spans": [ + { + "bbox": [ + 89, + 314, + 474, + 326 + ], + "type": "text", + "content": "[60] Noam Shazeer. Glu variants improve transformer. arXiv preprint arXiv:2002.05202, 2020." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 89, + 329, + 522, + 363 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 329, + 522, + 363 + ], + "spans": [ + { + "bbox": [ + 89, + 329, + 522, + 363 + ], + "type": "text", + "content": "[61] Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, and Jason Wei. Language models are multilingual chain-of-thought reasoners. ArXiv, abs/2210.03057, 2022." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 89, + 366, + 522, + 400 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 366, + 522, + 400 + ], + "spans": [ + { + "bbox": [ + 89, + 366, + 522, + 400 + ], + "type": "text", + "content": "[62] Yusuxke Shibata, Takuya Kida, Shuichi Fukamachi, Masayuki Takeda, Ayumi Shinohara, Takeshi Shinohara, and Setsuo Arikawa. Byte pair encoding: A text compression scheme that accelerates pattern matching. 1999." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 89, + 403, + 523, + 426 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 403, + 523, + 426 + ], + "spans": [ + { + "bbox": [ + 89, + 403, + 523, + 426 + ], + "type": "text", + "content": "[63] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-lm: Training multi-billion parameter language models using model parallelism, 2020." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 89, + 429, + 522, + 453 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 429, + 522, + 453 + ], + "spans": [ + { + "bbox": [ + 89, + 429, + 522, + 453 + ], + "type": "text", + "content": "[64] Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding, 2023." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 89, + 456, + 523, + 500 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 456, + 523, + 500 + ], + "spans": [ + { + "bbox": [ + 89, + 456, + 523, + 500 + ], + "type": "text", + "content": "[65] Mirac Suzgun, Nathan Scales, Nathanael Scharli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, and Jason Wei. Challenging big-bench tasks and whether chain-of-thought can solve them. In Annual Meeting of the Association for Computational Linguistics, 2022." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 89, + 503, + 522, + 526 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 503, + 522, + 526 + ], + "spans": [ + { + "bbox": [ + 89, + 503, + 522, + 526 + ], + "type": "text", + "content": "[66] Sho Takase, Shun Kiyono, Sosuke Kobayashi, and Jun Suzuki. Spike no more: Stabilizing the pre-training of large language models, 2024." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 89, + 530, + 523, + 563 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 530, + 523, + 563 + ], + "spans": [ + { + "bbox": [ + 89, + 530, + 523, + 563 + ], + "type": "text", + "content": "[67] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Riviere, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on gemini research and technology. arXiv preprint arXiv:2403.08295, 2024." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 89, + 567, + 522, + 600 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 567, + 522, + 600 + ], + "spans": [ + { + "bbox": [ + 89, + 567, + 522, + 600 + ], + "type": "text", + "content": "[68] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 89, + 604, + 523, + 648 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 604, + 523, + 648 + ], + "spans": [ + { + "bbox": [ + 89, + 604, + 523, + 648 + ], + "type": "text", + "content": "[69] Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao. Learning deep transformer models for machine translation. In Anna Korhonen, David Traum, and Lluis Márquez, editors, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1810-1822, Florence, Italy, July 2019. Association for Computational Linguistics." 
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 89, + 651, + 523, + 696 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 651, + 523, + 696 + ], + "spans": [ + { + "bbox": [ + 89, + 651, + 523, + 696 + ], + "type": "text", + "content": "[70] Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, Tianle Li, Max W.F. Ku, Kai Wang, Alex Zhuang, Rongqi \"Richard\" Fan, Xiang Yue, and Wenhu Chen. Mmlu-pro: A more robust and challenging multi-task language understanding benchmark. *ArXiv*, abs/2406.01574, 2024." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 89, + 700, + 522, + 723 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 89, + 700, + 522, + 723 + ], + "spans": [ + { + "bbox": [ + 89, + 700, + 522, + 723 + ], + "type": "text", + "content": "[71] Tianwen Wei, Jian Luan, W. Liu, Shuang Dong, and Bin Quan Wang. Cmath: Can your language model pass chinese elementary school math test? ArXiv, abs/2306.16636, 2023." 
+ } + ] + } + ], + "index": 19 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "19" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 18 + }, + { + "para_blocks": [ + { + "bbox": [ + 88, + 72, + 523, + 236 + ], + "type": "list", + "angle": 0, + "index": 6, + "blocks": [ + { + "bbox": [ + 88, + 72, + 523, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 72, + 523, + 106 + ], + "spans": [ + { + "bbox": [ + 88, + 72, + 523, + 106 + ], + "type": "text", + "content": "[72] An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al. Qwen2. 5-math technical report: Toward mathematical expert model via self-improvement. arXiv preprint arXiv:2409.12122, 2024." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 88, + 108, + 523, + 133 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 108, + 523, + 133 + ], + "spans": [ + { + "bbox": [ + 88, + 108, + 523, + 133 + ], + "type": "text", + "content": "[73] Tian Ye, Zicheng Xu, Yuanzhi Li, and Zeyuan Allen-Zhu. Physics of language models: Part 2.1, grade-school math and the hidden reasoning process, 2024." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 88, + 134, + 523, + 168 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 134, + 523, + 168 + ], + "spans": [ + { + "bbox": [ + 88, + 134, + 523, + 168 + ], + "type": "text", + "content": "[74] Mingjia Yin, Chuhan Wu, Yufei Wang, Hao Wang, Wei Guo, Yasheng Wang, Yong Liu, Ruiming Tang, Defu Lian, and Enhong Chen. Entropy law: The story behind data compression and llm performance. arXiv preprint arXiv:2407.06645, 2024." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 88, + 171, + 523, + 195 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 171, + 523, + 195 + ], + "spans": [ + { + "bbox": [ + 88, + 171, + 523, + 195 + ], + "type": "text", + "content": "[75] Mengxia Yu, De Wang, Qi Shan, Colorado Reed, and Alvin Wan. The super weight in large language models. ArXiv, abs/2411.07191, 2024." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 88, + 197, + 523, + 220 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 197, + 523, + 220 + ], + "spans": [ + { + "bbox": [ + 88, + 197, + 523, + 220 + ], + "type": "text", + "content": "[76] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? In Annual Meeting of the Association for Computational Linguistics, 2019." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 88, + 223, + 420, + 236 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 223, + 420, + 236 + ], + "spans": [ + { + "bbox": [ + 88, + 223, + 420, + 236 + ], + "type": "text", + "content": "[77] Biao Zhang and Rico Sennrich. Root mean square layer normalization, 2019." 
+ } + ] + } + ], + "index": 5 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "text", + "content": "20" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 19 + }, + { + "para_blocks": [ + { + "bbox": [ + 88, + 71, + 301, + 86 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 71, + 301, + 86 + ], + "spans": [ + { + "bbox": [ + 88, + 71, + 301, + 86 + ], + "type": "text", + "content": "A Contributions and Acknowledgments" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 88, + 95, + 523, + 129 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 95, + 523, + 129 + ], + "spans": [ + { + "bbox": [ + 88, + 95, + 523, + 129 + ], + "type": "text", + "content": "Core Contributors Yichun Yin, Wenyong Huang, Kaikai Song, Yehui Tang, Xueyu Wu, Wei Guo, Peng Guo, Yaoyuan Wang, Xiaojun Meng, Yasheng Wang, Dong Li, Can Chen, Dandan Tu, Yin Li, Fisher Yu, Ruiming Tang, Yunhe Wang" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 88, + 133, + 523, + 190 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 133, + 523, + 190 + ], + "spans": [ + { + "bbox": [ + 88, + 133, + 523, + 190 + ], + "type": "text", + "content": "Contributors Baojun Wang, Bin Wang, Bo Wang, Boxiao Liu, Changzheng Zhang, Duyu Tang, Fei Mi, Hui Jin, Jiansheng Wei, Jiarui Qin, Jinpeng Li, Jun Zhao, Liqun Deng, Lin Li, Minghui Xu, Naifu Zhang, Nianzu Zheng, Qiang Li, Rongju Ruan, Shengjun Cheng, Tianyu Guo, Wei He, Wei Li, Weiwen Liu, Wulong Liu, Xinyi Dai, Yonghan Dong, Yu Pan, Yue Li, Yufei Wang, Yujun Li, Yunsheng Ni, Zhe Liu, Zhenhe Zhang, Zhicheng Liu" + } + ] + } + ], + "index": 2 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + 
"type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "type": "text", + "content": "21" + } + ] + } + ], + "index": 3 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 20 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/data/2025/2504_10xxx/2504.10514/3e20df2e-9239-4987-81d7-686c92a800c4_content_list.json b/data/2025/2504_10xxx/2504.10514/3e20df2e-9239-4987-81d7-686c92a800c4_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..2b1260d97892c7d81bfe4728b97ca2e70a36bb5d --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/3e20df2e-9239-4987-81d7-686c92a800c4_content_list.json @@ -0,0 +1,9840 @@ +[ + { + "type": "text", + "text": "COLORBENCH: Can VLMs See and Understand the Colorful World? A Comprehensive Benchmark for Color Perception, Reasoning, and Robustness", + "text_level": 1, + "bbox": [ + 183, + 122, + 816, + 199 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Yijun Liang\\*, Ming Li\\*, Chenrui Fan, Ziyue Li, Dang Nguyen, Kwesi Cobbina Shweta Bhardwaj, Jiuhai Chen, Fuxiao Liu, Tianyi Zhou", + "bbox": [ + 230, + 250, + 764, + 277 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "University of Maryland, College Park", + "bbox": [ + 375, + 279, + 622, + 292 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{yliang17,minglii,tianyi}@umd.edu", + "bbox": [ + 354, + 294, + 643, + 306 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Project: https://github.com/tianyi-lab/ColorBench", + "bbox": [ + 295, + 306, + 700, + 321 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 459, + 357, + 537, + 372 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Color plays an important role in human perception and usually provides critical clues in visual reasoning. 
However, it is unclear whether and how vision-language models (VLMs) can perceive, understand, and leverage color as humans. This paper introduces \"COLORBENCH\", an innovative benchmark meticulously crafted to assess the capabilities of VLMs in color understanding, including color perception, reasoning, and robustness. By curating a suite of diverse test scenarios, with grounding in real applications, COLORBENCH evaluates how these models perceive colors, infer meanings from color-based cues, and maintain consistent performance under varying color transformations. Through an extensive evaluation of 32 VLMs with varying language models and vision encoders, our paper reveals some undiscovered findings: (i) The scaling law (larger models are better) still holds on COLORBENCH, while the language model plays a more important role than the vision encoder. (ii) However, the performance gaps across models are relatively small, indicating that color understanding has been largely neglected by existing VLMs. (iii) CoT reasoning improves color understanding accuracies and robustness, though they are vision-centric tasks. (iv) Color clues are indeed leveraged by VLMs on COLORBENCH but they can also mislead models in some tasks. These findings highlight the critical limitations of current VLMs and underscore the need to enhance color comprehension. 
Our COLORBENCH can serve as a foundational tool for advancing the study of human-level color understanding of multimodal AI.", + "bbox": [ + 228, + 386, + 767, + 662 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 171, + 671, + 313, + 686 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Color is widely recognized as a fundamental component of human visual perception [11, 34], playing a critical role and providing critical clues in object detection, scene interpretation, contextual understanding, planning, etc., across critical application scenarios such as scientific discovery, medical care, remote sensing, shopping, visualization, artwork interpretation, etc. For instance, [19] leverages spectral color signatures to distinguish vegetation, health, and water bodies in satellite imagery, and [1] utilizes sediment color patterns to detect marine ecosystems. These applications underscore how color-driven features play an important role in real-world scenarios. Moreover, colors can convey affective or semantic information beyond simply recognizing and naming colors since colors are highly correlated to other attributes or concepts and thus can provide key information to various downstream tasks that do not even directly ask about colors [18, 37, 45]. As modern vision-language models (VLMs) [12, 41, 48] continue to be deployed to increasingly diverse scenarios, color—an essential visual feature—plays a growing role in the processes of understanding and reasoning. 
It is essential to examine whether and how these models can understand and leverage color information", + "bbox": [ + 169, + 700, + 826, + 881 + ], + "page_idx": 0 + }, + { + "type": "aside_text", + "text": "arXiv:2504.10514v3 [cs.CV] 8 Nov 2025", + "bbox": [ + 22, + 279, + 57, + 715 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*These authors contributed equally to this work.", + "bbox": [ + 189, + 888, + 478, + 902 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "39th Conference on Neural Information Processing Systems (NeurIPS 2025) Track on Datasets and Benchmarks.", + "bbox": [ + 171, + 922, + 826, + 936 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/8279797222a7f9ff129da461aa82b23fd1a408942d36c4408bd9d1f52ac16a78.jpg", + "image_caption": [ + "Figure 1: Test samples from COLORBENCH. COLORBENCH evaluates VLMs across three core capabilities: Perception, Reasoning and Robustness. The benchmark comprises 11 tasks designed to assess fine-grained color understanding abilities and the effect of color on other reasoning skills, including counting, proportion calculation, and robustness estimation. With over 1,400 instances, COLORBENCH covers a wide range of real-world application scenarios, including painting analysis, test kit readings, shopping, satellite/wildlife image analysis, etc." 
+ ], + "image_footnote": [], + "bbox": [ + 173, + 88, + 318, + 426 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/62255370c80cc1ec826a893befaf91071bf2e821de60302188c5691ca72d3a70.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 323, + 88, + 671, + 426 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/afe37da8b79d3de1c08005a13422fd9bd97e612a82e905ce643e337d2059ccb3.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 679, + 88, + 810, + 426 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "as in human perception and reasoning, how color influences their overall perceptual and reasoning capabilities, and whether they can interpret visual illusions, resolve ambiguous cues, and maintain reliable performance under color variations.", + "bbox": [ + 169, + 503, + 823, + 547 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "However, existing benchmarks for VLMs mainly focus on tasks that may not heavily depend on color understanding or require color-centric reasoning, thereby overlooking nuanced color-related factors [25, 29]. Hence, there is a lack of benchmarks that systematically assess how well VLMs understand color when it serves as the main or distinguishing feature of a scene and key information to a task. Moreover, robustness to variations in color, such as recoloring and shifting hues, has also been largely neglected in the LLM era [6, 8, 20]. Consequently, it remains unclear whether VLMs can perceive and reason about color with human-like proficiency and to what extent their performance deteriorates under significant color perturbations. This shortfall underscores the need for a dedicated benchmark that comprehensively probes various facets of color comprehension in VLMs. 
A detailed discussion of related works is provided in Appendix A.", + "bbox": [ + 169, + 551, + 826, + 691 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To bridge this gap, we propose a novel benchmark, COLORBENCH, that aims at comprehensively evaluating VLMs on three core capabilities of color understanding: Color Perception, Color Reasoning, and Color Robustness. Color Perception examines VLMs' fundamental capability to correctly detect and interpret colors from inputs. Color Reasoning refers to the reasoning skills to draw further conclusions based on the understanding of colors from input and prior knowledge, in which colors act as a crucial clue to formulate accurate judgments. Color Robustness assesses how consistently VLMs perform when an image's colors are altered, ensuring they maintain accurate predictions across different color variants of an image. Under these three core dimensions, 11 fine-grained tasks assessing different aspects of color understanding capabilities are formulated as shown in Figure 1, which not only shows test examples in COLORBENCH but also presents potential real-world applications.", + "bbox": [ + 169, + 696, + 826, + 851 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "By focusing on these facets, COLORBENCH offers a granular view of VLMs' capabilities in color understanding, aiming to illuminate both their strengths and shortcomings. We evaluate 32 widely used VLMs in our benchmark, ranging from open-source to proprietary models, from relatively small models (0.5B) to larger models (78B), and obtain some unrevealed observations.", + "bbox": [ + 169, + 854, + 826, + 912 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 493, + 935, + 504, + 946 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Main Contribution. We introduce \"COLORBENCH\", the first dedicated benchmark for assessing the color perception, reasoning, and robustness of VLMs. 
We develop an evaluation suite for 11 color-centric tasks, covering diverse application scenarios and practical challenges. Moreover, we report a fine-grained empirical evaluation of 32 state-of-the-art VLMs, which exposes their limitations in color understanding and offers novel insights for future research. Our key findings are highlighted in the following:", + "bbox": [ + 169, + 90, + 823, + 175 + ], + "page_idx": 2 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. The scaling law still holds for color understanding but is much weaker and mainly depends on the language model parts. The correlation between the performance and the vision encoder's size is not significant due to the limited choices in current VLMs.", + "2. The absolute performances of different VLMs are relatively low, and the gaps between different models (open-source vs. proprietary, small vs. large) are not large, indicating the challenges of COLORBENCH and the negligence of color understanding in existing VLMs.", + "3. Despite the weaknesses of VLMs on color understanding, adding reasoning steps can still improve their performance on COLORBENCH tasks, even for color robustness, which has not been investigated by the community.", + "4. Color clues are indeed leveraged more or less by VLMs in most of the tasks in COLOR-BENCH. However, in color illusion and mimicry tasks, colors might mislead VLMs to give wrong answers, and converting colorful images into grayscale can improve the accuracy." + ], + "bbox": [ + 207, + 186, + 823, + 369 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2 COLORBENCH Construction", + "text_level": 1, + "bbox": [ + 171, + 388, + 447, + 405 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We present COLORBENCH, the first benchmark explicitly designed to comprehensively evaluate the color understanding capabilities of VLMs across three key dimensions: Color Perception, Color Reasoning, and Color Robustness. 
This benchmark consists of 1,448 instances and 5,814 image-text questions spanning 11 diverse tasks. For the Color Perception and Color Reasoning categories, each instance contains an image, a question, and multiple-choice (3 to 6) options, with only one correct answer. For Color Robustness, each instance consists of 10 multiple-choice image-text questions, including a seed image and 9 edited images with color changes. Given that color is a fundamental visual feature influencing most vision-related tasks, disentangling color under", + "bbox": [ + 169, + 420, + 500, + 642 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/a8629b08764230a78d2ec89a49fcfb6ca0d216b62038d6980111f243799ccd7d.jpg", + "image_caption": [ + "Figure 2: Statistics of 3 categories and 11 tasks in COLORBENCH." + ], + "image_footnote": [], + "bbox": [ + 508, + 412, + 823, + 599 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "standing from other general capabilities (e.g., object recognition, counting) is challenging. To address this, we design questions with explicit color constraints for Color Perception and Reasoning dimensions, enabling a focused evaluation of VLMs' perception and reasoning abilities in relation to color.", + "bbox": [ + 169, + 642, + 826, + 684 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.1 Taxonomy", + "text_level": 1, + "bbox": [ + 171, + 700, + 285, + 715 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Motivated by the existing evaluation criteria from prior benchmarks and real-world application scenarios, we categorize the color understanding capability into 3 core dimensions and 11 detailed axes, as shown in Figure 1. 
The detailed question templates and sample cases are shown in Appendix D.", + "bbox": [ + 169, + 726, + 826, + 768 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.1.1 Color Perception", + "text_level": 1, + "bbox": [ + 171, + 782, + 344, + 799 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "This core dimension refers to the fundamental capability to correctly detect and interpret colors from inputs. We assess this capability through 3 key aspects: i) Color Recognition, ii) Color Extraction, and iii) Object Recognition.", + "bbox": [ + 169, + 806, + 825, + 849 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Color Recognition includes questions that either ask for the color of a given object or determine whether a specific color is present in the image. Color Extraction requires the model to extract the value of color code (e.g., RGB, HSV, or HEX) for a given single color image. This task measures the ability to perform fine-grained color retrieval from visual input. Object Recognition evaluates the", + "bbox": [ + 169, + 854, + 823, + 912 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 493, + 935, + 503, + 946 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "model's capability to identify objects that match a specified color described in the text input. These two tasks require VLMs to be able to detect and interpret the color in either the image or text input.", + "bbox": [ + 169, + 90, + 823, + 122 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "2.1.2 Color Reasoning", + "text_level": 1, + "bbox": [ + 171, + 133, + 344, + 148 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "This dimension refers to the reasoning skills to draw further conclusions based on the understanding of colors from input and prior knowledge, in which colors act as a crucial clue to formulate accurate judgments. 
This category encapsulates 7 key aspects: i) Color Proportion, ii) Color Comparison, iii) Color Counting, iv) Object Counting, v) Color Illusion, vii) Color Mimicry and viii) Color Blindness.", + "bbox": [ + 169, + 156, + 826, + 214 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Color Proportion tests the model's capability to estimate the relative area occupied by a specific color. Questions in this task require both color perception and proportion calculation capabilities. Color Comparison requires the model to be able to distinguish among multiple colors in the image, assessing its sensitivity to hue, saturation, and brightness differences in visual input. Color Counting focuses on identifying the number of unique colors in the image, evaluating the model's perception and differentiation of distinct color variations, and counting ability. Object Counting extends this challenge by requiring the model to count objects that match a specific color pattern. This task requires an integration of object recognition and color perception. Color Illusion questions query VLMs to compare colors in potential illusionary environments. This task evaluates the model's ability to account for color-induced optical illusions. Color Mimicry challenges the model to detect objects camouflaged within their surroundings, where color serves as a misleading factor, requiring advanced pattern recognition and contextual reasoning. These two tasks both assess the model's ability to make correct predictions under the misleading of color-related information in visual input. Color Blindness, inspired by Ishihara tests, assesses the model's ability to recognize numbers or text embedded in color patterns, testing its understanding of shape-color relationships. 
These 7 tasks comprehensively assess the model's capacity for logical reasoning, spatial awareness, and adaptive interpretation of color-based visual cues.", + "bbox": [ + 169, + 218, + 826, + 454 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "2.1.3 Color Robustness", + "text_level": 1, + "bbox": [ + 171, + 465, + 349, + 481 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Color Robustness assesses how consistently VLMs perform and whether they can consistently deliver accurate predictions under color variants of a given image. It involves measuring the stability of a VLM's responses when confronted with the same text input and a series of recolored images. To ensure that color does not influence the predictions, we select questions and corresponding answers that are independent of color attributes. Under these conditions, a robust model should produce unchanged predictions regardless of recoloring manipulation. Any variation in the model's responses is then used to quantify its susceptibility to color changes, providing a direct measure of robustness.", + "bbox": [ + 169, + 489, + 486, + 698 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "2.2 Data Curation", + "text_level": 1, + "bbox": [ + 171, + 713, + 316, + 727 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "For most of the tasks in the category of Color Perception and Color Reasoning, we rely on human experts to manually collect images from multiple online benchmarks and websites. For the Color Proportion task, to ensure the correctness of the ground truth, an extra color extrac", + "bbox": [ + 169, + 738, + 486, + 821 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "tion tool is firstly utilized to obtain the color histogram of the image. Questions and options are then manually designed based on these color statistics. 
For tasks including Color Extraction, Color Blindness, and Color Illusion, testing images are generated by corresponding code programs to ensure the controllability of the questions and answers. The detailed data sources are shown in Appendix B.", + "bbox": [ + 169, + 821, + 826, + 878 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "After the initial data is collected, additional filtering processes are conducted in a human-machine interactive process. We first conduct inference on a variety of VLMs and discard low-quality samples", + "bbox": [ + 169, + 883, + 825, + 912 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/fab223acc9a737e5c5aab799bb97a9cdd4f68d9665b063bd7bf99c1fcdcd44bf.jpg", + "image_caption": [ + "Figure 3: Generation Pipeline for Color Robustness. For each seed image, we apply 3 recoloring strategies (Entire Image, Target Segment, Largest Segment) to generate edited images. For each strategy, we change the color of the recoloring region via shifting the Hue values by $90^{\\circ}$ , $180^{\\circ}$ , or $270^{\\circ}$ in HSV color space." + ], + "image_footnote": [], + "bbox": [ + 513, + 477, + 810, + 707 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 491, + 935, + 504, + 946 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "based on the GPT-4o prediction result and human evaluation. For synthesized data, similar processes are conducted, but with additional code (for generation) and image assessment. The above process is conducted in three rounds before the final benchmark instances are settled. This refinement process ensures COLORBENCH a rigorous and informative benchmark for assessing color-related understanding.", + "bbox": [ + 169, + 90, + 823, + 148 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "For Color Robustness, we create evaluation instances by modifying images or specific regions through color changes. 
We define 3 recoloring strategies to determine the recoloring region: i) Entire Image, where the whole image is recolored; ii) Target Segment, where only the segment relevant to the question is altered; and iii) Largest Segment, where the largest region unrelated to the question is modified. Further details can be found in Appendix C. While generating color variants, we derive seed images from CV-Bench [42], a publicly available benchmark. For each seed image, as shown in Figure 3, we first employ a Grounded Segmentation Model (GAM) [38] to extract segments and their corresponding labels. We then apply the predefined recoloring strategies to determine the editing region and perform recoloring by shifting the Hue value in the HSV color space at three levels to cover the entire color wheel: $(90^{\\circ}, 180^{\\circ},$ and $270^{\\circ})$ . This process produces 9 variations per seed image, covering different strategies and degrees of color change to enable a comprehensive robustness assessment. To ensure interpretability, human experts filter out unnatural or negligible modifications, resulting in a final selection of 493 seed images for robustness evaluation.", + "bbox": [ + 169, + 152, + 826, + 333 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "2.3 Evaluation Metrics", + "text_level": 1, + "bbox": [ + 171, + 348, + 346, + 362 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "For Perception and Reasoning, we use accuracy as the evaluation metric, as all tasks follow a multiple-choice format. Accuracy is computed per task and per category, representing the proportion of correctly answered questions.", + "bbox": [ + 169, + 375, + 823, + 417 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "For Robustness, we evaluate a model's ability to maintain consistently accurate predictions under color variations. 
As detailed in Section 2.2, each seed image $I_{s}$ is transformed into $n$ recolored variants using recoloring strategies, while keeping the original question $q$ unchanged. A model $\\mathcal{M}$ is considered robust on a seed image $I_{s}$ and corresponding question $q$ if and only if it provides a correct prediction for $I_{s}$ and remains correct on all $n$ recolored versions. To quantify robustness, we define the instance-level robustness metric $R(I_s,q)\\in \\{0,1\\}$ and a model-level robustness metric $Robust_{\\mathcal{M}}\\in [0,1]$ .", + "bbox": [ + 169, + 422, + 825, + 506 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Instance-level Robustness. Let the recolored images be $I_1, \\dots, I_n$ and let the output of the model for image $I_i$ and question $q$ be $\\mathcal{M}(I_i, q)$. Define $c(\\mathcal{M}(I_i, q))$ as the model correctness: $c(\\mathcal{M}(I_i, q)) = 1$ if the model result $\\mathcal{M}(I_i, q)$ is correct, and 0 otherwise. The instance-level robustness metric $R(I_s, q)$ for a seed image $I_s$ and question $q$ is defined as:", + "bbox": [ + 169, + 510, + 823, + 563 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nR(I_{s}, q) = \left\{ \begin{array}{ll} 1 & \text{if } c(\mathcal{M}(I_{i}, q)) = c(\mathcal{M}(I_{s}, q)) = 1, \; \forall i \in [n] \\ 0 & \text{otherwise} \end{array} \right. \tag{1}\n$$\n", + "text_format": "latex", + "bbox": [ + 308, + 566, + 825, + 604 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Overall Robustness. Let $\\mathcal{S}$ be the set of seed images. 
We define model robustness to be:", + "bbox": [ + 169, + 613, + 759, + 627 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\operatorname{Robust}_{\mathcal{M}} = \frac{\sum_{I_{s} \in \mathcal{S}} R(I_{s}, q)}{|\mathcal{S}|}, \quad \operatorname{Robust}_{\mathcal{M}} \in [0, 1] \tag{2}\n$$\n", + "text_format": "latex", + "bbox": [ + 341, + 631, + 823, + 665 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Robust $_{\\mathcal{M}}$ represents the proportion of seed images on which the model maintains correctness across all color variations. A model is more robust when Robust $_{\\mathcal{M}}$ is higher.", + "bbox": [ + 169, + 669, + 823, + 698 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3 Experimental Results", + "text_level": 1, + "bbox": [ + 171, + 715, + 385, + 733 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.1 Main Results", + "text_level": 1, + "bbox": [ + 171, + 746, + 305, + 760 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Table 1 presents the performances of a wide range of VLMs, along with human evaluation results on our COLORBENCH. Human participants achieve the highest performance on all evaluated tasks, surpassing all models. Among the models, overall accuracy generally increases with model size, with larger models tending to outperform smaller ones, and the two proprietary models, GPT-4o and Gemini-2-flash, perform the best $^2$ .", + "bbox": [ + 169, + 772, + 823, + 842 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Color Perception. In Color Recognition (C'Recog), most models perform well (above $60\\%$ ), indicating that this task is relatively basic for color perception. 
Gemini-2 with CoT obtains the", + "bbox": [ + 169, + 847, + 826, + 876 + ], + "page_idx": 4 + }, + { + "type": "page_footnote", + "text": "To examine the upper limits of VLM capabilities and benchmark against human-level performance, we also assess the performance of GPT-o3 on perception and reasoning tasks. The result is shown in Appendix H.", + "bbox": [ + 169, + 883, + 823, + 912 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 493, + 935, + 503, + 946 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/d9a74d2d06d6bc02d62e50fbf3d1af7d17dac77d6d94345ca9038a8beb3a14fc.jpg", + "table_caption": [ + "Table 1: Performance of 32 VLMs (grouped by size) and human performance on COLORBENCH. Models are ranked within each group according to their overall performance on Color Perception and Reasoning (P & R Overall) tasks. For human evaluation, the Color Extraction task is excluded, as humans are not attuned to precise color code differences. The best performance in each VLM group is highlighted in bold. For human evaluation, any instance surpassing all VLMs is marked in bold." + ], + "table_footnote": [], + "table_body": "
Color Perception | Color Reasoning | P & R | Robustness
Model | C'Recog | C'Extract | O'Recog | C'Prop | C'Comp | C'Count | O'Count | C'Illu | C'Mimic | C'Blind | Overall | C'Robust
VLMs: < 7B
LLaVA-OV-0.5B | 26.3 | 44.8 | 46.8 | 30.0 | 23.8 | 22.6 | 21.4 | 38.7 | 58.6 | 26.8 | 32.6 | 38.7
InternVL2-1B | 35.5 | 34.4 | 59.7 | 23.8 | 41.6 | 19.6 | 22.3 | 34.4 | 38.6 | 33.1 | 33.6 | 39.4
InternVL2-2B | 60.5 | 36.5 | 66.2 | 40.0 | 38.6 | 19.6 | 29.1 | 26.9 | 52.9 | 21.0 | 36.4 | 54.2
InternVL2.5-1B | 55.3 | 36.5 | 61.0 | 42.5 | 45.5 | 22.6 | 25.2 | 43.0 | 41.4 | 28.0 | 38.3 | 52.3
InternVL2.5-2B | 69.7 | 28.1 | 71.4 | 33.8 | 48.5 | 25.5 | 30.1 | 32.3 | 55.7 | 19.8 | 38.5 | 59.8
Qwen2.5-VL-3B | 72.4 | 38.5 | 74.0 | 43.8 | 48.5 | 22.6 | 25.2 | 43.0 | 45.7 | 24.2 | 41.1 | 63.7
Cambrian-3B | 67.1 | 31.3 | 66.2 | 47.5 | 50.5 | 25.5 | 29.1 | 44.1 | 61.4 | 22.3 | 41.5 | 59.0
VLMs: 7B - 8B
LLaVA-Next-v-7B | 29.0 | 38.5 | 57.1 | 21.3 | 34.7 | 23.5 | 25.2 | 38.7 | 41.4 | 17.8 | 31.2 | 52.1
LLaVA-Next-m-7B | 21.1 | 18.8 | 63.6 | 27.5 | 42.6 | 16.7 | 34.0 | 41.9 | 47.1 | 29.9 | 33.4 | 55.2
Eagle-X5-7B | 52.6 | 47.9 | 67.5 | 41.3 | 42.6 | 20.6 | 35.0 | 44.1 | 48.6 | 22.9 | 40.0 | 48.5
Cambrian-8B | 72.4 | 28.1 | 72.7 | 48.8 | 54.5 | 31.4 | 33.0 | 41.9 | 57.1 | 17.2 | 42.3 | 64.9
InternVL2-8B | 72.4 | 50.0 | 77.9 | 42.5 | 48.5 | 20.6 | 35.9 | 38.7 | 50.0 | 23.6 | 43.1 | 65.5
Eagle-X4-8B | 71.1 | 47.9 | 68.8 | 45.0 | 50.5 | 26.5 | 37.9 | 40.9 | 48.6 | 27.4 | 44.1 | 63.7
LLAVA-OV-7B | 71.1 | 53.1 | 81.8 | 52.5 | 53.5 | 19.6 | 26.2 | 48.4 | 48.6 | 23.6 | 44.7 | 74.0
InternVL2.5-8B | 77.6 | 47.9 | 83.1 | 50.0 | 62.4 | 25.5 | 33.0 | 34.4 | 52.9 | 19.8 | 45.2 | 69.8
Qwen2.5-VL-7B | 76.3 | 49.0 | 84.4 | 47.5 | 52.5 | 19.6 | 34.0 | 44.1 | 55.7 | 28.7 | 46.2 | 74.4
VLMs: 10B - 30B
LLaVA-Next-13B | 56.6 | 31.3 | 71.4 | 27.5 | 41.6 | 27.5 | 28.2 | 29.0 | 45.7 | 25.5 | 36.4 | 53.3
Cambrian-13B | 67.1 | 34.4 | 74.0 | 46.3 | 47.5 | 32.4 | 35.0 | 38.7 | 55.7 | 24.8 | 42.8 | 64.7
Eagle-X4-13B | 73.7 | 43.8 | 76.6 | 43.8 | 47.5 | 23.5 | 38.8 | 34.4 | 57.1 | 26.1 | 43.7 | 66.3
InternVL2-26B | 72.4 | 52.1 | 87.0 | 52.5 | 56.4 | 20.6 | 35.0 | 34.4 | 55.7 | 27.4 | 46.3 | 74.0
InternVL2.5-26B | 72.4 | 45.8 | 89.6 | 45.0 | 63.4 | 22.6 | 35.0 | 32.3 | 62.9 | 29.3 | 46.8 | 83.0
VLMs: 30B - 70B
Eagle-X5-34B | 79.0 | 27.1 | 80.5 | 48.8 | 48.5 | 23.5 | 35.9 | 37.6 | 60.0 | 25.5 | 43.4 | 67.1
Cambrian-34b | 75.0 | 57.3 | 77.9 | 50.0 | 46.5 | 22.6 | 32.0 | 37.6 | 64.3 | 24.2 | 45.3 | 67.7
InternVL2-40B | 72.4 | 52.1 | 83.1 | 51.3 | 61.4 | 19.6 | 35.9 | 34.4 | 58.6 | 21.0 | 45.6 | 78.7
LLAVA-Next-34b | 69.7 | 46.9 | 76.6 | 43.8 | 56.4 | 28.4 | 41.8 | 36.6 | 61.4 | 29.9 | 46.6 | 65.9
InternVL2.5-38B | 71.1 | 60.4 | 89.6 | 53.8 | 63.4 | 29.4 | 40.8 | 34.4 | 61.4 | 26.8 | 50.0 | 84.6
VLMs: > 70B
InternVL2-76B | 72.4 | 42.7 | 85.7 | 45.0 | 62.4 | 27.5 | 35.0 | 31.2 | 50.0 | 23.6 | 44.6 | 68.6
LLAVA-Next-72B | 72.4 | 54.2 | 79.2 | 41.3 | 49.5 | 24.5 | 35.9 | 33.3 | 48.6 | 34.4 | 45.2 | 66.5
InternVL2.5-78B | 75.0 | 58.3 | 81.8 | 43.8 | 68.3 | 27.5 | 36.9 | 34.4 | 61.4 | 28.7 | 48.8 | 86.2
LLAVA-OV-72B | 73.7 | 63.5 | 83.1 | 52.5 | 69.3 | 27.5 | 50.5 | 36.6 | 55.7 | 31.9 | 51.9 | 80.3
VLMs: Proprietary
GPT-4o | 76.3 | 40.6 | 80.5 | 38.3 | 66.3 | 30.4 | 29.1 | 50.5 | 70.0 | 58.6 | 52.9 | 46.2
Gemini-2-flash | 80.3 | 52.1 | 87.0 | 46.9 | 70.3 | 33.3 | 34.9 | 44.1 | 72.9 | 49.6 | 55.4 | 70.7
GPT-4o (CoT) | 77.6 | 55.2 | 83.1 | 44.4 | 71.3 | 26.5 | 33.0 | 44.1 | 77.1 | 66.8 | 57.4 | 69.9
Gemini-2-flash (CoT) | 82.9 | 56.2 | 88.3 | 58.0 | 68.3 | 43.1 | 38.8 | 40.9 | 75.7 | 60.0 | 59.6 | 73.6
Human Evaluation
Human Evaluation | 92.0 | - | 90.1 | 59.6 | 79.8 | 62.0 | 81.3 | 63.0 | 83.8 | 94.0 | - | -
", + "bbox": [ + 173, + 167, + 823, + 579 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "highest performance. In Color Extraction (C'Extra), to our surprise, the two powerful proprietary models without CoT prompting only reach the middle-tier performances, indicating the potential limitation on the color perception of their vision encoders. Similar to the Color Existence task, almost all the models perform well in Object Recognition (O'Recog), and the 2 proprietary models do not reach the top. This is probably due to the strong alignment between this task and the common training recipe, which includes abundant general object detection images.", + "bbox": [ + 169, + 587, + 823, + 672 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Color Reasoning. In Color Proportion (C'Prop), even the best model, Gemini-2 with CoT, can only reach $58.0\\%$ of the accuracy, which is almost only slightly better than random guessing, showcasing the supreme difficulty of this task. In Color Comparison (C'Comp), larger models perform better in this task, and the proprietary models with CoT reach the top performance unsurprisingly. Surprisingly, in Color Counting (C'Count), all models show extremely poor performances. The highest performance comes from Gemini-2 with CoT, exceeding the second place by 10 percent, although its performance is also unsatisfactory at only $43.1\\%$ . In Object Counting (O'Count), surpassing the 2 proprietary models, LLaVA-OV-72B reaches the top and becomes the only model that exceeds $50\\%$ of the accuracy. Similar to the findings from the Object Recognition task, this might be caused by the extremely adequate object detection tasks in open-sourced training recipes. In Color Illusion (C'Ilu), the accuracies of most models lie in the range of $30\\%$ to $50\\%$ , and GPT-4o without CoT is the only one that exceeds $50\\%$ of the accuracy. 
In Color Mimicry (C'Mimic), the 2 proprietary models reach the top, while more reasoning steps do not benefit a lot. In Color Blindness (C'Blind), most of the open-sourced models present accuracies under $30\\%$ . Considering the extremely practical usage of this scenario, we think the current community should pay more attention to this. Moreover, we also observe that, surprisingly, more reasoning steps benefit VLMs in the color blindness test, although it seems like a pure color perception task.", + "bbox": [ + 169, + 676, + 826, + 912 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 493, + 936, + 504, + 946 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/dd58b55e29f30c324245c853130868c2b7d326483e1f84d0e7d4d40a90702f97.jpg", + "table_caption": [ + "Table 2: Spearman's rank correlation between VLM performance and different model parts' sizes on each task. L denotes the language model part's size and V represents the vision encoder part's size. We use “(*)” to mark correlations with p-values $\\leq 0.05$ . It shows that the scaling law still holds for color understanding but it is much weaker." + ], + "table_footnote": [], + "table_body": "
Color Perception | Color Reasoning | P & R | Color Robustness
Part | C'Recog | C'Extract | O'Recog | C'Prop | C'Comp | C'Count | O'Count | C'Illu | C'Mimic | C'Blind | Overall | C'Robust
L+V | 0.5657 (*) | 0.5255 (*) | 0.7107 (*) | 0.5125 (*) | 0.6358 (*) | 0.4316 (*) | 0.7566 (*) | -0.3460 | 0.4832 (*) | 0.2460 | 0.7619 (*) | 0.7386 (*)
L | 0.5724 (*) | 0.4937 (*) | 0.6769 (*) | 0.4696 (*) | 0.6118 (*) | 0.4408 (*) | 0.7611 (*) | -0.3697 (*) | 0.4559 (*) | 0.2824 | 0.7436 (*) | 0.7123 (*)
V | 0.3955 (*) | 0.2856 | 0.5465 (*) | 0.6242 (*) | 0.5295 (*) | 0.2089 | 0.3608 | -0.0127 | 0.6024 (*) | -0.0679 | 0.5271 (*) | 0.5623 (*)
", + "bbox": [ + 174, + 152, + 823, + 209 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Color Robustness. In Color Robustness (C'Robust), a higher value represents better robustness towards color alteration. The only 4 models that exceed $80\\%$ are LLaVA-OV-72B, InternVL2.5-26B, InternVL2.5-38B, and InternVL2.5-78B, which utilize relatively larger vision encoders, InternViT-6B, compared with others (mostly only 300-400M). In the meantime, GPT-4o has a really low robustness $(46.2\\%)$ to colors, indicating its vulnerable sensitivity to color changes, while Gemini-2 shows promising robustness $(70.7\\%)$ towards colors. Moreover, another surprising observation is that even though only the colors are changed and all the original queries are kept, utilizing more reasoning steps can consistently improve robustness for GPT-4o $(+23.7\\%)$ and Gemini-2 $(+2.9\\%)$ .", + "bbox": [ + 169, + 224, + 826, + 338 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "3.2 Further Findings", + "text_level": 1, + "bbox": [ + 171, + 356, + 333, + 372 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Since color-related tasks often involve abstract reasoning, language comprehension, and contextual interpretation, it is essential to assess not just the vision encoder but also part of the language model, which plays a critical role in processing and understanding such tasks. To quantitatively analyze the correlation between VLM performances on color understanding tasks and their sizes, Spearman's rank correlation is calculated between VLM performances and (i) overall model sizes $(\\mathbf{L} + \\mathbf{V})$ , (ii) language model sizes $(\\mathbf{L})$ , and (iii) vision encoder sizes $(\\mathbf{V})$ . The correlation values and p-signs are presented in Table 2; a star is notated when the p-value of the correlation is lower than 0.05. 
It is observed that between the performances and language model", + "bbox": [ + 169, + 449, + 485, + 670 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/9807b184126a48713b499dc098fc184ac4cce4081905a0b8ba74c79974403805.jpg", + "image_caption": [ + "Finding 1. The scaling law still holds for color understanding, but is much weaker and mainly depends on the language model parts. The correlation between the performance and the vision encoder's size is not significant due to the limited choices in current VLMs.", + "Figure 4: The heatmaps related to performances and VLM sizes. Deeper color represents higher performance of P&R Overall Accuracy or Robustness. Each line represents a model family with the sizes growing from small to large. This visualization clearly shows the correlation between performances and model sizes; larger models lead to higher performance." + ], + "image_footnote": [], + "bbox": [ + 496, + 448, + 823, + 545 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "sizes, most of the tasks have a correlation greater than 0.5 and a p-value smaller than 0.05, except for Color Illusion and Color Blindness due to their special characteristics. Since the correlations between overall model sizes $(\\mathbf{L} + \\mathbf{V})$ and both P&R Overall (0.7619) and Robustness (0.7386) are strong, we conclude that color understanding, including Color Perception, Color Reasoning, and Color Robustness, still follows the scaling law of model sizes. Figure 4 presents the correlations between performances and model sizes in each model family. 
This visualization clearly shows the correlation between performances and model sizes; a larger model leads to higher performance within each model family.", + "bbox": [ + 169, + 670, + 826, + 768 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "However, between the performances and vision encoder sizes, most of the tasks either have a correlation lower than 0.5 or a p-value greater than 0.05, which is not sufficient to establish an evident positive correlation. Despite these findings, we avoid concluding that there is no positive correlation between performances and vision encoder sizes. We attribute this to the limited attention the current community has paid to the scaling laws of vision encoders. The vision encoders used in the current mainstream VLMs are constrained to a very small set: (i) most of the VLMs use only one type of vision encoder for the whole family, except for the InternVL2 and InternVL2.5 series; (ii) most of the VLMs use a vision encoder with a size of $300 - 400\\mathrm{M}$ . These challenges make it hard to evaluate the scaling laws of vision encoders. Further visualizations are presented in Appendix L.2.", + "bbox": [ + 169, + 772, + 826, + 912 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 493, + 935, + 503, + 946 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/2fb6cc2e270b95a95b8c9a9c926d3138f9663a31c52a053bf3bcde3d8f8a1c81.jpg", + "table_caption": [ + "Table 4: Adding reasoning steps can improve VLMs' performance on COLORBENCH. The change in accuracy brought by Chain-of-Thought (CoT) prompting on all tasks for GPT-4o and Gemini-2-flash. The last row presents the average improvement across both models." + ], + "table_footnote": [], + "table_body": "
Color Perception | Color Reasoning | P & R | Color Robustness
Model | C'Recog | C'Extract | O'Recog | C'Prop | C'Comp | C'Count | O'Count | C'Illu | C'Mimic | C'Blind | Overall | C'Robust
GPT-4o Δ | +1.3 | +14.6 | +2.6 | +6.1 | +5.0 | -3.9 | +3.9 | -6.4 | +7.1 | +8.2 | +4.5 | +23.7
Gemini-2 Δ | +2.6 | +4.1 | +1.3 | +11.1 | -2.0 | +9.8 | +3.9 | -3.2 | +2.8 | +10.4 | +4.2 | +2.9
Average Δ | +1.95 | +9.35 | +1.95 | +8.60 | +1.50 | +2.95 | +3.9 | -4.80 | +4.95 | +9.30 | +4.35 | +13.30
", + "bbox": [ + 174, + 138, + 823, + 199 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "As shown in Table 3, we separate all the VLMs into several groups based on their sizes and present the best accuracy and the model name within each group. We can see that even the powerful proprietary models, GPT-4o and Gemini-2, can only reach an overall color perception and reasoning (P & R Overall) accuracy of $53.9\\%$ , only $+2.0\\%$ better than the best open-sourced model. Task-level results in Table 1 further reveal that these advanced proprietary models still exhibit substantial performance gaps compared to humans across most tasks. Moreover, the best model from group 1 has the accuracy of $41.5\\%$ (Cambrian-3B), which is only $10.4\\%$ lower than the best open-sourced", + "bbox": [ + 169, + 277, + 485, + 484 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 3: The best model within each group and its performances (on P&R accuracy and Robustness). The absolute performances of different VLMs on COLORBENCH are relatively low, and the performance gaps between models are not large.", + "bbox": [ + 493, + 277, + 823, + 349 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/2022338c089cc9168d1bd7a010104472b5f57dfa0b5f37a9ac9f001bc1edc912.jpg", + "table_caption": [ + "Finding 2. The absolute performances of different VLMs are relatively low and lag behind those of humans. Moreover, the gaps between different models (open-source vs. proprietary, small vs. large) are not large, indicating the challenges of COLORBENCH and the negligence of color understanding in existing VLMs." + ], + "table_footnote": [], + "table_body": "
Color P & R Overall | Color Robustness
Model Size | Model | Best | Model | Best
<7B | Cambrian-3B | 41.5 | Qwen2.5-VL-3B | 63.7
7B-8B | Qwen2.5-VL-7B | 46.2 | Qwen2.5-VL-7B | 74.4
10B-30B | InternVL2.5-26B | 46.8 | InternVL2.5-26B | 83.0
30B-50B | InternVL2.5-38B | 50.0 | InternVL2.5-38B | 84.6
>70B | LLava-OV-72B | 51.9 | InternVL2.5-78B | 86.2
Proprietary | Gemini-2 | 55.4 | Gemini-2 | 70.7
Proprietary | Gemini-2 (CoT) | 59.6 | Gemini-2 (CoT) | 73.6
", + "bbox": [ + 496, + 354, + 823, + 465 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "model. As for the robustness, the powerful proprietary models even show weaker robustness than the 7B model. Considering the lack of existing benchmarks specifically evaluating VLMs' color understanding capabilities, we conclude that this area is long-neglected by the community, and the open-sourced community is still on the same page with the proprietary model providers.", + "bbox": [ + 169, + 484, + 823, + 541 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Finding 3. Despite the weaknesses of VLMs on color understanding, adding reasoning steps can still improve their performance on COLORBENCH tasks, even for color robustness, which has not been investigated by the community.", + "bbox": [ + 179, + 555, + 818, + 598 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "The impact of using CoT prompting is shown in Table 4, in which we can see CoT improves the average P&R Overall accuracy across both models by $+4.35\\%$ , indicating that reasoning benefits these color-related tasks. Within the category of Color Perception, the improvements from CoT on Color Recognition and Object Recognition are quite limited as these tasks heavily rely on the vision encoder. Figure 59 and 60 in Appendix M illustrate that adding reasoning steps does not take effect since the initial visual perception and color identification are incorrect in the slow thinking process. However, to our surprise, we find that the Color Extraction task benefits extremely from more reasoning steps, although it seems only related to the vision encoder. 
After a thorough investigation, we observe that most of the current VLMs are not capable of directly extracting color values, so they need to use more reasoning steps to reach reasonable answers.", + "bbox": [ + 169, + 606, + 826, + 746 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Within the category of Color Reasoning, CoT benefits most of the tasks. However, in the Color Illusion task, CoT harms the model performance. After a manual investigation, we observe that more reasoning steps might cause VLMs to focus more on the misleading environments rather than directly comparing the assigned colors, as shown in Figure 61. Another observation occurs in the Color Blindness task. Unlike other reasoning-related tasks, humans can read a color blindness test image with a simple glimpse, without any slow thinking. This fascinating misalignment between humans and VLMs motivates further investigation. We find that VLMs recognize these digits in a bottom-up pattern: they must first infer that the dots in the image can form a digit before they actually recognize the dots as digits.", + "bbox": [ + 169, + 752, + 826, + 878 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In addition, the consistent improvement of CoT on Color Robustness is also a previously unreported phenomenon. In our setting, only the colors of the image are altered, and the questions are strictly the", + "bbox": [ + 169, + 883, + 826, + 912 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 493, + 935, + 503, + 946 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "same as the original. Thus, under this circumstance, color is the only variant, which is supposed to be more related to the capability of the vision encoder. 
However, counterintuitively, as shown in our experiments, more reasoning steps make the VLMs more robust to the color changes, which is probably caused by the higher confidence of correct answers after reasoning.", + "bbox": [ + 169, + 90, + 823, + 148 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "In order to examine whether VLMs really leverage color clues to handle tasks in COLORBENCH, experiments are conducted by converting all the original colorful images in the Color Perception and Reasoning categories into gray-scale ones, without changing the questions. Under this circumstance, the accuracies are expected to decrease dramatically as all our questions are related to colors. For quantitative analysis, we calculate the accuracy changing ratio as $(Acc_{ori} - Acc_{gray}) / Acc_{ori}$ for each VLM on each task. This value directly represents how the original accuracy changes with a gray-scale transformation. The positive value represents that the VLM has a higher accuracy on the original colored images, indicating that it needs color clues to solve the task. Higher positive values represent higher significance of the color clues. On the contrary, if the value is negative, it means that the VLM can reach a better accuracy after the gray-scale transformation, indicating that it does not need", + "bbox": [ + 169, + 214, + 486, + 517 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/7e9abdefdba11426ba75da60ea1aa91fa1fb21de3146efef9bebcea1409ccc4f.jpg", + "image_caption": [ + "Finding 4. Color clues are indeed leveraged more or less by VLMs in most of the tasks in COLORBENCH. However, in color illusion and mimicry tasks, colors might mislead VLMs to wrong answers, and converting colorful images to grayscale can improve the accuracy.", + "Figure 5: The percentage of change in accuracy (y-axis) by converting colorful images to grayscale in each COLORBENCH task (x-axis). Each violin plot visualizes the distribution over all VLMs. 
Higher (lower) percentage indicates that VLMs rely more (less) on color clues for the task. Positive (negative) percentage indicates degradation (improvement) on grayscale images. Color clues are indeed more or less leveraged by VLMs in most tasks but they might mislead VLMs (illusion & mimicry)." + ], + "image_footnote": [], + "bbox": [ + 496, + 215, + 823, + 359 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "color clues for the task, and colors might even mislead VLM's judgment. Lower negative values represent the severe harm the color can have on the task.", + "bbox": [ + 169, + 517, + 823, + 546 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "The accuracy changing ratio distributions across all VLMs and tasks are presented in Figure 5 as the violin plot. As shown in the figure, for most of the tasks, the ratios of VLMs are above 0, indicating that VLMs indeed leverage color clues to correctly solve the tasks; removing the color directly harms the original accuracies dramatically. However, when it comes to Color Illusion and Color Mimicry, the majority of the changing ratios are below 0, which means that VLMs can get better accuracies when all the color information is removed. This phenomenon is reasonable as the colors on both of these two tasks are more likely serving as the misleading factors. In the meantime, for the Color Counting and Color Blindness tasks, almost half the accuracies increase and half decrease, indicating that the color clues might not be so significant in this task, thus, some of the models can find other ways to get the answer. 
We also investigate the correlation between accuracy changing ratios and model sizes, but no significant correlation can be concluded.", + "bbox": [ + 169, + 551, + 826, + 704 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "4 Conclusion, Limitation, and Future Works", + "text_level": 1, + "bbox": [ + 169, + 723, + 563, + 739 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "In this paper, we introduce COLORBENCH, the first benchmark designed to comprehensively evaluate the color understanding capabilities of VLMs, including Perception, Reasoning, and Robustness. After evaluating 32 widely used VLMs on our benchmark, we reveal several previously unreported observations. These observations emphasize the need for more sophisticated model architectures that integrate deeper color reasoning capabilities. To ensure high-quality and reliable annotations, COLORBENCH relies on manual data collection, annotation, and assessment across most domains. While this guarantees consistency, it inevitably limits dataset scale, style diversity, and category coverage. As future work, we aim to develop a trustworthy automated data collection pipeline and expand COLORBENCH to larger-scale, more diverse tasks involving complex interplays of color with texture, shape, and spatial relationships. Furthermore, investigating the impact of different visual encoders and language models could further elucidate the pathways through which VLMs process color information.", + "bbox": [ + 169, + 753, + 828, + 907 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 493, + 935, + 504, + 946 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 173, + 89, + 269, + 106 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Basit Alawode, Iyyakutti Iyappan Ganapathi, Sajid Javed, Naoufel Werghi, Mohammed Bennamoun, and Arif Mahmood. 
Aquaticclip: A vision-language foundation model for underwater scene analysis. arXiv preprint arXiv:2502.01785, 2025.", + "[2] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-vl technical report, 2025.", + "[3] Jirayu Burapacheep, Ishan Gaur, Agam Bhatia, and Tristan Thrush. Colorswap: A color and word order dataset for multimodal evaluation. arXiv preprint arXiv:2402.04492, 2024.", + "[4] Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Jiaqi Wang, Yu Qiao, Dahua Lin, et al. Are we on the right way for evaluating large vision-language models? arXiv preprint arXiv:2403.20330, 2024.", + "[5] Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, et al. Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24185–24198, 2024.", + "[6] Kanjar De and Marius Pedersen. Impact of colour on robustness of deep neural networks. In Proceedings of the IEEE/CVF international conference on computer vision, pages 21-30, 2021.", + "[7] Google DeepMind. Gemini 2.0 flash, 2025.", + "[8] Alexey Dosovitskiy and Thomas Brox. Inverting visual representations with convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4829-4837, 2016.", + "[9] Hao Fei, Yuan Yao, Zhuosheng Zhang, Fuxiao Liu, Ao Zhang, and Tat-Seng Chua. From multimodal llm to human-level ai: Modality, instruction, reasoning, efficiency and beyond. 
In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024): Tutorial Summaries, pages 1-8, 2024.", + "[10] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, and Rongrong Ji. Mme: A comprehensive evaluation benchmark for multimodal large language models, 2024.", + "[11] Karl R. Gegenfurtner and Jochem Rieger. Sensory and cognitive contributions of color to the recognition of natural scenes. Current Biology, 10(13):805-808, 2000.", + "[12] Akash Ghosh, Arkadeep Acharya, Sriparna Saha, Vinija Jain, and Aman Chadha. Exploring the frontier of vision-language models: A survey of current methodologies and future directions. arXiv preprint arXiv:2404.07214, 2024.", + "[13] Tianrui Guan, Fuxiao Liu, Xiyang Wu, Ruiqi Xian, Zongxia Li, Xiaoyu Liu, Xijun Wang, Lichang Chen, Furong Huang, Yaser Yacoob, et al. Hallusionbench: an advanced diagnostic suite for entangled language hallucination and visual illusion in large vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14375-14385, 2024.", + "[14] Tanmay Gupta, Ryan Marten, Aniruddha Kembhavi, and Derek Hoiem. Grit: General robust image task benchmark. arXiv preprint arXiv:2204.13653, 2022.", + "[15] Shuai He, Anlong Ming, Li Yaqi, Sun Jinyuan, Zheng ShunTian, and Ma Huadong. Thinking image color aesthetics assessment: Models, datasets and benchmarks. ICCV, 2023.", + "[16] Nam Hyeon-Woo, Moon Ye-Bin, Wonseok Choi, Lee Hyun, and Tae-Hyun Oh. Vlm's eye examination: Instruct and inspect visual competency of vision language models. arXiv preprint arXiv:2409.14759, 2024.", + "[17] Md Farhan Ishmam, Ishmam Tashdeed, Talukder Asir Saadat, Md Hamjajul Ashmafee, Abu Raihan Mostofa Kamal, and Md Azam Hossain. Visual robustness benchmark for visual question answering (vqa). 
arXiv preprint arXiv:2407.03386, 2024.", + "[18] Ali Jahanian, Shaiyan Keshvari, SVN Vishwanathan, and Jan P Allebach. Colors-messengers of concepts: Visual design mining for learning color semantics. ACM Transactions on Computer-Human Interaction (TOCHI), 24(1):1-39, 2017." + ], + "bbox": [ + 173, + 112, + 825, + 911 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[19] Johannes Jakubik, Benedikt Blumenstiel, and Clive Tinashe Marimo. Ms-clip: Multi-spectral vision language learning for earth observation. In American Geophysical Union Fall Meeting, 2024.", + "[20] Jayendra Kantipudi, Shiv Ram Dubey, and Soumendu Chakraborty. Color channel perturbation attacks for fooling convolutional neural networks and a defense against such attacks. IEEE Transactions on Artificial Intelligence, 1(2):181-191, 2020.", + "[21] Tony Lee, Haoqin Tu, Chi Heem Wong, Wenhao Zheng, Yiyang Zhou, Yifan Mai, Josselin Somerville Roberts, Michihiro Yasunaga, Huaxiu Yao, Cihang Xie, et al. Vhelm: A holistic evaluation of vision language models. arXiv preprint arXiv:2410.07112, 2024.", + "[22] Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. Seed-bench: Benchmarking multimodal llms with generative comprehension. arXiv preprint arXiv:2307.16125, 2023.", + "[23] Baiqi Li, Zhiqiu Lin, Wenxuan Peng, Jean de Dieu Nyandwi, Daniel Jiang, Zixian Ma, Simran Khanuja, Ranjay Krishna, Graham Neubig, and Deva Ramanan. Naturalbench: Evaluating vision-language models on natural adversarial samples. arXiv preprint arXiv:2410.14669, 2024.", + "[24] Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, and Chunyuan Li. 
Llava-onevision: Easy visual task transfer, 2024.", + "[25] Jian Li, Weiheng Lu, Hao Fei, Meng Luo, Ming Dai, Min Xia, Yizhang Jin, Zhenye Gan, Ding Qi, Chaoyou Fu, Ying Tai, Wankou Yang, Yabiao Wang, and Chengjie Wang. A survey on benchmarks of multimodal large language models, 2024.", + "[26] Ming Li, Chenguang Wang, Yijun Liang, Xiyao Wang, Yuhang Zhou, Xiyang Wu, Yuqing Zhang, Ruiyi Zhang, and Tianyi Zhou. Caughtcheating: Is your mllm a good cheating detective? exploring the boundary of visual perception and reasoning. arXiv preprint arXiv:2507.00045, 2025.", + "[27] Ming Li, Ruiyi Zhang, Jian Chen, Jiuxiang Gu, Yufan Zhou, Franck Dernoncourt, Wanrong Zhu, Tianyi Zhou, and Tong Sun. Towards visual text grounding of multimodal large language model, 2025.", + "[28] Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355, 2023.", + "[29] Zongxia Li, Xiyang Wu, Hongyang Du, Huy Nghiem, and Guangyao Shi. Benchmark evaluations, applications, and challenges of large vision language models: A survey. arXiv preprint arXiv:2501.02189, 2025.", + "[30] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740-755. Springer, 2014.", + "[31] Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. Llava-next: Improved reasoning, OCR, and world knowledge, 2024.", + "[32] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? In European conference on computer vision, pages 216-233. Springer, 2024.", + "[33] Lingjun Mao, Zineng Tang, and Alane Suhr. 
Evaluating model perception of color illusions in photorealistic scenes. arXiv preprint arXiv:2412.06184, 2024.", + "[34] Daniela Mapelli and Marlene Behrmann. The role of color in object recognition: Evidence from visual agnosia. Neurocase, 3(4):237-247, 1997.", + "[35] OpenAI, Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card, 2024.", + "[36] Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In Proceedings of the IEEE international conference on computer vision, pages 2641–2649, 2015.", + "[37] Ragini Rathore, Zachary Leggon, Laurent Lessard, and Karen B Schloss. Estimating color-concept associations from image statistics. IEEE Transactions on Visualization and Computer Graphics, 26(1):1226-1235, 2019."
arXiv preprint arXiv:2403.15952, 2024.", + "[41] Min Shi, Fuxiao Liu, Shihao Wang, Shijia Liao, Subhashree Radhakrishnan, De-An Huang, Hongxu Yin, Karan Sapra, Yaser Yacoob, Humphrey Shi, et al. Eagle: Exploring the design space for multimodal llms with mixture of encoders. arXiv preprint arXiv:2408.15998, 2024.", + "[42] Shengbang Tong, Ellis Brown, Penghao Wu, Sanghyun Woo, Manoj Middepogu, Sai Charitha Akula, Jihan Yang, Shusheng Yang, Adithya Iyer, Xichen Pan, et al. Cambrian-1: A fully open, vision-centric exploration of multimodal llms. arXiv preprint arXiv:2406.16860, 2024.", + "[43] Fei Wang, Xingyu Fu, James Y Huang, Zekun Li, Qin Liu, Xiaogeng Liu, Mingyu Derek Ma, Nan Xu, Wenxuan Zhou, Kai Zhang, et al. Muirbench: A comprehensive benchmark for robust multi-image understanding. arXiv preprint arXiv:2406.09411, 2024.", + "[44] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837, 2022.", + "[45] Hanna-Sophia Widhoelzl and Ece Takmaz. Decoding emotions in abstract art: Cognitive plausibility of clip in recognizing color-emotion associations. arXiv preprint arXiv:2405.06319, 2024.", + "[46] Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. Mm-vet: Evaluating large multimodal models for integrated capabilities. arXiv preprint arXiv:2308.02490, 2023.", + "[47] Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9556-9567, 2024.", + "[48] Jingyi Zhang, Jiaxing Huang, Sheng Jin, and Shijian Lu. Vision-language models for vision tasks: A survey. 
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024.", + "[49] Jiarui Zhang, Mahyar Khayatkhoei, Prateek Chhikara, and Filip Ilievski. Mllms know where to look: Training-free perception of small visual details with multimodal llms. arXiv preprint arXiv:2502.17422, 2025.", + "[50] Le Zhang, Rabiul Awal, and Aishwarya Agrawal. Contrasting intra-modal and ranking cross-modal hard negatives to enhance visio-linguistic fine-grained understanding. arXiv preprint arXiv:2306.08832, 2023.", + "[51] Tiancheng Zhao, Tianqi Zhang, Mingwei Zhu, Haozhan Shen, Kyusong Lee, Xiaopeng Lu, and Jianwei Yin. Vl-checklist: Evaluating pre-trained vision-language models with objects, attributes and relations. arXiv preprint arXiv:2207.00221, 2022.", + "[52] Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ade20k dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 633-641, 2017.", + "[53] Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Semantic understanding of scenes through the ade20k dataset. International Journal of Computer Vision, 127:302-321, 2019."
+ ], + "bbox": [ + 173, + 90, + 826, + 809 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Table of Contents for Appendix", + "text_level": 1, + "bbox": [ + 171, + 89, + 439, + 108 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "A Related Works 14", + "text_level": 1, + "bbox": [ + 173, + 125, + 825, + 138 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "A.1 VLM Benchmarks 14", + "A.2 Color Evaluation 14" + ], + "bbox": [ + 196, + 143, + 825, + 180 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "B Data Sources 14", + "text_level": 1, + "bbox": [ + 173, + 199, + 825, + 213 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "C Detailed Generation Process for Robustness 15", + "text_level": 1, + "bbox": [ + 173, + 232, + 825, + 246 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "D COLORBENCH Categories and Questions 15", + "text_level": 1, + "bbox": [ + 173, + 266, + 825, + 280 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "E Implementation Details 19", + "text_level": 1, + "bbox": [ + 173, + 299, + 825, + 313 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "F Evaluation Prompts 19", + "text_level": 1, + "bbox": [ + 173, + 332, + 825, + 345 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "G Human Evaluation 19", + "text_level": 1, + "bbox": [ + 173, + 364, + 825, + 378 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "H Reasoning Models with Thinking Process 19", + "text_level": 1, + "bbox": [ + 173, + 398, + 825, + 412 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "I Qualitative Analysis of Failure Cases 20", + "text_level": 1, + "bbox": [ + 173, + 431, + 825, + 446 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "J Effect of Different Modalities 24", + "text_level": 1, + "bbox": [ + 173, + 
465, + 825, + 479 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "K Fine-tuning Experiments on ColorBench 24", + "text_level": 1, + "bbox": [ + 173, + 498, + 825, + 513 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "L More Visualizations 25", + "text_level": 1, + "bbox": [ + 173, + 532, + 825, + 545 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "L.1 VLM Size & Model Performance for Each Task 25", + "L.2 Vision Size & Model Performance for Each Task 27", + "L.3 Performance for Each Model Family on Each Task 28" + ], + "bbox": [ + 197, + 551, + 825, + 608 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "M Samples Cases 30", + "text_level": 1, + "bbox": [ + 173, + 627, + 825, + 642 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "M.1 Effect of CoT 30", + "M.2 Effect of Grayscale 35", + "M.3 Failure with LLM and Vision 36", + "M.4 Easy Cases 37", + "M.5 Difficult Cases 39" + ], + "bbox": [ + 197, + 647, + 825, + 744 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 490, + 935, + 506, + 946 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "A Related Works", + "text_level": 1, + "bbox": [ + 174, + 89, + 334, + 104 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "A.1 VLM Benchmarks", + "text_level": 1, + "bbox": [ + 174, + 121, + 346, + 136 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "With the rapid advancements in Vision-Language Models (VLMs) [9], numerous benchmarks have emerged to systematically evaluate VLM capabilities across diverse dimensions [29]. These benchmarks generally fall into two categories: text-centric and vision-centric evaluations, each designed to assess distinct multimodal competencies. 
Text-centric benchmarks primarily measure commonsense knowledge, reasoning, and complex problem-solving capabilities, exemplified by tasks in MMMU [47] and NaturalBench [23]. Conversely, vision-centric benchmarks focus on visual perception and reasoning (MMBench [32] and MME [10]), and robustness to visual perturbations (Grit [14] and Visual Robustness [17]). Furthermore, several benchmarks have extended their scope to evaluate specialized visual tasks, such as spatial relationship comprehension (SEED-Bench [22] and MM-Vet [46]), chart and map understanding (MMSTAR [4] and MuirBench [43]), visual grounding (Flickr30k [36] and TRIG [27]) and the detection and understanding of visual hallucinations (POPE [28] and HallusionBench [13]). However, despite the extensive scope covered by existing VLM benchmarks, none currently provide an integrated evaluation that simultaneously assesses visual perception, reasoning, and robustness within a unified framework. Moreover, although certain benchmarks [32, 10] have incorporated color-related questions, these have typically addressed basic color perception and recognition, neglecting deeper assessments of reasoning and robustness associated with color understanding.", + "bbox": [ + 174, + 148, + 826, + 383 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "A.2 Color Evaluation", + "text_level": 1, + "bbox": [ + 174, + 402, + 334, + 416 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Color understanding is increasingly recognized as a crucial aspect of Vision-Language Models' ability to perceive and interpret visual content. Limited studies have explored how color information influences model performance on specific tasks. Some studies [51, 50] explore the understanding of color by replacing color-related words in textual inputs to evaluate the models' ability to handle color-specific information. 
More recent research [16, 21] focuses on assessing fine-grained color discrimination by asking models to distinguish subtle color differences in visual inputs. Samin et al. [39] introduced color-related foils to test VLMs' capacity to recognize basic colors such as red, white, and green, particularly in contexts requiring attention to subtle cues. Additionally, Burapacheep et al. [3] developed a benchmark dataset to evaluate and enhance compositional color comprehension in VLMs, emphasizing tasks where understanding minimal color relationships is essential. IllusionVQA [40] assesses model responses to challenging optical illusions, and RCID [33] evaluates model perception of color illusions in photorealistic scenes. While these works have addressed isolated aspects of color understanding, none have provided a holistic assessment framework. In contrast to these previous works, our study establishes the first comprehensive and specialized benchmark for evaluating the color-related abilities of VLMs, offering a quantitative, automated approach to further this area of research.", + "bbox": [ + 174, + 429, + 826, + 635 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "B Data Sources", + "text_level": 1, + "bbox": [ + 174, + 659, + 318, + 674 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "We construct COLORBENCH from multiple sources, including websites, publicly available benchmarks, and generated images. The detailed sources are listed in Table 5.", + "bbox": [ + 174, + 691, + 821, + 719 + ], + "page_idx": 13 + }, + { + "type": "table", + "img_path": "images/2dd1bfc5751632f7ce11efe4e26cf20e287a4f3b05c3a4b28555ebfedf64c283.jpg", + "table_caption": [ + "Table 5: Data sources for each task." + ], + "table_footnote": [], + "table_body": "
CategoryData Source
Color RecognitionWebsite, ICAA17K [15]
Object RecognitionWebsite, ICAA17K [15]
Color ExtractionSynthetic Data
Color ProportionWebsite, Synthetic Data
Color ComparisonWebsite
Color CountingWebsite, Synthetic Data
Object CountingWebsite, ADE20K [52, 53], COCO2017 [30]
Color MimicryWebsite, IllusionVQA [40], RCID [33]
Color BlindnessSynthetic Data
Color RobustnessCV-Bench [42]
", + "bbox": [ + 305, + 760, + 687, + 909 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 491, + 936, + 506, + 946 + ], + "page_idx": 13 + }, + { + "type": "table", + "img_path": "images/02f9a5ca0b385b537a0fcb5b31aec27978d46e340a9943aae0e3b963a4a2fd0c.jpg", + "table_caption": [ + "Table 6: Recoloring strategies." + ], + "table_footnote": [], + "table_body": "
StrategyEditing RegionPurpose
Entire ImageWhole imageAssesses the model's robustness to global color shifts
Target SegmentSegment containing the object referenced in the questionEvaluates the model's sensitivity to task-relevant color changes
Largest SegmentThe largest segment that is irrelevant to the questionTests whether changes in dominant but unrelated regions affect model predictions
", + "bbox": [ + 205, + 111, + 785, + 213 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "C Detailed Generation Process for Robustness", + "text_level": 1, + "bbox": [ + 171, + 222, + 576, + 238 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "For the Color Robustness, we evaluate the consistency of VLMs when faced with instances that differ only in the color of the visual input. To systematically assess this effect, we define 3 recoloring strategies that determine which part of the image is altered: i) Target Segment, ii) Largest Segment, and iii) Entire Image. As mentioned in Table 6, Target Segment strategy recolors only the segment containing the object referenced in the question. This strategy ensures that the modification directly affects the model's perception of task-relevant content. Largest Segment strategy alters the color of the largest segment that is irrelevant to the question, testing whether models are distracted by dominant but unrelated visual changes. In contrast, Entire Image strategy applies a global color shift to evaluate the model's sensitivity to overall color variations. As summarized in Table 6, the first two strategies introduce localized modifications, while the third assesses robustness to broader image-wide color changes. Importantly, only color attributes are altered without modifying object shapes or contextual elements, which preserves the overall realism of the image. By incorporating both task-relevant and irrelevant edits, our benchmark provides a comprehensive evaluation of VLMs' ability to handle color perturbations across different contexts.", + "bbox": [ + 169, + 253, + 826, + 446 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "While generating color variations, we derive seed images from CV-Bench [42], a publicly available benchmark. For each seed image, as shown in Figure 3, we first employ a Grounded Segmentation Model (GAM) [38] to extract segments and their corresponding labels. 
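The hue-only recoloring described in this section can be sketched with Python's standard library. This is a minimal illustration under stated assumptions, not the authors' implementation: `shift_hue` and `recolor_region` are hypothetical helper names, and a real pipeline would operate on whole image arrays rather than pixel lists. The key property it demonstrates is that Hue is rotated on the color wheel while Saturation and Value stay untouched.

```python
import colorsys

# Hue rotations used for the robustness variants (degrees on the color wheel).
HUE_SHIFTS = (90, 180, 270)

def shift_hue(rgb, degrees):
    """Rotate the hue of an (R, G, B) tuple (0-255 ints) by `degrees`,
    leaving Saturation and Value unchanged."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    h = (h + degrees / 360.0) % 1.0  # hue is cyclic in [0, 1)
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

def recolor_region(pixels, mask, degrees):
    """Recolor only the masked pixels (e.g. the target or largest segment);
    pixels outside the editing region are returned unchanged."""
    return [shift_hue(p, degrees) if m else p for p, m in zip(pixels, mask)]
```

For example, a 180° shift maps pure red (255, 0, 0) to pure cyan (0, 255, 255), while a pixel outside the mask is left untouched, mirroring the Target Segment and Largest Segment strategies above.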
We then apply the predefined recoloring strategy to determine the editing region and modify the color of that region. Since Saturation and Value control a color's purity and brightness while Hue alone determines its chromatic identity, we adjust only the Hue channel of the HSV color space. Specifically, we shift the Hue by $90^{\\circ}$, $180^{\\circ}$, and $270^{\\circ}$. These three values ensure that the color manipulations cover significant perceptual differences across the color spectrum. This process produces nine variations per seed image, covering different strategies and degrees of color change to enable a comprehensive robustness assessment. To ensure interpretability, human experts filter out unnatural or negligible modifications, resulting in a final selection of 493 seed images for robustness evaluation. Additionally, we select questions that are color-invariant, meaning answers remain valid regardless of whether the recoloring appears fully natural. This design choice isolates color variation as the sole variable of interest and prevents confounding effects from semantic or contextual changes. Through these steps, we evaluate whether VLMs rely excessively on color information and whether they maintain consistent predictions despite substantial color shifts.", + "bbox": [ + 169, + 453, + 826, + 688 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "D COLORBENCH Categories and Questions", + "text_level": 1, + "bbox": [ + 171, + 708, + 558, + 724 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Table 7 provides a detailed description of each task, alongside representative figures and sample questions that demonstrate the specific capabilities being tested. 
Cases are provided for each task in Figures 6 to 16.", + "bbox": [ + 169, + 738, + 825, + 781 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 14 + }, + { + "type": "table", + "img_path": "images/c37486eaabc8fc97ed4c652a07c5ed8f34be28cbd367fb740ec38f9e3701d520.jpg", + "table_caption": [ + "Table 7: Task and question definition in COLORBENCH." + ], + "table_footnote": [], + "table_body": "
Task#Sample CaseDescriptionSample Questions
PerceptionColor Recognition76Figure 6Ask for the color of a specific object or determine if a particular color is present in the image.What is the color of object in this image? What color does not exist in this image?
Color Extraction96Figure 7Extract the color code value (e.g., RGB, HSV, or HEX) from a single color in the image.What is the HSV value of the given color in the image? What is the RGB value of the given color in the image?
Object Recognition77Figure 8Identify objects in the image that match a specified color noted in the text input.What object has a color of pink in this image?
ReasoningColor Proportion80Figure 9Estimate the relative area occupied by a specified color in the image.What is the dominant color in this image? What is the closest to the proportion of the red color in the image?
Color Comparison101Figure 10Distinguish among multiple colors present in the image to assess overall tones and shades.Which photo is warmer in overall color? Which object has a darker color in the image?
Color Counting102Figure 11Identify the number of unique colors present in the image.How many different colors are in this image?
Object Counting103Figure 12Count the number of objects of a specified color present in the image.How many objects with green color are in this image?
Color Illusion93Figure 13Assess and compare colors in potential illusionary settings within the image.Do two objects have the same color?
Color Mimicry70Figure 14Detect objects that are camouflaged within their surroundings, where color is a key deceptive element.How many animals are in this image?
Color Blindness157Figure 15Recognize numbers or text that are embedded in color patterns, often used in tests for color vision.What is the number in the center of the image?
", + "bbox": [ + 205, + 126, + 787, + 556 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Color Recognition", + "text_level": 1, + "bbox": [ + 210, + 602, + 331, + 616 + ], + "page_idx": 15 + }, + { + "type": "image", + "img_path": "images/847c4f60e625d3da8a95598b72a86020f1499a6eb7fb0561c7faefa861ffbce6.jpg", + "image_caption": [ + "Figure 6: Cases for Color Recognition Task." + ], + "image_footnote": [], + "bbox": [ + 207, + 625, + 316, + 705 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "What is the color of the banana in this", + "bbox": [ + 320, + 630, + 473, + 638 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "image?", + "bbox": [ + 320, + 641, + 351, + 650 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "A: Red", + "bbox": [ + 320, + 654, + 348, + 662 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "C:Yellow", + "bbox": [ + 320, + 665, + 356, + 672 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "E: None of the above", + "bbox": [ + 320, + 676, + 401, + 685 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Ans: E", + "bbox": [ + 320, + 689, + 349, + 696 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "en", + "bbox": [ + 408, + 654, + 416, + 661 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "k", + "bbox": [ + 408, + 665, + 416, + 672 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "", + "bbox": [ + 401, + 676, + 408, + 685 + ], + "page_idx": 15 + }, + { + "type": "image", + "img_path": "images/93a27658ebd2c5c8731b22d0f66a24ef38811798b21d2aed42890da244cb3bbc.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 491, + 640, + 612, + 698 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "What color does not exist in this image?", + "bbox": [ + 616, + 630, + 777, + 638 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "A:Green", + "bbox": [ + 616, + 654, + 651, + 662 + ], + "page_idx": 15 + }, + { + "type": "text", + 
"text": "C:Red", + "bbox": [ + 616, + 665, + 645, + 672 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Ans: C", + "bbox": [ + 616, + 688, + 645, + 696 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "B:White", + "bbox": [ + 674, + 654, + 710, + 662 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "D: Black", + "bbox": [ + 676, + 665, + 709, + 672 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Color Extraction", + "text_level": 1, + "bbox": [ + 210, + 775, + 323, + 787 + ], + "page_idx": 15 + }, + { + "type": "image", + "img_path": "images/a02c7368ef7054fc8fa6a2c0d8c8c929988f22d64fb1347be844baea5b8b688d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 212, + 796, + 310, + 871 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "What is the HSV value of the given color in the image?", + "bbox": [ + 318, + 804, + 482, + 821 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "A: [100, 51, 81]", + "bbox": [ + 318, + 821, + 379, + 829 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "C: [331, 100, 100]", + "bbox": [ + 318, + 829, + 387, + 837 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Ans: D", + "bbox": [ + 318, + 840, + 348, + 848 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "B: [329, 98, 100]", + "bbox": [ + 408, + 823, + 473, + 830 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "D:[329,100,100]", + "bbox": [ + 408, + 830, + 478, + 838 + ], + "page_idx": 15 + }, + { + "type": "image", + "img_path": "images/1b37b28329678a654e39a0697054f7a40e8872fd6c0581a7e3548f4779bda5a8.jpg", + "image_caption": [ + "Figure 7: Cases for Color Extraction Task." 
+ ], + "image_footnote": [], + "bbox": [ + 501, + 796, + 599, + 871 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Q: What is the HSV value of the given color in the image?", + "bbox": [ + 614, + 804, + 767, + 821 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "A: [47, 62, 100]", + "bbox": [ + 616, + 821, + 676, + 829 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "C: [45, 64, 100]", + "bbox": [ + 616, + 829, + 676, + 838 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "B: [107, 16, 22]", + "bbox": [ + 707, + 823, + 764, + 830 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "D: [45, 62, 100]", + "bbox": [ + 707, + 830, + 766, + 838 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Ans: D", + "bbox": [ + 616, + 840, + 645, + 848 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 490, + 935, + 509, + 946 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Object Recognition", + "text_level": 1, + "bbox": [ + 210, + 97, + 336, + 111 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/18c760c4ae1520c81e0481fb54b7507248b59275ff01d03eaf3d1cd7c636663f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 205, + 130, + 313, + 180 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Which state does not have a color of pink in this image?", + "bbox": [ + 315, + 130, + 450, + 148 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A: Montana", + "bbox": [ + 316, + 148, + 361, + 156 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "C: Michigan", + "bbox": [ + 316, + 157, + 362, + 164 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Ans: D", + "bbox": [ + 316, + 165, + 344, + 172 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "D:New York", + "bbox": [ + 374, + 157, + 423, + 164 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "", + "bbox": [ + 374, + 165, + 423, + 172 + ], + 
"page_idx": 16 + }, + { + "type": "image", + "img_path": "images/cfd76bcaade75240c9606f3672221aa8ff31006fc41108e3930797fad4e317d5.jpg", + "image_caption": [ + "Figure 8: Cases for Object Recognition Task." + ], + "image_footnote": [], + "bbox": [ + 482, + 113, + 602, + 191 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Which object has a color of black in this image?", + "bbox": [ + 606, + 130, + 766, + 148 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A: Background B: Banana", + "bbox": [ + 606, + 148, + 707, + 156 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "C:Apple D:Orange", + "bbox": [ + 606, + 157, + 705, + 165 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Ans: C", + "bbox": [ + 606, + 165, + 635, + 172 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Color Proportion", + "text_level": 1, + "bbox": [ + 210, + 233, + 325, + 247 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/0c29ec18819b298f76ebf7a6f58747cce256328df6b98f545ad8b56d5243460e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 205, + 260, + 316, + 327 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Which is the dominant color in", + "bbox": [ + 318, + 272, + 442, + 280 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "this painting?", + "bbox": [ + 318, + 281, + 375, + 287 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "B:Yellow", + "bbox": [ + 379, + 287, + 415, + 295 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "C:Green", + "bbox": [ + 318, + 296, + 354, + 303 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "D:Orange", + "bbox": [ + 379, + 296, + 419, + 303 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Ans: A", + "bbox": [ + 318, + 303, + 349, + 310 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/32f6062225a61b9023255908621e965eb6ba41bfa8bab62987f76152e77b5086.jpg", + "image_caption": [ + "Figure 
9: Cases for Color Proportion Task." + ], + "image_footnote": [], + "bbox": [ + 488, + 244, + 609, + 339 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "What is closest to the proportion of the", + "bbox": [ + 612, + 272, + 769, + 280 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "color red in the image?", + "bbox": [ + 612, + 281, + 707, + 287 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A:10% B:20%", + "bbox": [ + 614, + 289, + 700, + 296 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "C:30% D:40%", + "bbox": [ + 614, + 296, + 702, + 304 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Ans: C", + "bbox": [ + 614, + 304, + 699, + 311 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Color Comparison", + "text_level": 1, + "bbox": [ + 210, + 382, + 334, + 396 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/3895f7a993c176931085bf834b9296b28c562d90587c8c53b8684f4dd554cc97.jpg", + "image_caption": [ + "Figure 10: Cases for Color Comparison Task." 
+ ], + "image_footnote": [], + "bbox": [ + 207, + 404, + 316, + 460 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Which photo is warmer in overall color?", + "bbox": [ + 318, + 407, + 480, + 417 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A: The left one", + "bbox": [ + 318, + 433, + 375, + 441 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "B: The right one", + "bbox": [ + 318, + 444, + 380, + 453 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Ans: B", + "bbox": [ + 318, + 455, + 349, + 463 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/c9df2e9b61580feeede61431af686096da173946a751c8558d27c9ce338b6322.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 488, + 407, + 609, + 463 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Which dog has the darkest color in the", + "bbox": [ + 614, + 409, + 772, + 417 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "image?", + "bbox": [ + 614, + 420, + 647, + 429 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A: No.1", + "bbox": [ + 616, + 431, + 645, + 440 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "B: No.4", + "bbox": [ + 681, + 433, + 710, + 440 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "C.No.5", + "bbox": [ + 616, + 443, + 647, + 450 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "D.No.3", + "bbox": [ + 681, + 443, + 710, + 450 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Ans: A", + "bbox": [ + 616, + 455, + 645, + 463 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Color Counting", + "text_level": 1, + "bbox": [ + 209, + 513, + 315, + 527 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/c59a95f242d2784c8810f7e73553fcf63b0050874959eb29f65bbb4b686ffa7e.jpg", + "image_caption": [ + "Figure 11: Cases for Color Counting Task." 
+ ], + "image_footnote": [], + "bbox": [ + 207, + 531, + 315, + 614 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "How many different colors of flowers are", + "bbox": [ + 316, + 542, + 483, + 551 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "in this image?", + "bbox": [ + 318, + 553, + 377, + 563 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A:1", + "bbox": [ + 318, + 566, + 334, + 574 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "B:2", + "bbox": [ + 385, + 566, + 403, + 574 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "C:3", + "bbox": [ + 318, + 578, + 334, + 585 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "D:4", + "bbox": [ + 385, + 579, + 403, + 585 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Ans: C", + "bbox": [ + 318, + 590, + 348, + 597 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/187728bef0463527b053b025dc76e89d6d940087929b400dc905b95ef1255834.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 486, + 542, + 612, + 599 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "How many colors are there in this flag?", + "bbox": [ + 614, + 542, + 774, + 551 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A:3", + "bbox": [ + 616, + 566, + 633, + 574 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "B:4", + "bbox": [ + 663, + 566, + 679, + 574 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "C:5", + "bbox": [ + 616, + 578, + 633, + 585 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "D:6", + "bbox": [ + 663, + 579, + 681, + 585 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Ans: D", + "bbox": [ + 616, + 589, + 645, + 597 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Object Counting", + "text_level": 1, + "bbox": [ + 209, + 656, + 321, + 670 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": 
"images/2d13679fef5fdb3ddb30ad79d2df8fc4de3919117e6c08e7f0e7a582bebed2b9.jpg", + "image_caption": [ + "Figure 12: Cases for Object Counting Task." + ], + "image_footnote": [], + "bbox": [ + 205, + 686, + 313, + 734 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "How many striped animals can be seen in", + "bbox": [ + 318, + 676, + 486, + 685 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "this image?", + "bbox": [ + 318, + 686, + 367, + 696 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A:12", + "bbox": [ + 318, + 698, + 341, + 705 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "B:11", + "bbox": [ + 385, + 699, + 406, + 705 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "C:13", + "bbox": [ + 318, + 709, + 341, + 715 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "D:0", + "bbox": [ + 385, + 709, + 403, + 715 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "F:10", + "bbox": [ + 318, + 722, + 341, + 729 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Ans:C", + "bbox": [ + 318, + 732, + 348, + 741 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/c83c3ebd129460f15657e81fcfd27c4a3fe2ebdc33784f46981734411391b84c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 488, + 686, + 609, + 739 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "How many green bananas can be seen in", + "bbox": [ + 614, + 676, + 782, + 685 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "this image?", + "bbox": [ + 616, + 686, + 663, + 696 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A:6", + "bbox": [ + 616, + 699, + 635, + 705 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "B:7", + "bbox": [ + 674, + 699, + 692, + 705 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "C. 5", + "bbox": [ + 616, + 709, + 633, + 715 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "D. 
4", + "bbox": [ + 674, + 710, + 692, + 717 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "E. 0", + "bbox": [ + 616, + 722, + 633, + 729 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Ans: A", + "bbox": [ + 616, + 732, + 645, + 741 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Color Illusion", + "text_level": 1, + "bbox": [ + 210, + 791, + 300, + 803 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/b805d5f51d8b61281e89468619a144287ec35d0946a6ec0ba5aa1b7bf5fcc398.jpg", + "image_caption": [ + "Figure 13: Cases for Color Illusion Task." + ], + "image_footnote": [], + "bbox": [ + 207, + 808, + 310, + 872 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Do the blocks labeled a and b have the same color/shade?", + "bbox": [ + 315, + 804, + 465, + 821 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A: No, a is darker.", + "bbox": [ + 316, + 821, + 380, + 828 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "B: Hard to tell without more context", + "bbox": [ + 316, + 829, + 444, + 837 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "C: Yes, one appears darker due to how our", + "bbox": [ + 316, + 838, + 470, + 845 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "eyes perceive shadows", + "bbox": [ + 316, + 847, + 401, + 854 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "D: No, b is darker", + "bbox": [ + 316, + 856, + 380, + 864 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Ans: D", + "bbox": [ + 316, + 864, + 344, + 872 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/096c76644a54fa854232af032350f879fae6e8bc766e21703ba952a24b01f5d3.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 482, + 816, + 598, + 867 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "What colors are the two pills?", + "bbox": [ + 602, + 800, + 718, + 808 + ], + "page_idx": 16 + }, + { + "type": "text", + 
"text": "A:Cannot tell from this image, the colors seem to", + "bbox": [ + 602, + 809, + 781, + 816 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "be shifting?!", + "bbox": [ + 602, + 816, + 648, + 824 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "B: Both are the exact same shade of gray", + "bbox": [ + 602, + 825, + 753, + 834 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "C: The left one is bluish-gray and the right one is", + "bbox": [ + 602, + 835, + 777, + 842 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "reddish-grey", + "bbox": [ + 602, + 843, + 648, + 851 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "D: The left one is reddish-gray and the right one is", + "bbox": [ + 602, + 852, + 782, + 859 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "bluish-grey", + "bbox": [ + 602, + 859, + 643, + 867 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Ans:B", + "bbox": [ + 602, + 869, + 630, + 876 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 490, + 935, + 506, + 946 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/5ac95b3d3706e6a80af07ac90289c6a7a098d2396288ef7980e9ae5f62e68f3f.jpg", + "image_caption": [ + "Color Mimicry" + ], + "image_footnote": [ + "How many seahorses in this image?", + "A:0 B:1", + "C:3 D:5" + ], + "bbox": [ + 205, + 127, + 321, + 186 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Ans: B", + "bbox": [ + 321, + 167, + 352, + 174 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/06fe3b64b39e972bec5dcc62c1e8be491194b2477b95a126454c6e4e1834a0d6.jpg", + "image_caption": [ + "Figure 14: Cases for Color Mimicry Task." 
+ ], + "image_footnote": [ + "How many leaves in this image?", + "A:1 B:2", + "C:3 D:0", + "Ans: D" + ], + "bbox": [ + 500, + 114, + 624, + 186 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/0948b0e292c93b073f48dcbe6e1fab4efa29d2ace58bad4f6c81e00e85b21646.jpg", + "image_caption": [ + "Color Blindness" + ], + "image_footnote": [ + "There are two strings in the image.", + "What are the strings in the center of", + "this image?" + ], + "bbox": [ + 214, + 261, + 310, + 335 + ], + "page_idx": 17 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "A:kt B:la", + "C:lo D:It" + ], + "bbox": [ + 321, + 297, + 413, + 319 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Ans: A", + "bbox": [ + 321, + 324, + 354, + 332 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/4a4c31090dca597ec33169be0184de6511587b25241fd11621cd91ac03784810.jpg", + "image_caption": [ + "Figure 15: Cases for Color Blindness Task." + ], + "image_footnote": [ + "What is the number in the center of", + "this image?" + ], + "bbox": [ + 514, + 260, + 612, + 335 + ], + "page_idx": 17 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "A:6 B:9", + "C:17 D:18", + "Ans: D" + ], + "bbox": [ + 630, + 299, + 725, + 332 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/ca217e4f60851500ab5909e3956d6b23753e3df26cf75fbec365f442e2d1a763.jpg", + "image_caption": [ + "Original Image" + ], + "image_footnote": [ + "Q: How many cars are in the image?" 
+ ], + "bbox": [ + 315, + 407, + 395, + 457 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/6672532a9af0fc12a496098717c189fd3b85762bf6de5bc2bb73d61a49b660e6.jpg", + "image_caption": [ + "Entire Image" + ], + "image_footnote": [], + "bbox": [ + 310, + 479, + 393, + 527 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/f4c76d4b9d7ef0158cfd40e735ea81e99ebd5429c71e7497bd686b591ce393cb.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 312, + 530, + 392, + 579 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/4da5d0436000119e3d94b5df4193a1ff89d878181f005bd58c77c387237eb2a9.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 312, + 580, + 393, + 631 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/44823311c71f2dc3fb81ca2b03664810f631f3ea04ce2b1b322542a480d8034a.jpg", + "image_caption": [ + "Original Image" + ], + "image_footnote": [], + "bbox": [ + 310, + 652, + 401, + 700 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/6c76abd6201d022bf4566da9d604a45a44987b51b8d18dfc5966144dbfbc2686.jpg", + "image_caption": [ + "Entire Image" + ], + "image_footnote": [], + "bbox": [ + 305, + 726, + 400, + 773 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/ffe4ed10afdb9bd97b47bb446b3526534aa50d91ef4e52855cb85f7758e83f19.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 305, + 777, + 400, + 825 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/998092a0d679346874dd97bcc680c4d3eee29ad064902230aae970fd80107fd8.jpg", + "image_caption": [ + "Figure 16: Cases for Color Robustness Task." 
+ ], + "image_footnote": [], + "bbox": [ + 307, + 828, + 401, + 876 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/53f841542a5892cc7195a412eac039828510960339bd49bdfb8d91a9da68ed9a.jpg", + "image_caption": [ + "GT: E", + "Recoloring Strategy", + "Targeted Segment" + ], + "image_footnote": [ + "A:8 B:7 C:6 D:5 E:4" + ], + "bbox": [ + 450, + 479, + 532, + 527 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/88e474c633dff0071ce09a707335e5f72fddbae6f77191e56126aea2aadce529.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 450, + 530, + 532, + 579 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/d6d6ecd0cc66fed78dc928b0f30ad107b93312082826e23b451df48771aa2850.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 450, + 580, + 532, + 630 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/d33e9255a172a81dc60bd43741f083afdcf20d803b50e790a9fca9bb7545019e.jpg", + "image_caption": [ + "GT: C", + "Recoloring Strategy", + "Targeted Segment" + ], + "image_footnote": [], + "bbox": [ + 446, + 726, + 540, + 773 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/0fd38bc8ef51f4bd35dc96cffacc79862640be794b363cf5fca27b37b8d42e63.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 446, + 776, + 540, + 825 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/a1915f5f8b1f4296129bd8d4bbb16cc8865b2463056ce4174fd6187db21bb86d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 446, + 827, + 540, + 876 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/cb74dfc396d5b074ade375605653a193199cb27ee661f5620c34176342e8ddc8.jpg", + "image_caption": [ + "Largest Segment" + ], + "image_footnote": [], + "bbox": [ + 599, + 479, + 681, + 527 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/81eb71371623bfb12b3890fc38ad3bb7fde78ee0837dd277574737492027befd.jpg", + 
"image_caption": [], + "image_footnote": [], + "bbox": [ + 599, + 530, + 681, + 579 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/15026324cb3fa0e19610cc3840fb27b82c33d19f3d328ca0788bac9a4b9fb335.jpg", + "image_caption": [], + "image_footnote": [ + "Q: How many curtains are in the image?", + "A:3 B:2 C:1 D:4 E:0" + ], + "bbox": [ + 599, + 580, + 681, + 631 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/5749a40d161e1b7bb688c3d83a6e0e261337db5f3519c1e8f08faed6ef13e27e.jpg", + "image_caption": [ + "Largest Segment" + ], + "image_footnote": [], + "bbox": [ + 591, + 724, + 687, + 773 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/4a693bcdaf294d154fb77c045afebe8a5b9cbcac48c1bee722828b397c15364b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 591, + 777, + 687, + 825 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/877da56a11e72700c2b772cc735b366254a17d7c0d52424c8c5fae8436785f8c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 591, + 827, + 687, + 876 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "18", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "E Implementation Details", + "text_level": 1, + "bbox": [ + 171, + 89, + 405, + 107 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "To further advance our understanding of VLMs' capabilities in color perception, reasoning, and robustness dimensions, we conduct an extensive evaluation of 32 vision-language models (VLMs) spanning a range of large language model (LLM) sizes and architectures. Our evaluation includes state-of-the-art models such as GPT-4o[35], Gemini-2-flash[7], LLaVA-OV[24], LLaVA-NEXT [31], Cambrian[42], InternVL2[5], InternVL2.5[5], Qwen2.5-VL[2], and Eagle[41]. GPT-4o and Gemini-2-flash are used with API calls. 
We further examine reasoning enhancement via chain-of-thought (CoT) prompting [44], applying it to GPT-4o and Gemini-2-Flash to evaluate how intermediate reasoning steps influence color understanding. Additionally, on the perception and reasoning tasks we include the most recent GPT-o3, the most powerful model with a long internal chain-of-thought process. This selection covers a diverse set of architectures, including both proprietary and open-source models, enabling a comprehensive assessment of their reasoning capabilities under different computational constraints.", + "bbox": [ + 169, + 119, + 826, + 286 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "To ensure a fair comparison, we standardize our experimental setup across models. Open-source models with fewer than 70B parameters are evaluated using a single NVIDIA A100 80GB GPU, while larger models require four NVIDIA A100 80GB GPUs to accommodate their increased memory and computational demands.", + "bbox": [ + 169, + 292, + 826, + 349 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "F Evaluation Prompts", + "text_level": 1, + "bbox": [ + 171, + 366, + 377, + 383 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Instruction Prompt You'll be given an image, an instruction and some options. You have to select the correct one. Do not explain your reasoning. Answer with only the letter that corresponds to the correct option. Do not repeat the entire answer.", + "bbox": [ + 179, + 407, + 818, + 450 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "CoT Instruction Prompt You'll be given an image, an instruction and some options. You have to select the correct one. Think step by step before answering. Then conclude with the letter that corresponds to the correct option. Make sure the option letter is in the parentheses like (X). 
Do not include ( or ) in the response except for the answer.", + "bbox": [ + 179, + 468, + 818, + 526 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "G Human Evaluation", + "text_level": 1, + "bbox": [ + 171, + 547, + 372, + 564 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "To assess the degree of alignment between VLMs and human color understanding, we selected a representative subset of COLORBENCH, focusing specifically on color perception and reasoning tasks. The Color Extraction task was excluded from human annotation, as humans are generally not sensitive to fine-grained differences in color codes. Three human participants were recruited, each tasked with completing 50 samples per category. All evaluators responded to the full set of multiple-choice and judgment-oriented questions. We then gathered all responses and conducted statistical analysis on the collected human evaluations.", + "bbox": [ + 169, + 577, + 826, + 676 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "H Reasoning Models with Thinking Process", + "text_level": 1, + "bbox": [ + 171, + 694, + 555, + 710 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "To comprehensively assess the performance of VLMs with a thinking process on COLORBENCH, in addition to the proprietary models evaluated with chain-of-thought (CoT) prompting, we conduct experiments with GPT-o3 on perception and reasoning tasks. GPT-o3 is the most recent powerful proprietary VLM, trained with reinforcement learning to think before answering. We use the API version of GPT-o3 (2025-04-16) for evaluation. 
The result is shown in Table 8, together with results of CoT prompting and human evaluation.", + "bbox": [ + 169, + 724, + 826, + 809 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "The results presented in Table 8 indicate that human evaluators achieve the highest performance across the majority of tasks, except for three specific categories: Object Recognition (O'Recog), Color Proportion (C'Prop), and Color Comparison (C'Comp), where GPT-o3 holds the highest scores. The performance differences between GPT-o3 and human evaluators on O'Recog and C'Comp tasks are relatively minor (less than $3\\%$ ). However, GPT-o3 substantially outperforms both humans and other VLMs on the C'Prop task, with an advantage exceeding $12\\%$ . This significant gap on C'Prop aligns with expectations, as humans generally exhibit lower sensitivity to precise quantitative measures.", + "bbox": [ + 169, + 814, + 826, + 912 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "19", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Meanwhile, GPT-o3 benefits from including the capability to utilize analytical tools for precise image assessments and continuous exhaustive visual search [26] to obtain better proportion estimations.", + "bbox": [ + 169, + 90, + 823, + 119 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "On the remaining tasks, GPT-o3 consistently outperforms GPT-4o (CoT) and Gemini-2-flash (CoT), except for the Color Blindness (C'Blind) task, where GPT-o3 trails GPT-4o (CoT) by $3.7\\%$ . The C'Blind task requires VLMs to accurately identify numbers or strings in an image that is composed of colored dots. This task demands capabilities of precise color recognition combined with a holistic spatial perception. One plausible reason for GPT-o3's inferior performance is its longer and more complex reasoning path, which may lead to overthinking. 
This might cause the model to focus too much on local details or tool choices, at the expense of the global and intuitive perception needed for this task.", + "bbox": [ + 169, + 126, + 826, + 238 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Overall, these findings highlight the relative strengths and weaknesses of current advanced VLMs compared to human evaluators. Importantly, there remains substantial room for improvement in VLM capabilities, as significant performance gaps persist between VLMs and humans, particularly in reasoning-intensive tasks.", + "bbox": [ + 169, + 242, + 826, + 299 + ], + "page_idx": 19 + }, + { + "type": "table", + "img_path": "images/6696a3e56dcd41106cc9520c97ca6ef997d92e3da4928d10da388f6eb66d04e7.jpg", + "table_caption": [ + "Table 8: Performance of proprietary reasoning models with thinking processes on Color Perception and Reasoning Tasks. Models are ranked based on their overall performance on color perception and reasoning (P & R Overall) tasks. The best-performing model within the VLM group is highlighted in bold. For human evaluation, any instance that exceeds the performance of all VLMs is also highlighted in bold." + ], + "table_footnote": [], + "table_body": "
<table><tr><td></td><td colspan=3>Color Perception</td><td colspan=7>Color Reasoning</td><td>P & R</td></tr><tr><td></td><td>C'Recog</td><td>C'Extract</td><td>O'Recog</td><td>C'Prop</td><td>C'Comp</td><td>C'Count</td><td>O'Count</td><td>C'Illu</td><td>C'Mimic</td><td>C'Blind</td><td>Overall</td></tr><tr><td colspan=12>VLMs: Proprietary</td></tr><tr><td>GPT-4o (CoT)</td><td>77.6</td><td>55.2</td><td>83.1</td><td>44.4</td><td>71.3</td><td>26.5</td><td>33.0</td><td>44.1</td><td>77.1</td><td>66.8</td><td>57.4</td></tr><tr><td>Gemini-2-flash (CoT)</td><td>82.9</td><td>56.2</td><td>88.3</td><td>58.0</td><td>68.3</td><td>43.1</td><td>38.8</td><td>40.9</td><td>75.7</td><td>60.0</td><td>59.6</td></tr><tr><td>GPT-o3 (API)</td><td>84.2</td><td>57.2</td><td>92.2</td><td>71.6</td><td>82.2</td><td>46.1</td><td>45.6</td><td>58.1</td><td>80.0</td><td>63.1</td><td>66.4</td></tr><tr><td colspan=12>Human Evaluation</td></tr><tr><td>Human Evaluation</td><td>92.0</td><td>-</td><td>90.1</td><td>59.6</td><td>79.8</td><td>62.0</td><td>81.3</td><td>63.0</td><td>83.8</td><td>94.0</td><td>-</td></tr></table>
", + "bbox": [ + 173, + 388, + 823, + 488 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "I Qualitative Analysis of Failure Cases", + "text_level": 1, + "bbox": [ + 171, + 515, + 511, + 532 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "To gain deeper insights into VLM failures on color-related tasks, we conduct a detailed case analysis using Qwen2.5-VL-3B and 7B models on different tasks. Following the attention visualization methodology of Zhang et al. [49], we focus on instances where the 3B model fails but the 7B model succeeds, allowing a clearer examination of the underlying capability differences. The visualizations of attention maps are shown in Figure 17 to 25.", + "bbox": [ + 169, + 546, + 823, + 617 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "For Color Perception tasks, we analyze the Color Recognition and Object Recognition tasks (excluding Color Extraction, which contains single-color color images). Our preliminary findings show that only a small number of failures arise from incorrect object localization. In most cases, both models correctly attend to the relevant regions but still produce incorrect predictions. This indicates that VLMs cannot accurately interpret color information, rather than deficiencies in visual grounding for these basic perception tasks.", + "bbox": [ + 169, + 622, + 826, + 707 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "For Color Reasoning tasks, tasks such as Color Proportion, Color Comparison, Color Counting, and Color Illusion require integrating visual information across the entire image without a clear focus point. Attention maps show that both 3B and 7B models exhibit similar focus patterns but generate different answers, implying that the divergence mainly originates from the language reasoning component rather than the visual encoder. 
For tasks with explicit perception targets, including Object Counting, Color Mimicry, and Color Blindness, both models attend to the correct regions, yet the 3B model often fails to produce accurate predictions. These results reveal that current VLMs remain weak in color interpretability even when their attention is properly aligned.", + "bbox": [ + 169, + 710, + 826, + 824 + ], + "page_idx": 19 + }, + { + "type": "page_number", + "text": "20", + "bbox": [ + 488, + 935, + 509, + 948 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/971e87a767c2d02708a7cea8a3800adeff0ccc472145183945234fcecbb87169.jpg", + "image_caption": [ + "What is the color of the banana in this" + ], + "image_footnote": [], + "bbox": [ + 346, + 95, + 460, + 184 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "image?", + "bbox": [ + 465, + 114, + 496, + 123 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "A: Red", + "bbox": [ + 465, + 127, + 496, + 135 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "B:Green", + "bbox": [ + 531, + 127, + 570, + 135 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "C:Yellow", + "bbox": [ + 465, + 138, + 504, + 147 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "D: Black", + "bbox": [ + 531, + 138, + 566, + 147 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "E: None of the above", + "bbox": [ + 465, + 152, + 553, + 160 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Ans: E", + "bbox": [ + 465, + 164, + 496, + 172 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/5087bbbb5f96b492d6b311016dcce02b6e4f12ecd9e9eba8e797faa0bdecce5e.jpg", + "image_caption": [ + "Figure 17: Visualized Attention Maps for Color Recognition Tasks." 
+ ], + "image_footnote": [], + "bbox": [ + 271, + 188, + 727, + 338 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/50635e4a4b1df714a947e01dc9ddecc80979b357b7db276e0f815d4b4e049a57.jpg", + "image_caption": [ + "What object has green color in this" + ], + "image_footnote": [], + "bbox": [ + 313, + 380, + 490, + 470 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "image?", + "bbox": [ + 493, + 401, + 529, + 411 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "A: Grass", + "bbox": [ + 493, + 414, + 531, + 422 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "B:Flower", + "bbox": [ + 560, + 412, + 602, + 421 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "C:Leaf", + "bbox": [ + 493, + 426, + 524, + 434 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "D: Fruit", + "bbox": [ + 560, + 426, + 593, + 434 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Ans: C", + "bbox": [ + 493, + 439, + 524, + 446 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/fa210125aa3d22e54cb9811de70703cd5921bf9d29a5e7a01dd3a531b460f26c.jpg", + "image_caption": [ + "Figure 18: Visualized Attention Maps for Object Recognition Tasks." + ], + "image_footnote": [], + "bbox": [ + 271, + 474, + 727, + 592 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/add590e2395c5b4a230b5e76843887f0bfd0c9e74e535b99ab676e4a85929d4e.jpg", + "image_caption": [ + "What color in the pie chart has the", + "proportion closest to $25\\%$ ?" 
+ ], + "image_footnote": [], + "bbox": [ + 349, + 641, + 450, + 720 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "A: Light blue B:Green", + "bbox": [ + 468, + 667, + 573, + 676 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "C: Purple D:Cyan", + "bbox": [ + 468, + 681, + 568, + 690 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Ans: A", + "bbox": [ + 468, + 694, + 501, + 700 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/c6facafc15e401d6c68425642e147e60adf5498011430644825bbd7ee0537c12.jpg", + "image_caption": [ + "Figure 19: Visualized Attention Maps for Color Proportion Tasks." + ], + "image_footnote": [], + "bbox": [ + 271, + 729, + 727, + 878 + ], + "page_idx": 20 + }, + { + "type": "page_number", + "text": "21", + "bbox": [ + 488, + 935, + 506, + 946 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/e2968b8a9c0fd3c158e3bea02d271adcea3ac376cd9b89fff66f51a56e443633.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 328, + 94, + 470, + 186 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Which lipstick in this image is the darkest", + "bbox": [ + 477, + 109, + 661, + 119 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "color?", + "bbox": [ + 478, + 122, + 509, + 131 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "A:ACAI", + "bbox": [ + 478, + 133, + 514, + 143 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "B: SANGRIA", + "bbox": [ + 576, + 135, + 630, + 143 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "C:PASSION RED", + "bbox": [ + 478, + 147, + 553, + 156 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "D: PINK CLAY", + "bbox": [ + 576, + 147, + 635, + 156 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Ans: A", + "bbox": [ + 478, + 160, + 511, + 169 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": 
"images/ebe28c76df70c5ce8ccb97d1d332bdbb848b826e49a2cb8661c134c846d09ceb.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 271, + 213, + 344, + 260 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/a07a140720b03acc33118f625e4d50c37e4c46e232872dbe80336db897030531.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 346, + 213, + 419, + 260 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/502803c4b25067d3812819d9156ff26c57eba1d40729001effc16d7db38567cc.jpg", + "image_caption": [ + "Qwen2.5-VL-3B" + ], + "image_footnote": [], + "bbox": [ + 423, + 213, + 496, + 260 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/dad9c742ce073687e861db5cbdc225cf71a5e83bfd896f85a0eb676ba55ea560.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 501, + 213, + 573, + 260 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/b39f08f18e170c13c05003ddcd77bfc2996d090dfb6e4475ca2d89263859aeec.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 576, + 213, + 648, + 260 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/198e05f55f9336c87de7bb4cbdd438d7f2edcbcb1590f30c3cd73974e0cdc09a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 653, + 213, + 725, + 260 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/af39cdfe500e95bdd08905edb4749d8129a2f8ee61d64bafab000d32e728a7c0.jpg", + "image_caption": [ + "Figure 20: Visualized Attention Maps for Color Comparison Tasks." 
+ ], + "image_footnote": [], + "bbox": [ + 271, + 282, + 343, + 329 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/a2103a3962c6d4be98739201fc14b55d24278707289c018a67f8a5309310c679.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 346, + 282, + 419, + 329 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/d3a29f42cb22cd1ea8c99c241ac8c5d1bfd2c1b5f3cce2cddd10a0ca1eab4d6d.jpg", + "image_caption": [ + "Qwen2.5-VL-7B" + ], + "image_footnote": [], + "bbox": [ + 423, + 282, + 496, + 329 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/77dc27ad408af46dbcd03238321afb88286d84c2b4ed903c844c328624a0bbbb.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 501, + 282, + 573, + 329 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/3570068575ee9af5b65b70a0654db870b9a2617c50a7f2c9a7a727687dd8e1e9.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 576, + 282, + 648, + 329 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/f57fbd9ffd01f21190facbf62662759bac7e341fb7bf692d83794e59d59daf9a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 653, + 282, + 725, + 329 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/3a3c3dd6e00e5e5f63dcc443900b3048b1881233c93d46a9c26c0b87f2f99798.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 341, + 375, + 460, + 465 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "How many colors are used for arrows in", + "bbox": [ + 467, + 391, + 642, + 400 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "this image?", + "bbox": [ + 467, + 402, + 521, + 412 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "A:6 B:7", + "bbox": [ + 467, + 415, + 552, + 425 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "C:8 D:9", + "bbox": [ + 467, + 428, + 553, + 436 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Ans: 
A", + "bbox": [ + 467, + 440, + 500, + 449 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/b6d5282bc92abd52d6becf2f7340a6ae9ca1a48d6920ddddaa746fcf8782aa9f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 271, + 494, + 343, + 550 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/4116f0b5b49af5a3cac51843675a4317a13142a281145e9039747c9e002e759a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 346, + 494, + 419, + 550 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/43e38632a2ee3658648a88819e5fe95c13a28ae4333204b823dde3d1cd09cf97.jpg", + "image_caption": [ + "Qwen2.5-VL-3B" + ], + "image_footnote": [], + "bbox": [ + 423, + 494, + 496, + 550 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/585028e2d842e3528dba16b1de61dc399959caf042a242ea0841d7cb057a7e37.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 501, + 494, + 573, + 550 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/3af500d9cb45fba5c4a73861998a283c8a9cc70fb4cf8e372f7ca263f0feb27e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 576, + 494, + 648, + 550 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/1951cf69fe3a3f287632b972067456bce819b93ec6831e1889e94c9101a2fe8f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 653, + 494, + 725, + 550 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/a1f9a6f7c1bcbfdeee124bd440f0aa018fa48c6ce34f5c7f172fd96f97a49ed0.jpg", + "image_caption": [ + "Figure 21: Visualized Attention Maps for Color Counting Tasks."
+ ], + "image_footnote": [], + "bbox": [ + 271, + 570, + 343, + 625 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/fdb4a842f5ab20016d34fb60569fa8554f488ee6c5170b4dd8d45b0dcbfa4292.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 346, + 570, + 419, + 625 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/13d7883fc7e827bcac012b1fb2ab964aaf7a3265f1198697e64b61ea9e81398d.jpg", + "image_caption": [ + "Qwen2.5-VL-7B" + ], + "image_footnote": [], + "bbox": [ + 423, + 570, + 496, + 625 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/59f5fe2516e44a500ab03863569ab00cc0d6016540860e0d0d57a00d8b095063.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 501, + 570, + 573, + 625 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/d20c644c5d2b9fc3e5d5d54434acdbc990b2c09733bc998ace81a4f93d129a70.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 576, + 570, + 648, + 625 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/0fb181a5b57dfa3e33bae5354fe1fdf5fd0148050df7315097aac6c71965aae6.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 653, + 570, + 725, + 625 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/ac8abab7a75fa8fb34bc4f332ee1c8a10d0f8ec6dd527f634fd140320687390f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 313, + 670, + 491, + 762 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "How many gray animals are in this", + "bbox": [ + 493, + 681, + 643, + 691 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "image?", + "bbox": [ + 493, + 694, + 529, + 703 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "A:5", + "bbox": [ + 493, + 705, + 511, + 715 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "B:6", + "bbox": [ + 560, + 707, + 578, + 715 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "C:4", + "bbox": [ + 
493, + 718, + 511, + 727 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "D:3", + "bbox": [ + 527, + 718, + 545, + 727 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "E:7", + "bbox": [ + 558, + 718, + 576, + 727 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "", + "bbox": [ + 586, + 718, + 598, + 727 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Ans: C", + "bbox": [ + 493, + 731, + 526, + 739 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/5b623d590d48725f8566e2b72e2d7732cdb7ff016844bd62d1289bd7e0fc9c50.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 271, + 782, + 343, + 820 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/c679b7bb01346a8afdd10c2c55d4a037959775080db0aeda3194595a676bb15b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 346, + 782, + 419, + 820 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/d18d9f446eec8763b494d8efc0fdc2b1db35ca9af0a42f51df663670312291f1.jpg", + "image_caption": [ + "Qwen2.5-VL-3B" + ], + "image_footnote": [], + "bbox": [ + 423, + 782, + 496, + 820 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/6e51fde140ca697a915ea528fdd754f3797bb4a3669ea9d905dd543aa9136b99.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 501, + 782, + 573, + 820 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/413e8e196f43aef374359190442749dbc2b48bf22c997bb2562083749e9cda77.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 576, + 782, + 648, + 820 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/10ac1e7d129b832af82db614f4a21768f8dc6b3aaf75c45d9f27061e7678b206.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 653, + 782, + 725, + 820 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/01a88f419c52c026af431dd8e0219bc5c86fdaa4868c47c7885cf0e104b5b252.jpg", + 
"image_caption": [ + "Figure 22: Visualized Attention Maps for Object Counting Tasks." + ], + "image_footnote": [], + "bbox": [ + 271, + 842, + 343, + 878 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/bcd00c318f7f3748f7ddd8f40bb7f11ac253fa5d7594515bdcf550074b42b214.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 346, + 842, + 419, + 878 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/e1521cc88cda5b7132e19a9b6e08e1b236abd7de6b389882cd8d89ff8cd71f0c.jpg", + "image_caption": [ + "Qwen2.5-VL-7B" + ], + "image_footnote": [], + "bbox": [ + 423, + 842, + 496, + 878 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/ba26ce37a543827ab018fbb1147492ec152fee662a1e935170eefb74cfd6916a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 501, + 842, + 573, + 878 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/9645212959a5659a2b2b5517bde0fd806c561ee2ecbde8e706131d02d7602ead.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 576, + 842, + 648, + 878 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/84305a9086c242e1766b052b273d35d1f49d0530e1e427bc362698befb29a401.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 653, + 842, + 725, + 878 + ], + "page_idx": 21 + }, + { + "type": "page_number", + "text": "22", + "bbox": [ + 488, + 935, + 508, + 946 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/abc6371b7e79ce4293c09cde16fd2c34c1ee6af182d6a212a1eea8c3fd220603.jpg", + "image_caption": [ + "Which circles has the darkest color? 
The circles are numbered left to right starting" + ], + "image_footnote": [ + "from 1.", + "A: All the same", + "C:2 D:3" + ], + "bbox": [ + 271, + 113, + 524, + 167 + ], + "page_idx": 22 + }, + { + "type": "image", + "img_path": "images/2db69e23d144bf7a5e7712fc4b21a7ae5f301356cf2cdbcebb6681262bee666d.jpg", + "image_caption": [ + "Figure 23: Visualized Attention Maps for Color Illusion Tasks." + ], + "image_footnote": [ + "B:1", + "Ans: A" + ], + "bbox": [ + 271, + 200, + 727, + 273 + ], + "page_idx": 22 + }, + { + "type": "image", + "img_path": "images/d7e6c7ad93864c2526094df0ff56240f5074c112d0eb2ab765f3a03b33ce042c.jpg", + "image_caption": [ + "How many black sea snakes in this images?" + ], + "image_footnote": [ + "A:0 B:1", + "C:2 D:3", + "Ans: A" + ], + "bbox": [ + 297, + 343, + 506, + 435 + ], + "page_idx": 22 + }, + { + "type": "image", + "img_path": "images/d6504c1ad7498e6665534d719eb3b9f61dd679660f6f92c13ebc02cdb8da3bb5.jpg", + "image_caption": [ + "Figure 24: Visualized Attention Maps for Color Mimicry Tasks." + ], + "image_footnote": [], + "bbox": [ + 271, + 438, + 727, + 547 + ], + "page_idx": 22 + }, + { + "type": "image", + "img_path": "images/e84813dd6436f2be3c2a5b1c9a618ed87b435b246a9f271093bc9aa695cd3f28.jpg", + "image_caption": [ + "What is the number in the center of this" + ], + "image_footnote": [ + "image?", + "A:4 B:7", + "C:18 D:22", + "Ans: C" + ], + "bbox": [ + 343, + 617, + 455, + 705 + ], + "page_idx": 22 + }, + { + "type": "image", + "img_path": "images/de903f7ef6d2cd449ffbc8b99d7a07e385b6515dbe6f5eb135f50dc9800c77d1.jpg", + "image_caption": [ + "Figure 25: Visualized Attention Maps for Color Blindness Tasks." 
+ ], + "image_footnote": [], + "bbox": [ + 271, + 715, + 727, + 864 + ], + "page_idx": 22 + }, + { + "type": "page_number", + "text": "23", + "bbox": [ + 488, + 935, + 508, + 946 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "J Effect of Different Modalities", + "text_level": 1, + "bbox": [ + 171, + 89, + 450, + 107 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "To investigate the impact of color information, we compare model performance on RGB versus grayscale images, thereby isolating the role of color within the image modality. To further explore the contribution of the image modality, we also conduct experiments using textual input only (questions and answer choices), where the original input images are substituted with pure black images of identical dimensions.", + "bbox": [ + 169, + 119, + 826, + 189 + ], + "page_idx": 23 + }, + { + "type": "table", + "img_path": "images/d60ff358df2811d8830a0caebeed2f35e40a50d32131cd91bafe0c4f1c943739.jpg", + "table_caption": [ + "Table 9: Average Accuracy (\\%) across three input settings (Text-only, Grayscale+Text, RGB+Text) on Color Perception and Reasoning tasks." + ], + "table_footnote": [], + "table_body": "
Color PerceptionColor ReasoningP & R
C'RecogC'ExtractO'RecogC'PropC'CompC'CountO'CountC'IlluC'MimicC'BlindOverall
VLMs: < 7B
Text-only29.230.631.629.635.324.520.635.541.723.429.3
Gray+Text25.933.542.729.137.123.223.342.453.723.032.1
RGB+Text55.335.763.637.342.422.526.137.550.625.037.4
VLMs: 7B - 8B
Text-only23.735.432.320.629.718.419.336.736.921.126.7
Gray+Text25.235.746.027.841.322.227.548.258.723.634.2
RGB+Text60.442.473.041.849.122.732.741.550.023.441.1
VLMs: 10B - 30B
Text-only26.933.632.825.034.726.522.338.240.018.928.9
Gray+Text26.837.946.822.546.522.430.143.060.326.035.0
RGB+Text68.441.579.743.051.325.334.433.855.426.643.2
VLMs: 30B - 70B
Text-only28.936.531.816.329.015.416.342.733.615.925.6
Gray+Text28.742.151.226.349.924.325.648.865.122.736.7
RGB+Text73.448.881.649.555.224.737.336.161.125.546.2
VLMs: > 70B
Text-only26.047.435.720.936.921.624.035.833.921.829.8
Gray+Text25.340.954.625.351.021.828.644.654.326.136.1
RGB+Text73.454.782.545.662.426.739.633.953.929.647.6
", + "bbox": [ + 173, + 239, + 823, + 506 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Table 9 presents the average accuracy across models grouped by LLM size. The result demonstrates that removing the visual modality (text-only setting) leads to the lowest performance across the majority of tasks. The performance differences among the three input settings allow us to disentangle the impact of textual input, image context (excluding color), and color information itself.", + "bbox": [ + 169, + 518, + 823, + 575 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Notably, in tasks such as Color Recognition and Object Recognition, the performance gap between text-only and grayscale experiments is relatively small, whereas both are significantly outperformed by the RGB input setting. This suggests that color cues play a substantially more important role than either contextual visual or textual information in these tasks.", + "bbox": [ + 169, + 580, + 823, + 638 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "K Fine-tuning Experiments on ColorBench", + "text_level": 1, + "bbox": [ + 171, + 656, + 552, + 674 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "We conduct a series of fine-tuning experiments to investigate model adaptation on specialized color-centric tasks. These experiments leverage three synthetic datasets designed for Color Extraction, Color Illusion, and Color Blindness. Using our synthetic data generation pipeline, we curate dedicated training sets for this purpose, with sample counts summarized in Table 10.", + "bbox": [ + 169, + 686, + 826, + 743 + ], + "page_idx": 23 + }, + { + "type": "table", + "img_path": "images/5170edb4da81e1095363d9d239e153782c4a4ddd277014be36ab7a1d76040d6a.jpg", + "table_caption": [ + "Table 10: Number of synthetic samples generated for fine-tuning experiments." + ], + "table_footnote": [], + "table_body": "
TaskNumber of Samples
Color Extraction2400
Color Illusion2400
Color Blindness2280
", + "bbox": [ + 367, + 781, + 625, + 847 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "To systematically assess the influence of different model components, we perform a comprehensive ablation study on Qwen2.5-VL-3B and Qwen2.5-VL-7B with the following settings:", + "bbox": [ + 169, + 858, + 823, + 887 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "- MLP only", + "bbox": [ + 215, + 897, + 303, + 912 + ], + "page_idx": 23 + }, + { + "type": "page_number", + "text": "24", + "bbox": [ + 488, + 935, + 508, + 946 + ], + "page_idx": 23 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Vision encoder only", + "- MLP + Vision encoder (jointly)", + "- LLM (LoRA) only", + "- LLM (LoRA) + MLP", + "- LLM (LoRA) + Vision encoder", + "- LLM (LoRA) + MLP + Vision encoder (jointly)" + ], + "bbox": [ + 215, + 90, + 547, + 272 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "For configurations involving the LLM, we adopt the LoRA approach to update a subset of its parameters, while the remaining modules are fully fine-tuned.", + "bbox": [ + 169, + 294, + 823, + 323 + ], + "page_idx": 24 + }, + { + "type": "table", + "img_path": "images/9b96657fefa1d52defb48a32a8eb92da5620c7813c002852c292ef28b297a613.jpg", + "table_caption": [ + "Table 11: Accuracy (%) of Qwen2.5-VL (3B and 7B) under different training strategies across ColorBench tasks. Bold numbers indicate the best results within each model group." + ], + "table_footnote": [], + "table_body": "
ModelTrainable ModulesColor PerceptionColor ReasoningP&R
LLM (LoRA)MLPVisionC'RecogC'ExtractO'RecogC'PropC'CompC'CountO'CountC'IlluC'MimicC'BlindOverall
Qwen2.5-3B72.438.574.043.848.522.625.243.045.724.241.1
71.153.175.350.049.522.526.245.244.325.543.6
73.753.179.246.345.529.427.248.447.125.544.4
75.056.375.347.549.528.425.246.247.128.045.2
71.175.070.145.051.526.527.245.247.127.446.2
69.777.174.040.053.523.532.051.645.737.648.8
71.175.071.446.349.525.527.249.448.631.446.7
72.475.071.445.051.524.332.046.250.028.047.1
Qwen2.5-7B76.349.084.447.552.519.634.044.155.728.746.2
72.442.784.442.559.420.629.145.247.128.745.2
77.659.481.847.556.425.529.151.650.035.651.2
78.961.580.541.355.420.629.147.348.630.147.7
75.078.183.151.360.421.635.052.754.335.652.4
72.482.383.151.357.419.630.151.652.933.151.2
75.083.383.145.056.415.730.153.854.333.151.5
77.682.383.150.055.523.331.152.755.733.151.7
", + "bbox": [ + 174, + 393, + 823, + 545 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "The evaluation results with finetuned VLMs are shown in Table 11. Overall, models that include LoRA fine-tuning on the LLM component consistently outperform those without it, exhibiting a substantial improvement in overall accuracy. Importantly, the improvements are not confined to the directly targeted tasks (Color Extraction, Color Illusion, Color Blindness). These experiments show that fine-tuning the model on part of tasks also produces notable gains on some ancillary reasoning tasks, including Color Proportion, and Color Comparison.", + "bbox": [ + 169, + 574, + 823, + 657 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "However, the transfer of knowledge is not universally positive. Certain tasks demonstrated limited or even negative performance transfer, indicating that fine-tuning exclusively on specialized color objectives does not guarantee generalization across the full spectrum of color perception and reasoning. This finding underscores that while targeted training enhances specialized abilities, a balanced and robust performance profile necessitates the inclusion of more diverse data and training objectives.", + "bbox": [ + 169, + 664, + 826, + 736 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "L More Visualizations", + "text_level": 1, + "bbox": [ + 169, + 768, + 375, + 784 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "L.1 VLM Size & Model Performance for Each Task", + "text_level": 1, + "bbox": [ + 169, + 809, + 547, + 824 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "Figure 26 to 35 present detailed correlations between the log-scaled sizes of VLM parameters and the performance metrics for each task of Perception and Reasoning Categories. Deeper color represents higher accuracy. Each line represents a model family with the sizes growing from small to large. 
This visualization clearly shows the correlation between performance and model size: larger models generally achieve higher accuracy.", + "bbox": [ + 169, + 842, + 826, + 912 + ], + "page_idx": 24 + }, + { + "type": "page_number", + "text": "25", + "bbox": [ + 488, + 935, + 506, + 946 + ], + "page_idx": 24 + }, + { + "type": "image", + "img_path": "images/3f83ce7e7e71f790f9e093962ea0933eb8a6757a7402ab480ba182d30d352441.jpg", + "image_caption": [ + "Figure 26: Heatmap for Color Recognition." + ], + "image_footnote": [], + "bbox": [ + 173, + 87, + 486, + 234 + ], + "page_idx": 25 + }, + { + "type": "image", + "img_path": "images/6429a0ce7abb3003695d788a3416e20bd3119f6c0ebaf408e56e6793e79d84ce.jpg", + "image_caption": [ + "Figure 27: Heatmap for Color Extraction." + ], + "image_footnote": [], + "bbox": [ + 509, + 87, + 823, + 234 + ], + "page_idx": 25 + }, + { + "type": "image", + "img_path": "images/84700c8bb9290b42ef38b3914ddeff9007792b24517af4aa1f668cec87cd67a6.jpg", + "image_caption": [ + "Figure 28: Heatmap for Object Recognition." + ], + "image_footnote": [], + "bbox": [ + 173, + 304, + 486, + 450 + ], + "page_idx": 25 + }, + { + "type": "image", + "img_path": "images/7a9b92c734e7a87edf87a504d18d2aa342a3d80761632aa41c1e7ff012e61126.jpg", + "image_caption": [ + "Figure 29: Heatmap for Color Proportion." + ], + "image_footnote": [], + "bbox": [ + 509, + 304, + 823, + 450 + ], + "page_idx": 25 + }, + { + "type": "image", + "img_path": "images/53ab8e5968fb097f710c7eea5c3a96eeca54b112f621172f634379c04871c70f.jpg", + "image_caption": [ + "Figure 30: Heatmap for Color Comparison." + ], + "image_footnote": [], + "bbox": [ + 173, + 521, + 486, + 669 + ], + "page_idx": 25 + }, + { + "type": "image", + "img_path": "images/7118bdafa8a32f23b2a2cdd87b2e0125f791fe1d4009abdb46d541f63544ac6b.jpg", + "image_caption": [ + "Figure 31: Heatmap for Color Counting."
+ ], + "image_footnote": [], + "bbox": [ + 509, + 521, + 823, + 669 + ], + "page_idx": 25 + }, + { + "type": "image", + "img_path": "images/f5414f6db50b112cf0f92e69eacd6f077ea8fc62a22e614a4eb4b1939837c066.jpg", + "image_caption": [ + "Figure 32: Heatmap for Object Counting." + ], + "image_footnote": [], + "bbox": [ + 173, + 739, + 485, + 885 + ], + "page_idx": 25 + }, + { + "type": "image", + "img_path": "images/e2e444cfa3527af494883e988cd0abd80b558f1d182bb536ebe8e991e6a0f6ad.jpg", + "image_caption": [ + "Figure 33: Heatmap for Color Illusion." + ], + "image_footnote": [], + "bbox": [ + 509, + 739, + 823, + 885 + ], + "page_idx": 25 + }, + { + "type": "page_number", + "text": "26", + "bbox": [ + 488, + 935, + 508, + 946 + ], + "page_idx": 25 + }, + { + "type": "image", + "img_path": "images/ab15d66389f875f3cc3c3133c3751eee7abe2446e0446e125cbf82ed3d4036d8.jpg", + "image_caption": [ + "Figure 34: Heatmap for Color Mimicry." + ], + "image_footnote": [], + "bbox": [ + 173, + 87, + 486, + 234 + ], + "page_idx": 26 + }, + { + "type": "image", + "img_path": "images/4ae0e07916db79850cc8634953680899bd58e1ba441b286aa0600a40cd4334a7.jpg", + "image_caption": [ + "Figure 35: Heatmap for Color Blindness." + ], + "image_footnote": [], + "bbox": [ + 508, + 87, + 823, + 234 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "L.2 Vision Size & Model Performance for Each Task", + "text_level": 1, + "bbox": [ + 169, + 337, + 552, + 351 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "Figure 36 to 40 show detailed correlations between the log-scaled sizes of vision encoders and the performance metrics for each task of Perception and Reasoning Categories. Colors represent different model families. Models that have the same vision encoder sizes but with different LLM sizes are plotted as different points. 
Given that the majority of Vision-Language Models (VLMs) use a single type of vision encoder, and that the sizes of these encoders generally range from 300M to 400M, it is challenging to assess the scaling effects within vision encoders.", + "bbox": [ + 169, + 387, + 823, + 472 + ], + "page_idx": 26 + }, + { + "type": "image", + "img_path": "images/25f940ec0eb0925581bae443b2c3aae4a1fb1ea2333c422de4b697e54d207c5b.jpg", + "image_caption": [ + "Figure 36: The scatter plot for Color Recognition and Color Extraction." + ], + "image_footnote": [], + "bbox": [ + 171, + 540, + 486, + 686 + ], + "page_idx": 26 + }, + { + "type": "image", + "img_path": "images/45a2901fcb11ba711d9bd570c3bbde21465db2de5ac780ff5d52b54ec7a41ff9.jpg", + "image_caption": [ + "Figure 37: The scatter plot for Object Recognition and Color Proportion." + ], + "image_footnote": [], + "bbox": [ + 506, + 540, + 821, + 686 + ], + "page_idx": 26 + }, + { + "type": "image", + "img_path": "images/a1b41d1272bee26b3739b7e4f2f30fcda33192cafbc666b06df4ea1ddcab1b33.jpg", + "image_caption": [ + "Figure 38: The scatter plot for Color Comparison and Color Counting." + ], + "image_footnote": [], + "bbox": [ + 171, + 726, + 486, + 872 + ], + "page_idx": 26 + }, + { + "type": "image", + "img_path": "images/c05e6ccf8b74e62f9ce387d772203df9eef31941b4a941aeec61de9694a48bd6.jpg", + "image_caption": [ + "Figure 39: The scatter plot for Object Counting and Color Illusion." + ], + "image_footnote": [], + "bbox": [ + 506, + 726, + 821, + 872 + ], + "page_idx": 26 + }, + { + "type": "page_number", + "text": "27", + "bbox": [ + 488, + 935, + 506, + 946 + ], + "page_idx": 26 + }, + { + "type": "image", + "img_path": "images/b0e9755c8746794e00271b97f98ea952445567fabab20510299d4a93e0b7a407.jpg", + "image_caption": [ + "Figure 40: The scatter plot for Color Mimicry and Color Blindness."
+ ], + "image_footnote": [], + "bbox": [ + 171, + 85, + 488, + 234 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "L.3 Performance for Each Model Family on Each Task", + "text_level": 1, + "bbox": [ + 169, + 349, + 568, + 364 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "Figures 41 to 47 illustrate task performance across different models within the same model families. In general, models with more parameters tend to perform better on the majority of tasks.", + "bbox": [ + 169, + 398, + 826, + 429 + ], + "page_idx": 27 + }, + { + "type": "image", + "img_path": "images/15517e3c9e23e1341c37406ca32c66703ceeb7ccc18b2d8cec1dde8a6540f1d9.jpg", + "image_caption": [ + "Figure 41: Performance of LLaVA-OV models." + ], + "image_footnote": [], + "bbox": [ + 173, + 431, + 437, + 630 + ], + "page_idx": 27 + }, + { + "type": "image", + "img_path": "images/55139a24a0398f1a50635bb011eea4dd2d4f541f80a6f0a5595eb6a8d1ed4fa4.jpg", + "image_caption": [ + "Figure 43: Performance of Cambrian models." + ], + "image_footnote": [], + "bbox": [ + 173, + 681, + 436, + 878 + ], + "page_idx": 27 + }, + { + "type": "image", + "img_path": "images/aef346c945483778332310a8f57554bf20287e4e50626ad755cbc0fbd4d16ef1.jpg", + "image_caption": [ + "Figure 42: Performance of LLaVA-NEXT models." + ], + "image_footnote": [], + "bbox": [ + 560, + 439, + 823, + 630 + ], + "page_idx": 27 + }, + { + "type": "image", + "img_path": "images/d4f47a3cfea74dbcdba6be6cae5c3de1604c855186200d533b4feaf81cebecaa.jpg", + "image_caption": [ + "Figure 44: Performance of Eagle models." + ], + "image_footnote": [], + "bbox": [ + 560, + 681, + 823, + 878 + ], + "page_idx": 27 + }, + { + "type": "page_number", + "text": "28", + "bbox": [ + 488, + 935, + 508, + 946 + ], + "page_idx": 27 + }, + { + "type": "image", + "img_path": "images/16670a54267741e9ab1d271281b1679ab4efd87b62f63112618b4dd4ea1d0cb4.jpg", + "image_caption": [ + "Figure 45: Performance of InternVL2 models." 
+ ], + "image_footnote": [], + "bbox": [ + 173, + 85, + 436, + 282 + ], + "page_idx": 28 + }, + { + "type": "image", + "img_path": "images/de976e631cf087e9b98fcbfebdd631aec38341bb046ccdaefd2e46c2c21360a0.jpg", + "image_caption": [ + "Figure 47: Performance of Qwen2.5 models." + ], + "image_footnote": [], + "bbox": [ + 173, + 323, + 436, + 521 + ], + "page_idx": 28 + }, + { + "type": "image", + "img_path": "images/5a5024a6c0db75938d1896d978255bbae4667cfb4e6b4ed5c29aec27e99ba6f2.jpg", + "image_caption": [ + "Figure 46: Performance of InternVL2.5 models." + ], + "image_footnote": [], + "bbox": [ + 560, + 87, + 823, + 282 + ], + "page_idx": 28 + }, + { + "type": "page_number", + "text": "29", + "bbox": [ + 488, + 935, + 508, + 946 + ], + "page_idx": 28 + }, + { + "type": "text", + "text": "M Sample Cases", + "text_level": 1, + "bbox": [ + 171, + 89, + 339, + 107 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "M.1 Effect of CoT", + "text_level": 1, + "bbox": [ + 171, + 127, + 316, + 142 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "In this section, we present, for each task, cases in which the answers are influenced by adding reasoning steps. For most of the tasks in COLORBENCH, adding reasoning steps can significantly improve model performance. The sample cases for the Perception and Reasoning categories are shown in Figures 48 to 57. A case for the Robustness category is shown in Figure 58.", + "bbox": [ + 169, + 157, + 826, + 218 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Color Recognition", + "text_level": 1, + "bbox": [ + 264, + 237, + 395, + 252 + ], + "page_idx": 29 + }, + { + "type": "image", + "img_path": "images/b0442098f58804ee226a7f7ba18702f450572f8c433ea41eb00f0a4f129914d1.jpg", + "image_caption": [ + "Figure 48: Case with CoT for Color Recognition task."
+ ], + "image_footnote": [], + "bbox": [ + 192, + 263, + 282, + 316 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "What color does not exist in this image?", + "bbox": [ + 295, + 263, + 473, + 273 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "A:White B:Purple C:Yellow", + "bbox": [ + 297, + 276, + 418, + 284 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "D:Red E:No answer", + "bbox": [ + 297, + 289, + 392, + 297 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Ans: E", + "bbox": [ + 297, + 301, + 328, + 309 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Answer Before CoT:", + "bbox": [ + 285, + 321, + 374, + 330 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "GPT-40:A", + "bbox": [ + 305, + 340, + 352, + 349 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Answer After CoT:", + "bbox": [ + 289, + 359, + 370, + 368 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "GPT40:", + "text_level": 1, + "bbox": [ + 181, + 378, + 214, + 386 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Let's analyze the image step by step to determine which color does not exist:", + "bbox": [ + 181, + 387, + 465, + 402 + ], + "page_idx": 29 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. White - The background of the image is white.", + "2. Purple - The first figure on the left is purple.", + "3. Yellow - The figure near the front is yellow.", + "4. Red - The figure in the front right position is red.", + "All the colors listed appear in the image. 
Therefore, the answer is: (E) No answer" + ], + "bbox": [ + 181, + 402, + 473, + 453 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Color Extraction", + "text_level": 1, + "bbox": [ + 606, + 237, + 727, + 250 + ], + "page_idx": 29 + }, + { + "type": "image", + "img_path": "images/8192af9e15181e04ba5197f2d80fe008b70cfb88034d5496af2db6433271d90d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 532, + 256, + 609, + 319 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "What is the HEX value of the given color in the image?", + "bbox": [ + 633, + 263, + 808, + 286 + ], + "page_idx": 29 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "A: #AD2B86 B: #AD3087", + "C:#AD2D8B D:#050405" + ], + "bbox": [ + 633, + 287, + 738, + 310 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Ans: A", + "bbox": [ + 777, + 300, + 808, + 309 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Answer Before CoT:", + "bbox": [ + 622, + 321, + 710, + 330 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "GPT-40: B", + "bbox": [ + 643, + 340, + 689, + 349 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Answer After CoT:", + "bbox": [ + 625, + 359, + 705, + 368 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "GPT40:", + "text_level": 1, + "bbox": [ + 517, + 378, + 550, + 386 + ], + "page_idx": 29 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "To determine the HEX value of the given color, I'll analyze the image and compare it to the provided options:", + "1. The color appears to be a shade of magenta or purple.", + "2. I will now compare it with the options:", + "-A#AD2B86", + "-(B)#AD3087", + "(C)#AD2D8B ", + "(D) #050405", + "The color matches option (A) #AD2B86", + "Therefore, the correct answer is (A)." 
+ ], + "bbox": [ + 517, + 387, + 808, + 467 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Object Recognition", + "text_level": 1, + "bbox": [ + 261, + 549, + 398, + 564 + ], + "page_idx": 29 + }, + { + "type": "image", + "img_path": "images/598ea378274d0f35eee2414513c0a6c3c6ea1f6afb599e519166d9d44be6d90a.jpg", + "image_caption": [ + "Figure 50: Case with CoT for Object Recognition task." + ], + "image_footnote": [], + "bbox": [ + 194, + 569, + 272, + 630 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Which object has a color of yellow in", + "bbox": [ + 297, + 575, + 457, + 584 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "this image?", + "bbox": [ + 297, + 588, + 349, + 597 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "A:Apple B:Snowman", + "bbox": [ + 297, + 599, + 410, + 609 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "C: Cucumber D: Umbrella", + "bbox": [ + 297, + 612, + 408, + 621 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Ans: A", + "bbox": [ + 439, + 613, + 472, + 621 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Answer Before CoT:", + "bbox": [ + 285, + 633, + 374, + 642 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Gemini: D", + "bbox": [ + 307, + 652, + 352, + 661 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Answer After CoT:", + "bbox": [ + 289, + 671, + 370, + 679 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Gemini:", + "text_level": 1, + "bbox": [ + 181, + 690, + 215, + 696 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Let's analyze the image.", + "bbox": [ + 181, + 698, + 277, + 705 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "(A) The apple has multiple colors like red, green, and yellow.", + "bbox": [ + 181, + 707, + 423, + 715 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "(B) The snowman is white.", + "bbox": [ + 181, + 715, + 287, + 723 + ], + "page_idx": 29 + }, 
+ { + "type": "text", + "text": "(C) The cucumber is green.", + "bbox": [ + 181, + 723, + 287, + 729 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "(D) The umbrella has multiple colors.", + "bbox": [ + 181, + 731, + 328, + 739 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Therefore, the correct option is apple, since it can have yellow color.", + "bbox": [ + 181, + 739, + 452, + 747 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "(A)", + "bbox": [ + 181, + 748, + 194, + 756 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Color Proportion", + "text_level": 1, + "bbox": [ + 604, + 550, + 727, + 564 + ], + "page_idx": 29 + }, + { + "type": "image", + "img_path": "images/2e375caac04c7901ff50997c42a0cd1dd1778986aa8a9a21e1b4d410923a35d9.jpg", + "image_caption": [ + "Figure 49: Case with CoT for Color Extraction task. Option backgrounds correspond to their color codes." + ], + "image_footnote": [], + "bbox": [ + 519, + 570, + 557, + 599 + ], + "page_idx": 29 + }, + { + "type": "image", + "img_path": "images/f6adbdd4e43b49dcc7349a16ff5fe996e8ccd0d596878b5fd99f8e3e39b2175d.jpg", + "image_caption": [ + "Figure 51: Case with CoT for Color Proportion task." 
+ ], + "image_footnote": [], + "bbox": [ + 563, + 570, + 625, + 601 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "What is closest to the proportion of the color white in the flag?", + "bbox": [ + 633, + 575, + 803, + 597 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "A:5% B:10%", + "bbox": [ + 633, + 599, + 714, + 609 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "C:15% D:20%", + "bbox": [ + 633, + 612, + 715, + 621 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Ans: D", + "bbox": [ + 777, + 613, + 808, + 621 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Answer Before CoT:", + "bbox": [ + 622, + 633, + 710, + 642 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "GPT40:C", + "bbox": [ + 643, + 652, + 687, + 660 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Answer After CoT:", + "bbox": [ + 625, + 671, + 705, + 679 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "GPT40:", + "text_level": 1, + "bbox": [ + 517, + 690, + 550, + 699 + ], + "page_idx": 29 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "To find the proportion of the color white in the flag:", + "1. Analyze the Flag's Design: The flag has a white Scandinavian cross.", + "This cross consists of a vertical and horizontal bar.", + "2. Estimate the Total Area of the Flag: Assume the flag is divided into a grid of even squares, which makes it easier to estimate the area proportions visually.", + "3. Estimate the Area of the White Cross: The white cross occupies approximately one-fifth in width and one-third in height of the flag when considering both the horizontal and vertical bars. Since both bars overlap in the middle, the white area is slightly less than", + "a simple addition.", + "4. Calculate Approximate Proportion:", + "- The horizontal and vertical bars together approximately cover the central columns and rows. 
Since they overlap, a simple", + "estimation shows the white area as less than a third of the flag's total area.", + "- A reasonable estimation of the white area compared to the red areas suggests a proportion closer to $20\\%$ .", + "5. Conclusion: Based on the above estimations, a plausible estimation for the proportion of white in the flag is closest to $20\\%$ .", + "Therefore, the answer is (D) $20\\%$ ." + ], + "bbox": [ + 517, + 699, + 813, + 875 + ], + "page_idx": 29 + }, + { + "type": "page_number", + "text": "30", + "bbox": [ + 488, + 935, + 509, + 946 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Color Comparison", + "text_level": 1, + "bbox": [ + 263, + 108, + 395, + 123 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "From left to right, which strip has the third reddest color?", + "text_level": 1, + "bbox": [ + 181, + 154, + 431, + 162 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "A: The first one.", + "bbox": [ + 181, + 166, + 248, + 175 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "B: The second one.", + "bbox": [ + 264, + 166, + 343, + 174 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "C: The third one.", + "bbox": [ + 181, + 179, + 251, + 186 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "D: The fourth one", + "bbox": [ + 264, + 179, + 339, + 186 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Ans: C", + "bbox": [ + 442, + 179, + 473, + 186 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Answer Before CoT:", + "bbox": [ + 285, + 193, + 374, + 202 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "GPT4o: A", + "text_level": 1, + "bbox": [ + 308, + 212, + 352, + 220 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Answer After CoT:", + "text_level": 1, + "bbox": [ + 289, + 229, + 370, + 239 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "GPT40:", + "text_level": 1, + "bbox": [ + 181, + 250, + 215, + 257 + ], + 
"page_idx": 30 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "To determine which strip has the third reddest color, let's analyze each strip:", + "1. First strip: An orange color, which has some red.", + "2. Second strip: A cream color, which has little to no red.", + "3. Third strip: A light beige color, with a hint of red.", + "4. Fourth strip: A deep red color, with the most red.", + "Based on these observations, the order of redness from most to least is: 1. Fourth strip", + "2. First strip", + "3. Third strip", + "4. Second strip", + "Therefore, the strip with the third reddest color is the third strip. The correct option is (C)" + ], + "bbox": [ + 181, + 258, + 470, + 367 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Object Counting", + "text_level": 1, + "bbox": [ + 271, + 431, + 388, + 446 + ], + "page_idx": 30 + }, + { + "type": "image", + "img_path": "images/cbd2930989e81297795f38a8d335c4f0e436114d40ecacf7ec8c73899c6d3fd2.jpg", + "image_caption": [ + "Figure 52: Case with CoT for Color Comparison task." 
+ ], + "image_footnote": [], + "bbox": [ + 196, + 450, + 276, + 512 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "How many green strawberries are on the tree in this image?", + "bbox": [ + 297, + 458, + 473, + 479 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "A:11 B:13 C:18", + "bbox": [ + 297, + 482, + 408, + 491 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "D:12 E:17", + "bbox": [ + 297, + 494, + 370, + 503 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Ans: D", + "bbox": [ + 442, + 494, + 473, + 503 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Answer Before CoT:", + "bbox": [ + 285, + 515, + 374, + 523 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "GPT4o: B", + "bbox": [ + 307, + 534, + 351, + 542 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Answer After CoT:", + "bbox": [ + 289, + 551, + 370, + 561 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "GPT40:", + "text_level": 1, + "bbox": [ + 181, + 571, + 215, + 579 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "To determine how many green strawberries are on the tree, let's count each visible green one.", + "bbox": [ + 181, + 580, + 462, + 597 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "1. Start from the left side of the tree and move toward the right, looking from top to bottom.", + "bbox": [ + 181, + 598, + 464, + 613 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "2. 
Carefully look for strawberries that are a solid green color.", + "bbox": [ + 181, + 614, + 423, + 623 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "By doing this, I can count the green strawberries on the tree.", + "bbox": [ + 181, + 623, + 423, + 631 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "After counting", + "bbox": [ + 181, + 632, + 236, + 638 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "There are 12 green strawberries.", + "bbox": [ + 181, + 640, + 313, + 647 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "The correct choice is (D) 12.", + "bbox": [ + 181, + 648, + 295, + 656 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Color Counting", + "text_level": 1, + "bbox": [ + 611, + 108, + 722, + 123 + ], + "page_idx": 30 + }, + { + "type": "image", + "img_path": "images/3f61af26b330eccf982ac1be09cc7376167c7cad9084f8470d0dbdb3fba59c06.jpg", + "image_caption": [ + "Figure 53: Case with CoT for Color Counting task." + ], + "image_footnote": [], + "bbox": [ + 513, + 133, + 645, + 181 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "How many colors are there in this image?", + "bbox": [ + 651, + 135, + 799, + 157 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "A:5 B:6", + "bbox": [ + 651, + 159, + 712, + 167 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "C:7 D:8", + "bbox": [ + 651, + 172, + 712, + 181 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Ans: B", + "bbox": [ + 772, + 172, + 803, + 181 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Answer Before CoT:", + "bbox": [ + 622, + 193, + 710, + 202 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "GPT4o: C", + "bbox": [ + 643, + 212, + 689, + 220 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Answer After CoT:", + "bbox": [ + 625, + 229, + 705, + 239 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "GPT40:", + "text_level": 1, + "bbox": [ + 517, + 250, 
+ 550, + 257 + ], + "page_idx": 30 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "To determine the number of colors in the image, let's analyze the different areas:", + "1. Red - used for \"Homo sapiens\" areas.", + "2. Yellow - used for \"Homo neanderthalensis\" areas", + "3. Blue - used for the oceans and seas.", + "4. White - used for land areas not highlighted by other colors.", + "5. Black - used for text and numbers.", + "6. Brown - used for arrows indicating migration paths.", + "Thus, there are six colors in the image.", + "Therefore, the correct option is (B) 6." + ], + "bbox": [ + 517, + 258, + 812, + 340 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Color Illusion", + "text_level": 1, + "bbox": [ + 617, + 431, + 715, + 444 + ], + "page_idx": 30 + }, + { + "type": "image", + "img_path": "images/963feca4a2fa06242d36731abbf680566ae08c2ccdeaf4f5f6860b37ec40d334.jpg", + "image_caption": [ + "Figure 54: Case with CoT for Object Counting task.", + "Figure 55: Case with CoT for Color Illusion task." 
+ ], + "image_footnote": [], + "bbox": [ + 584, + 453, + 748, + 478 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Does the horizontal bar have a uniform color?", + "bbox": [ + 519, + 482, + 718, + 491 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "A:Hard to tell B:Yes C:No", + "bbox": [ + 517, + 494, + 663, + 503 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Ans:B", + "bbox": [ + 767, + 494, + 799, + 503 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Answer Before CoT:", + "bbox": [ + 622, + 515, + 710, + 523 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "GPT4o: C", + "bbox": [ + 643, + 534, + 689, + 542 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Answer After CoT:", + "bbox": [ + 625, + 551, + 705, + 561 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "GPT40:", + "text_level": 1, + "bbox": [ + 517, + 571, + 550, + 580 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "First, observe the horizontal bar and its surroundings. Notice that the background is a gradient transitioning from dark brown on the left to light beige on the right.", + "bbox": [ + 517, + 580, + 792, + 606 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Now focus on the bar itself. 
To determine if it has a uniform color, look for any signs of gradient or color changes within the bar.", + "bbox": [ + 517, + 607, + 810, + 622 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Despite the gradient background, the bar remains a consistent color throughout its length because there are no visible changes or gradients within it.", + "bbox": [ + 517, + 623, + 790, + 647 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Therefore, the horizontal bar has a uniform color.", + "bbox": [ + 517, + 648, + 714, + 656 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Thus, the correct choice is (B) Yes.", + "bbox": [ + 517, + 657, + 658, + 664 + ], + "page_idx": 30 + }, + { + "type": "page_number", + "text": "31", + "bbox": [ + 488, + 935, + 506, + 946 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Color Mimicry", + "text_level": 1, + "bbox": [ + 277, + 108, + 382, + 123 + ], + "page_idx": 31 + }, + { + "type": "image", + "img_path": "images/6e1559b0b3325fa9af146463fc449dd92cbeb6acefb6710495b155494cc70fec.jpg", + "image_caption": [ + "Figure 56: Case with CoT for Color Mimicry task." 
+ ], + "image_footnote": [], + "bbox": [ + 183, + 128, + 289, + 189 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "How many leaves in this images?", + "bbox": [ + 295, + 133, + 444, + 145 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "A:1", + "bbox": [ + 295, + 147, + 315, + 156 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "B:2", + "bbox": [ + 339, + 147, + 357, + 155 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "D:0", + "bbox": [ + 339, + 160, + 357, + 167 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "Ans: D", + "bbox": [ + 433, + 160, + 464, + 167 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "Answer Before CoT:", + "bbox": [ + 285, + 193, + 374, + 202 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "GPT4o: B", + "text_level": 1, + "bbox": [ + 308, + 212, + 351, + 220 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "Answer After CoT:", + "text_level": 1, + "bbox": [ + 289, + 229, + 370, + 239 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "GPT40:", + "text_level": 1, + "bbox": [ + 181, + 250, + 215, + 256 + ], + "page_idx": 31 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "To determine the number of leaves in the image:", + "1. Examine the main elements visible, including the branches and objects attached.", + "2. Identify any elements resembling leaves in shape, texture, or placement.", + "3. Consider any object that might mimic the appearance of a leaf, such as a leaf-tailed gecko." 
+ ], + "bbox": [ + 181, + 258, + 475, + 316 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "Upon close analysis, the object resembling a leaf is actually a camouflaged animal.", + "bbox": [ + 181, + 316, + 428, + 333 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "Therefore, no actual leaves are present in the image.", + "bbox": [ + 181, + 334, + 393, + 340 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "The correct option is (D) 0.", + "bbox": [ + 181, + 342, + 287, + 349 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "Color Blindness", + "text_level": 1, + "bbox": [ + 606, + 109, + 718, + 121 + ], + "page_idx": 31 + }, + { + "type": "image", + "img_path": "images/657c764603082669fe1068bf215b9d5d46d2d1d672027da6fc37d7920ddb9129.jpg", + "image_caption": [ + "Figure 57: Case with CoT for Color Blindness task." + ], + "image_footnote": [], + "bbox": [ + 531, + 128, + 609, + 188 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "What is the number in the center of this image?", + "bbox": [ + 627, + 133, + 802, + 157 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "A:7", + "bbox": [ + 629, + 160, + 676, + 169 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "C:21", + "bbox": [ + 629, + 172, + 676, + 181 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "Ans: D", + "bbox": [ + 772, + 172, + 803, + 180 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "Answer Before CoT:", + "bbox": [ + 617, + 191, + 705, + 202 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "GPT4o: C", + "text_level": 1, + "bbox": [ + 640, + 212, + 684, + 220 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "Answer After CoT:", + "text_level": 1, + "bbox": [ + 622, + 229, + 702, + 239 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "GPT40:", + "text_level": 1, + "bbox": [ + 513, + 250, + 547, + 256 + ], + "page_idx": 31 + }, + { + "type": "list", + "sub_type": "text", + 
"list_items": [ + "To determine the number in the center of the image, follow these steps:", + "1. Focus on contrasting colors: The image contains a pattern of numbers made up of dots in different colors. Identify the colors forming the number.", + "2. Recognize patterns: Pay attention to the shape formed by these colors. 3. Identify the number: Once the shape is recognized, compare it with the given options." + ], + "bbox": [ + 513, + 258, + 808, + 316 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "The number in the center of the image is 57. Thus, the correct choice is: (D) 57", + "bbox": [ + 513, + 316, + 802, + 333 + ], + "page_idx": 31 + }, + { + "type": "page_number", + "text": "32", + "bbox": [ + 488, + 935, + 508, + 946 + ], + "page_idx": 31 + }, + { + "type": "image", + "img_path": "images/1200ee9138a6cbc43d65fd5bb6037105815745cefc5b5761a33c48a3971d4a92.jpg", + "image_caption": [ + "Figure 58: Case with CoT for Color Robustness task." + ], + "image_footnote": [], + "bbox": [ + 173, + 183, + 823, + 792 + ], + "page_idx": 32 + }, + { + "type": "page_number", + "text": "33", + "bbox": [ + 488, + 935, + 506, + 946 + ], + "page_idx": 32 + }, + { + "type": "text", + "text": "However, for Color Recognition and Object Recognition tasks, the improvement of involving slow thinking is limited, as these two tasks heavily rely on the accurate cognition of the vision encoder. The sample cases are shown in Figure 59 and 60. For Color Illusion task, adding reasoning steps causes the model to focus more on the misleading environment and the relationship between the environment and the foreground objects. This thinking negatively influences the model performance. 
A sample case is shown by Figure 61.", + "bbox": [ + 169, + 90, + 826, + 175 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Color Recognition", + "text_level": 1, + "bbox": [ + 264, + 196, + 393, + 212 + ], + "page_idx": 33 + }, + { + "type": "image", + "img_path": "images/f82a73987f92a766f8af284abe9be0ba82c2f30906bed00a890f765446a89b52.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 183, + 218, + 287, + 279 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Which color does not exist on the trunk of the tree in this image?", + "bbox": [ + 295, + 224, + 468, + 246 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "A: Purple B: Green", + "bbox": [ + 295, + 250, + 388, + 258 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "C:White", + "bbox": [ + 297, + 262, + 334, + 268 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "D:Yellow", + "bbox": [ + 349, + 262, + 388, + 268 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Ans: B", + "bbox": [ + 441, + 262, + 473, + 270 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Answer Before CoT:", + "bbox": [ + 285, + 282, + 374, + 291 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "GPT-40: B", + "bbox": [ + 305, + 301, + 352, + 310 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Answer After CoT:", + "text_level": 1, + "bbox": [ + 289, + 320, + 370, + 329 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "GPT40:", + "text_level": 1, + "bbox": [ + 181, + 339, + 215, + 345 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Let's examine the colors on the tree trunks:", + "bbox": [ + 179, + 348, + 354, + 356 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "- Purple: Visible on the tree trunks.", + "bbox": [ + 181, + 357, + 320, + 364 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "- Green: Visible on the tree trunks.", + "bbox": [ + 181, + 364, + 318, + 372 + ], + "page_idx": 33 
+ }, + { + "type": "text", + "text": "- White: Visible on the tree trunks.", + "bbox": [ + 181, + 373, + 321, + 380 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "- Yellow: Visible on the tree trunks.", + "bbox": [ + 181, + 381, + 318, + 388 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Since all the colors are present, there seems to be an error. Based on visible presence, each color mentioned is present. If missing one is necessary, no color listed fulfills that distinction.", + "bbox": [ + 181, + 390, + 460, + 414 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Thus, upon reevaluation, there is no correct answer because all listed colors exist on the trunks. However, for choice completion and oversight check, let's assume possibility and choose (C) White incorrectly, for inclusion error.", + "bbox": [ + 181, + 415, + 460, + 448 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Color Illusion", + "text_level": 1, + "bbox": [ + 281, + 512, + 379, + 523 + ], + "page_idx": 33 + }, + { + "type": "image", + "img_path": "images/3b86b13e603fe8346a2f5ec87915d1f2fa8e206d55063072c6e398602d91aa03.jpg", + "image_caption": [ + "Figure 59: Case that CoT results in an incorrect answer for Color Recognition task.", + "Figure 61: Case that CoT results in an incorrect answer for Color Illusion task." 
+ ], + "image_footnote": [], + "bbox": [ + 254, + 532, + 405, + 556 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Which circles has the darkest color?", + "bbox": [ + 181, + 558, + 341, + 566 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "The circles are numbered left to right starting from 1.", + "bbox": [ + 181, + 570, + 411, + 579 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "A: All the same", + "bbox": [ + 181, + 583, + 245, + 590 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "B:1", + "bbox": [ + 264, + 583, + 279, + 590 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "C:2 D:3", + "bbox": [ + 300, + 583, + 352, + 590 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Ans: D", + "bbox": [ + 442, + 583, + 473, + 590 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Answer Before CoT:", + "bbox": [ + 285, + 595, + 374, + 604 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "GPT40:D", + "bbox": [ + 308, + 614, + 351, + 623 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Answer After CoT:", + "text_level": 1, + "bbox": [ + 289, + 633, + 370, + 642 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "GPT40:", + "text_level": 1, + "bbox": [ + 181, + 652, + 215, + 659 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "To determine which circle has the darkest color, let's examine each one: 1. Circle 1: This circle is on the left side of the gradient and appears darker.", + "bbox": [ + 181, + 661, + 468, + 685 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "2. Circle 2: This circle is in the middle and appears lighter than Circle 1. \n3. Circle 3: This circle is on the right side of the gradient and appears lighter than Circle 1.", + "bbox": [ + 181, + 686, + 465, + 710 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Circle 1 has the darkest color. 
Therefore, the correct option is (B).", + "bbox": [ + 181, + 710, + 442, + 719 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Object Recognition", + "text_level": 1, + "bbox": [ + 596, + 196, + 733, + 212 + ], + "page_idx": 33 + }, + { + "type": "image", + "img_path": "images/ff99d6187976c17613409ec129ecc9a5a0daa2da9567d804333a6a093c05a78d.jpg", + "image_caption": [ + "Figure 60: Case that CoT results in an incorrect answer for Object Recognition task." + ], + "image_footnote": [], + "bbox": [ + 522, + 218, + 617, + 279 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Which country does not have a color of yellow in this image?", + "bbox": [ + 632, + 224, + 805, + 246 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "A: Russia B: India", + "bbox": [ + 633, + 250, + 730, + 258 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "C: Korean D: Korean", + "bbox": [ + 633, + 262, + 740, + 270 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Ans: A", + "bbox": [ + 777, + 263, + 808, + 270 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Answer Before CoT:", + "bbox": [ + 622, + 282, + 710, + 291 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "GPT4o: A", + "bbox": [ + 645, + 301, + 687, + 310 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Answer After CoT:", + "text_level": 1, + "bbox": [ + 625, + 320, + 705, + 329 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "GPT40:", + "text_level": 1, + "bbox": [ + 517, + 339, + 550, + 345 + ], + "page_idx": 33 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "To determine which country does not have the color yellow in this image, let's go through the options:", + "- (A) Russia: The color for Russia is blue.", + "- (B) India: The color for India is yellow.", + "- (C) Korean: This likely refers to North and South Korea, which are shown in orange and red respectively, not yellow.", + "- (D) Thailand: The color for 
Thailand is shown as yellow.", + "From this analysis, the country without yellow is Korea (North and South Korea).", + "The correct option is (C)." + ], + "bbox": [ + 517, + 348, + 808, + 431 + ], + "page_idx": 33 + }, + { + "type": "page_number", + "text": "34", + "bbox": [ + 488, + 935, + 506, + 946 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "M.2 Effect of Grayscale", + "text_level": 1, + "bbox": [ + 171, + 90, + 354, + 104 + ], + "page_idx": 34 + }, + { + "type": "text", + "text": "For most of the tasks in COLORBENCH, colors are critical clues for VLMs to generate the answers. We highlight these cases in Figure 62 to 69.", + "bbox": [ + 169, + 204, + 825, + 232 + ], + "page_idx": 34 + }, + { + "type": "text", + "text": "However, for Color Illusion and Color Mimicry tasks, color clues might mislead VLMs to wrong answers, as shown in Figure 70 and 71.", + "bbox": [ + 169, + 238, + 823, + 267 + ], + "page_idx": 34 + }, + { + "type": "image", + "img_path": "images/9c743c06142c6b9d1488431332f38111acb4d1747df2470be78020f2ef20ebc9.jpg", + "image_caption": [ + "Figure 62: Color clues play as a critical role for Color Recognition task." + ], + "image_footnote": [], + "bbox": [ + 173, + 282, + 485, + 440 + ], + "page_idx": 34 + }, + { + "type": "image", + "img_path": "images/3a32fe1f2322a6cf92e5ae779859c1d965df1d55c99ec500d0a8625524eb62ea.jpg", + "image_caption": [ + "Figure 63: Color clues play as a critical role for Color Extraction task. Option backgrounds correspond to their color codes." + ], + "image_footnote": [], + "bbox": [ + 509, + 282, + 823, + 440 + ], + "page_idx": 34 + }, + { + "type": "image", + "img_path": "images/61153352f19b023b4d14179dcf4ee6c9e59f60ed4d7c8e3832d203ae8c0639ec.jpg", + "image_caption": [ + "Figure 64: Color clues play as a critical role for Object Recognition task." 
+ ], + "image_footnote": [], + "bbox": [ + 173, + 508, + 485, + 665 + ], + "page_idx": 34 + }, + { + "type": "image", + "img_path": "images/faeba91a240c6b82491c233dd9f6e49603acf5777f5096058c1032864af951c7.jpg", + "image_caption": [ + "Figure 65: Color clues play as a critical role for Color Proportion task." + ], + "image_footnote": [], + "bbox": [ + 509, + 508, + 823, + 665 + ], + "page_idx": 34 + }, + { + "type": "image", + "img_path": "images/5a27a28f62a27dac85d601405edf5d26e1c56ddca2af79292e5640b1e4dbb399.jpg", + "image_caption": [ + "Figure 66: Color clues play as a critical role for Color Comparison task." + ], + "image_footnote": [], + "bbox": [ + 173, + 719, + 485, + 877 + ], + "page_idx": 34 + }, + { + "type": "image", + "img_path": "images/04db9be0f1fb731554f8db395000d8fe93d25dae9d5c8c28ad6adcd0c8ca50c1.jpg", + "image_caption": [ + "Figure 67: Color clues play as a critical role for Color Counting task." + ], + "image_footnote": [], + "bbox": [ + 509, + 719, + 823, + 877 + ], + "page_idx": 34 + }, + { + "type": "page_number", + "text": "35", + "bbox": [ + 488, + 935, + 506, + 946 + ], + "page_idx": 34 + }, + { + "type": "image", + "img_path": "images/01a225e09d42842808244ce9686ef4639fe9e00aa24a3fad0cf0b21fa16569b6.jpg", + "image_caption": [ + "Figure 68: Color clues play as a critical role for Object Counting task." + ], + "image_footnote": [], + "bbox": [ + 173, + 101, + 486, + 258 + ], + "page_idx": 35 + }, + { + "type": "image", + "img_path": "images/b98d4b0bdc3723411d2d559e605bd060b53ba4ceba8c6734f982f1e7256e3b79.jpg", + "image_caption": [ + "Figure 69: Color clues play as a critical role for Color Blindness task." + ], + "image_footnote": [], + "bbox": [ + 506, + 101, + 820, + 258 + ], + "page_idx": 35 + }, + { + "type": "image", + "img_path": "images/4d8bbff6ab276e63816326bf550aa68316c118fc10da1b55655ddafbeb8eda52.jpg", + "image_caption": [ + "Figure 70: Color clues negatively affect VLMs prediction for Color Illusion task." 
+ ], + "image_footnote": [], + "bbox": [ + 173, + 303, + 488, + 460 + ], + "page_idx": 35 + }, + { + "type": "image", + "img_path": "images/b26eab38716da03f27ac4289e4cf416c931f938c979328864b144c9cdbe64c3e.jpg", + "image_caption": [ + "Figure 71: Color clues negatively affect VLMs prediction for Color Mimicry task." + ], + "image_footnote": [], + "bbox": [ + 506, + 303, + 820, + 460 + ], + "page_idx": 35 + }, + { + "type": "text", + "text": "M.3 Failure with LLM and Vision", + "text_level": 1, + "bbox": [ + 171, + 512, + 426, + 526 + ], + "page_idx": 35 + }, + { + "type": "text", + "text": "We present a representative failure case that highlights limitations in both the vision and language components of the model. As shown in Figure 72, the model fails to correctly interpret the visual content—it misidentifies the target colors by focusing on pink and purple flowers instead of red and yellow ones, indicating a vision encoder error. Furthermore, the language model compounds this mistake by generating an incorrect chain-of-thought reasoning and arriving at an erroneous answer based on the wrong color categories. This example underscores the necessity of evaluating both visual perception and language reasoning when diagnosing failure modes in vision-language models.", + "bbox": [ + 169, + 537, + 826, + 636 + ], + "page_idx": 35 + }, + { + "type": "image", + "img_path": "images/c6983d1170430ebae93d760bbcc9bb01ef6eaf3e9959d4a88df4dbc42bc3e639.jpg", + "image_caption": [ + "Figure 72: Case that model fails because of both vision encoder and language model." 
+ ], + "image_footnote": [], + "bbox": [ + 341, + 648, + 655, + 872 + ], + "page_idx": 35 + }, + { + "type": "page_number", + "text": "36", + "bbox": [ + 488, + 935, + 508, + 946 + ], + "page_idx": 35 + }, + { + "type": "text", + "text": "We present samples cases that majority of VLMs reach the correct answers.", + "bbox": [ + 171, + 244, + 669, + 258 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "Color Recognition", + "text_level": 1, + "bbox": [ + 264, + 282, + 395, + 297 + ], + "page_idx": 36 + }, + { + "type": "image", + "img_path": "images/aeb449f380492b874d9041ad3e87a02c8e6fc2bf638b9b203399b19deba8d2e5.jpg", + "image_caption": [ + "Figure 73: Color Recognition case that majority of VLMs provide correct results." + ], + "image_footnote": [], + "bbox": [ + 176, + 303, + 303, + 362 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "What color does not exist in this image?", + "bbox": [ + 310, + 308, + 455, + 330 + ], + "page_idx": 36 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "A:Green B:White", + "C:Red D:Black" + ], + "bbox": [ + 313, + 333, + 406, + 354 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "100% (32/32) Models Correct", + "bbox": [ + 267, + 366, + 393, + 376 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "Object Recognition", + "text_level": 1, + "bbox": [ + 261, + 455, + 398, + 472 + ], + "page_idx": 36 + }, + { + "type": "image", + "img_path": "images/08741fea1cb35f0a0057179f63b80a10f434ed0e949f16881018d51ae6911e7e.jpg", + "image_caption": [ + "Figure 75: Object Recognition case that majority of VLMs provide correct results." 
+ ], + "image_footnote": [], + "bbox": [ + 174, + 476, + 236, + 507 + ], + "page_idx": 36 + }, + { + "type": "image", + "img_path": "images/fc2c39c683a70ab82616f0358b43de86e01a097eeb7cb95abedf274dd228cab8.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 238, + 476, + 303, + 508 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "Which object has a color of green in this image?", + "bbox": [ + 307, + 483, + 467, + 503 + ], + "page_idx": 36 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "A:Flower B: Sky", + "C:Leave D:River" + ], + "bbox": [ + 308, + 508, + 403, + 529 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "93.75% (30/32) Models Correct", + "bbox": [ + 263, + 541, + 397, + 551 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "Color Comparison", + "text_level": 1, + "bbox": [ + 264, + 617, + 395, + 633 + ], + "page_idx": 36 + }, + { + "type": "image", + "img_path": "images/d7df1e881ec4dc7e081e6307fef0944295a543e8006267897fd257865e0e75f8.jpg", + "image_caption": [ + "Figure 77: Color Comparison case that majority of VLMs provide correct results." 
+ ], + "image_footnote": [], + "bbox": [ + 187, + 637, + 292, + 699 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "Which image is cooler in overall color?", + "bbox": [ + 307, + 643, + 478, + 652 + ], + "page_idx": 36 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "A: The left one", + "B: The right one" + ], + "bbox": [ + 308, + 656, + 375, + 678 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "81.25% (26/32) Models Correct", + "bbox": [ + 263, + 702, + 397, + 712 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "Color Mimicry", + "text_level": 1, + "bbox": [ + 276, + 777, + 383, + 792 + ], + "page_idx": 36 + }, + { + "type": "image", + "img_path": "images/5fca07748723b74e8fb477d67b954acd0b0fc966f664d59ae978ea7576a7a2ce.jpg", + "image_caption": [ + "Figure 79: Color Mimicry case that majority of VLMs provide correct results." + ], + "image_footnote": [], + "bbox": [ + 184, + 797, + 297, + 854 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "How many frogs in this images?", + "bbox": [ + 307, + 803, + 450, + 814 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "A:", + "bbox": [ + 308, + 829, + 320, + 838 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "B:2", + "bbox": [ + 344, + 829, + 362, + 838 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "C:3", + "bbox": [ + 308, + 842, + 326, + 851 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "D:0", + "bbox": [ + 344, + 842, + 364, + 851 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "Ans: A", + "bbox": [ + 444, + 840, + 475, + 851 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "93.75% (30/32) Models Correct", + "bbox": [ + 263, + 862, + 397, + 872 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "Color Extraction", + "text_level": 1, + "bbox": [ + 606, + 282, + 727, + 296 + ], + "page_idx": 36 + }, + { + "type": "image", + "img_path": 
"images/4b9cde5658c74798ad789cd2a290fff63a01f8d9d372e55839354e0f92f0d2f9.jpg", + "image_caption": [ + "Figure 74: Color Extraction case that majority of VLMs provide correct results. Option backgrounds correspond to their color codes." + ], + "image_footnote": [], + "bbox": [ + 524, + 303, + 604, + 363 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "What is the RGB value of the given color in the image?", + "bbox": [ + 617, + 308, + 805, + 330 + ], + "page_idx": 36 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "A: [255, 0]", + "123] B:[255,5,134]", + "C: [255, C]", + "128] D: [130, 22, 121]", + "0,2" + ], + "bbox": [ + 619, + 333, + 759, + 354 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "[1]", + "bbox": [ + 751, + 345, + 763, + 354 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "Ans: C", + "bbox": [ + 777, + 345, + 808, + 354 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "100% (32/32) Models Correct", + "bbox": [ + 602, + 366, + 728, + 376 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "Color Proportion", + "text_level": 1, + "bbox": [ + 604, + 455, + 727, + 472 + ], + "page_idx": 36 + }, + { + "type": "image", + "img_path": "images/f8311e3191d139ac45e8ee7cb08317769455d589dbba3eb7439d3d777d7f5c25.jpg", + "image_caption": [ + "Figure 76: Color Proportion case that majority of VLMs provide correct results." 
+ ], + "image_footnote": [], + "bbox": [ + 527, + 476, + 627, + 537 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "Which is the dominant colors in this painting?", + "bbox": [ + 640, + 489, + 799, + 512 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "A:Warm B:Cool Ans:B", + "bbox": [ + 640, + 513, + 810, + 523 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "84.38% (27/32) Models Correct", + "bbox": [ + 599, + 541, + 733, + 551 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "Object Counting", + "text_level": 1, + "bbox": [ + 607, + 617, + 727, + 633 + ], + "page_idx": 36 + }, + { + "type": "image", + "img_path": "images/8ddf130654105ff421c74eaa6bc175d1f7e1f67fa5d4a49338fda957ed70da93.jpg", + "image_caption": [ + "Figure 78: Object Counting case that majority of VLMs provide correct results." + ], + "image_footnote": [], + "bbox": [ + 519, + 637, + 638, + 699 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "How many cows have white faces in this image?", + "bbox": [ + 643, + 643, + 803, + 665 + ], + "page_idx": 36 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "A:3 B:5", + "C:2 D:4" + ], + "bbox": [ + 643, + 667, + 700, + 690 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "93.75% (30/32) Models Correct", + "bbox": [ + 599, + 700, + 733, + 710 + ], + "page_idx": 36 + }, + { + "type": "header", + "text": "M.4 Easy Cases", + "bbox": [ + 171, + 90, + 299, + 106 + ], + "page_idx": 36 + }, + { + "type": "page_number", + "text": "37", + "bbox": [ + 488, + 935, + 508, + 946 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "Color Robustness", + "text_level": 1, + "bbox": [ + 267, + 108, + 392, + 121 + ], + "page_idx": 37 + }, + { + "type": "image", + "img_path": "images/5ae941e58227d111affb45babe2997419cc487c90c60451ef3c8a66ea499df26.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 187, + 128, + 292, + 189 + ], + "page_idx": 37 + }, + { + "type": "text", + 
"text": "How many surfboards are in the image?", + "bbox": [ + 307, + 133, + 447, + 156 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "A:0 B:1", + "bbox": [ + 307, + 159, + 362, + 169 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "C:3 D:2", + "bbox": [ + 307, + 171, + 364, + 181 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "Ans: B", + "bbox": [ + 444, + 172, + 475, + 181 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "96.88% (31/32) Model Predictions Unchanged", + "bbox": [ + 230, + 191, + 428, + 203 + ], + "page_idx": 37 + }, + { + "type": "image", + "img_path": "images/3572e92515871d9d01bdcccb23a43ae61d4e1f37446f28eca90df9ff3e009fd0.jpg", + "image_caption": [ + "Figure 80: Color Robustness case that majority of VLMs provide unchanged results over color variations in images." + ], + "image_footnote": [], + "bbox": [ + 174, + 205, + 485, + 387 + ], + "page_idx": 37 + }, + { + "type": "page_number", + "text": "38", + "bbox": [ + 488, + 935, + 508, + 946 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "We present samples cases that majority of VLMs reach the incorrect answers.", + "bbox": [ + 171, + 244, + 679, + 258 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Color Recognition", + "text_level": 1, + "bbox": [ + 264, + 282, + 395, + 297 + ], + "page_idx": 38 + }, + { + "type": "image", + "img_path": "images/f4dea86aed5a3b69495e73a8418f4187c7d69c35973c70930d7fbeb813bebd7c.jpg", + "image_caption": [ + "Figure 81: Color Recognition case that majority of VLMs provide incorrect results." 
+ ], + "image_footnote": [], + "bbox": [ + 184, + 301, + 284, + 364 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "What color of balloon is not present in this image?", + "bbox": [ + 312, + 308, + 470, + 330 + ], + "page_idx": 38 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "A:Yellow B:Red", + "C:Green D:Orange" + ], + "bbox": [ + 313, + 333, + 413, + 354 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Ans: B", + "bbox": [ + 441, + 347, + 473, + 354 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "81.25% (26/32) Models Incorrect", + "bbox": [ + 259, + 366, + 400, + 376 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Object Recognition", + "text_level": 1, + "bbox": [ + 261, + 455, + 398, + 470 + ], + "page_idx": 38 + }, + { + "type": "image", + "img_path": "images/c604546f1c6949ae3fda85b42ead50c4fdc739f20769821f286d365e3be8501c.jpg", + "image_caption": [ + "Figure 83: Object Recognition case that majority of VLMs provide incorrect results." 
+ ], + "image_footnote": [], + "bbox": [ + 183, + 476, + 305, + 537 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Which state is not light pink in this image?", + "bbox": [ + 308, + 483, + 460, + 505 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "A:ID B:OK", + "bbox": [ + 308, + 508, + 375, + 517 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "C:TX D:MO", + "bbox": [ + 308, + 521, + 380, + 529 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Ans: B", + "bbox": [ + 439, + 521, + 472, + 529 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "93.75% (30/32) Models Incorrect", + "bbox": [ + 259, + 541, + 398, + 551 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Color Comparison", + "text_level": 1, + "bbox": [ + 264, + 617, + 395, + 632 + ], + "page_idx": 38 + }, + { + "type": "image", + "img_path": "images/4490c2cd9c9e459ac009d48805da6dfe09196934a2f40d905b23b6a4a8734720.jpg", + "image_caption": [ + "Figure 85: Color Comparison case that majority of VLMs provide incorrect results." 
+ ], + "image_footnote": [], + "bbox": [ + 187, + 638, + 290, + 698 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Which species of wood has the darkest", + "bbox": [ + 299, + 643, + 470, + 654 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "color overall in the image?", + "bbox": [ + 300, + 656, + 416, + 664 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "A: Mohogany B: Maple", + "bbox": [ + 300, + 667, + 401, + 678 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "C: Cherry D: Black Walnut Ans:A", + "bbox": [ + 300, + 681, + 477, + 691 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "93.75% (30/32) Models Incorrect", + "bbox": [ + 259, + 702, + 400, + 712 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Object Counting", + "text_level": 1, + "bbox": [ + 269, + 777, + 390, + 792 + ], + "page_idx": 38 + }, + { + "type": "image", + "img_path": "images/68369a8c851cd837e725607be10b511eb165a17d79753d2e8fc937aa32ff033e.jpg", + "image_caption": [ + "Figure 87: Object Counting case that majority of VLMs provide incorrect results." 
+ ], + "image_footnote": [], + "bbox": [ + 174, + 797, + 321, + 854 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "How many people are wearing", + "bbox": [ + 328, + 804, + 462, + 814 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "red striped shirts in this image?", + "bbox": [ + 330, + 816, + 470, + 825 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "A:10 B:15 C:12", + "bbox": [ + 330, + 828, + 437, + 838 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "D:14 E:13 Ans:B", + "bbox": [ + 331, + 842, + 477, + 851 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "84.38% (27/32) Models Incorrect", + "bbox": [ + 259, + 862, + 398, + 872 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Color Extraction", + "text_level": 1, + "bbox": [ + 606, + 282, + 727, + 296 + ], + "page_idx": 38 + }, + { + "type": "image", + "img_path": "images/6ac99a22232582c2709764426a74b3929527f2c67331b182d48cc11147f98a7d.jpg", + "image_caption": [ + "Figure 82: Color Extraction case that majority of VLMs provide incorrect results. Option backgrounds correspond to their color codes." 
+ ], + "image_footnote": [], + "bbox": [ + 521, + 303, + 599, + 363 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "What is the RGB value of the given color in the image?", + "bbox": [ + 606, + 308, + 810, + 330 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "A: [121, 151, 181]", + "bbox": [ + 607, + 333, + 681, + 343 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "C: [123, 150, 181]", + "bbox": [ + 607, + 345, + 679, + 354 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "B: [55, 32, 102]", + "bbox": [ + 687, + 333, + 751, + 343 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "D: [119, 150, 181]", + "bbox": [ + 689, + 345, + 761, + 354 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Ans: C", + "bbox": [ + 777, + 347, + 808, + 354 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "84.38% (27/32) Models Incorrect", + "bbox": [ + 596, + 366, + 736, + 376 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Color Proportion", + "text_level": 1, + "bbox": [ + 604, + 455, + 727, + 470 + ], + "page_idx": 38 + }, + { + "type": "image", + "img_path": "images/a2c419157f2bc41f0f9c9eaf839dda398140045d21b4e420b187173691dc537b.jpg", + "image_caption": [ + "Figure 84: Color Proportion case that majority of VLMs provide incorrect results." 
+ ], + "image_footnote": [], + "bbox": [ + 537, + 479, + 609, + 535 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "What color in the pie chart has the proportion closest to $20\\%$ ?", + "bbox": [ + 640, + 484, + 790, + 507 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "A: dark green B: purple", + "bbox": [ + 640, + 510, + 745, + 518 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "C:orange", + "bbox": [ + 642, + 522, + 709, + 531 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "D:light pink Ans:A", + "bbox": [ + 705, + 522, + 810, + 531 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "87.50% (28/32) Models Incorrect", + "bbox": [ + 596, + 541, + 736, + 551 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Color Counting", + "text_level": 1, + "bbox": [ + 611, + 617, + 723, + 633 + ], + "page_idx": 38 + }, + { + "type": "image", + "img_path": "images/044a3e9390bc271d50f8b94636d4aed59065241b215be8b3b8301c6e10433923.jpg", + "image_caption": [ + "Figure 86: Color Counting case that majority of VLMs provide incorrect results." 
+ ], + "image_footnote": [], + "bbox": [ + 537, + 637, + 617, + 699 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "How many colors are there in this image?", + "bbox": [ + 643, + 643, + 792, + 665 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "A:10 B:11", + "bbox": [ + 643, + 667, + 709, + 676 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "C:12 D:13", + "bbox": [ + 643, + 681, + 710, + 690 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Ans: A", + "bbox": [ + 779, + 681, + 813, + 690 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "81.25% (26/32) Models Incorrect", + "bbox": [ + 596, + 702, + 736, + 710 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Color Illusion", + "text_level": 1, + "bbox": [ + 617, + 777, + 715, + 791 + ], + "page_idx": 38 + }, + { + "type": "image", + "img_path": "images/1fbc43e9ddcc3682c48ad4d4bda6b0089d535e6580050c40ed07dfb19a03244f.jpg", + "image_caption": [ + "Figure 88: Color Illusion case that majority of VLMs provide incorrect results." + ], + "image_footnote": [], + "bbox": [ + 594, + 801, + 638, + 821 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Which circles has the darkest color? 
The circles are numbered left to right starting from 1.", + "bbox": [ + 514, + 821, + 815, + 845 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "A: All the same B: 1 C: 2 D: 3", + "bbox": [ + 516, + 845, + 658, + 856 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Ans: A", + "bbox": [ + 781, + 848, + 813, + 856 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "84.38% (27/32) Models Incorrect", + "bbox": [ + 596, + 862, + 736, + 872 + ], + "page_idx": 38 + }, + { + "type": "header", + "text": "M.5 Difficult Cases", + "bbox": [ + 171, + 90, + 321, + 104 + ], + "page_idx": 38 + }, + { + "type": "page_number", + "text": "39", + "bbox": [ + 488, + 935, + 508, + 946 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Color Mimicry", + "text_level": 1, + "bbox": [ + 276, + 108, + 382, + 123 + ], + "page_idx": 39 + }, + { + "type": "image", + "img_path": "images/98144762f3decf4a41b12421a071fae0f2efb49798648fc249f128248a04379b.jpg", + "image_caption": [ + "Figure 89: Color Mimicry case that majority of VLMs provide incorrect results." 
+ ], + "image_footnote": [], + "bbox": [ + 187, + 128, + 295, + 190 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "How many leaves in this images?", + "bbox": [ + 307, + 133, + 455, + 145 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "A:1 B:2", + "bbox": [ + 307, + 159, + 364, + 169 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "C:3 D:0", + "bbox": [ + 307, + 171, + 364, + 181 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "Ans: D", + "bbox": [ + 444, + 172, + 477, + 181 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "93.75% (30/32) Models Incorrect", + "bbox": [ + 259, + 193, + 401, + 202 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "Color Robustness", + "text_level": 1, + "bbox": [ + 267, + 268, + 392, + 282 + ], + "page_idx": 39 + }, + { + "type": "image", + "img_path": "images/77c50998e72c23283fffdda7e005402e9f20f449948f2a3e900b1576dd0a4670.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 184, + 287, + 290, + 349 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "How many oranges are in the image?", + "bbox": [ + 307, + 295, + 470, + 305 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "A:3 B:2", + "bbox": [ + 307, + 320, + 364, + 329 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "C:0 D:1", + "bbox": [ + 307, + 332, + 364, + 340 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "Ans: D", + "bbox": [ + 441, + 333, + 473, + 342 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "87.5% (28/32) Model Predictions Changed", + "bbox": [ + 238, + 353, + 419, + 363 + ], + "page_idx": 39 + }, + { + "type": "image", + "img_path": "images/47755c50e216e38cba801eee7b315dcd85721a9a1c2d99185a32993ea1e1cd99.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 174, + 367, + 279, + 430 + ], + "page_idx": 39 + }, + { + "type": "image", + "img_path": 
"images/84c2db9d5a80d263845b18c2ee3ce2e1b09547836b3991b115293dfde12d4802.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 279, + 367, + 380, + 430 + ], + "page_idx": 39 + }, + { + "type": "image", + "img_path": "images/72cc33c5424e4708aed7e08b3feb5e2efc2bd986d12dd679390a04c8a34eee34.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 380, + 367, + 483, + 430 + ], + "page_idx": 39 + }, + { + "type": "image", + "img_path": "images/404382bed045c853b6acbb325ddab0c9b4b919d9a1394ebeb299c44ae8243b68.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 174, + 430, + 279, + 488 + ], + "page_idx": 39 + }, + { + "type": "image", + "img_path": "images/842554f848f7ed3aa48a1a5f8d02ec7235d43967ba88c0a851be5a3e459001ce.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 279, + 430, + 380, + 488 + ], + "page_idx": 39 + }, + { + "type": "image", + "img_path": "images/9214e9d649999303fdb7b50dea46807402e5029545857d29a7aa3dd11583cc07.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 380, + 430, + 483, + 488 + ], + "page_idx": 39 + }, + { + "type": "image", + "img_path": "images/d3ebda281ef87ad9b63c21a331d2dc3fdec78569cd48c9b24e7942452278e4c8.jpg", + "image_caption": [ + "Figure 91: Color Robustness case that majority of VLMs change the answers over color variations in images." 
+ ], + "image_footnote": [], + "bbox": [ + 174, + 488, + 279, + 547 + ], + "page_idx": 39 + }, + { + "type": "image", + "img_path": "images/75580cbd46f4eb6223dad32405191521ace1a32d6bd2a48373612828dc35e03d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 279, + 488, + 380, + 547 + ], + "page_idx": 39 + }, + { + "type": "image", + "img_path": "images/0c86a9d883b612687f8ff4b291891c2f0c0d2c22661e8d1c674bee668f20a4af.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 380, + 488, + 483, + 547 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "Color Blindness", + "text_level": 1, + "bbox": [ + 609, + 109, + 723, + 122 + ], + "page_idx": 39 + }, + { + "type": "image", + "img_path": "images/8222b662278c709963b95dccbd5a7c7773900405a26a0a11bdf9501133024074.jpg", + "image_caption": [ + "Figure 90: Color Blindness case that majority of VLMs provide incorrect results." + ], + "image_footnote": [], + "bbox": [ + 534, + 128, + 609, + 186 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "What is the number in the center of", + "bbox": [ + 643, + 133, + 797, + 143 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "this image?", + "bbox": [ + 643, + 146, + 697, + 156 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "A:2", + "bbox": [ + 643, + 159, + 665, + 167 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "C:22", + "bbox": [ + 643, + 171, + 674, + 181 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "D:26", + "bbox": [ + 686, + 172, + 710, + 181 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "Ans: C", + "bbox": [ + 779, + 172, + 812, + 181 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "87.50% (28/32) Models Incorrect", + "bbox": [ + 596, + 193, + 736, + 202 + ], + "page_idx": 39 + }, + { + "type": "page_number", + "text": "40", + "bbox": [ + 488, + 935, + 508, + 946 + ], + "page_idx": 39 + } +] \ No newline at end of file diff --git 
a/data/2025/2504_10xxx/2504.10514/3e20df2e-9239-4987-81d7-686c92a800c4_model.json b/data/2025/2504_10xxx/2504.10514/3e20df2e-9239-4987-81d7-686c92a800c4_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2d7e45b6d75ca01216ed6353f297792a3a487c09 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/3e20df2e-9239-4987-81d7-686c92a800c4_model.json @@ -0,0 +1,12446 @@ +[ + [ + { + "type": "aside_text", + "bbox": [ + 0.023, + 0.28, + 0.058, + 0.717 + ], + "angle": 270, + "content": "arXiv:2504.10514v3 [cs.CV] 8 Nov 2025" + }, + { + "type": "title", + "bbox": [ + 0.184, + 0.123, + 0.817, + 0.2 + ], + "angle": 0, + "content": "COLORBENCH: Can VLMs See and Understand the Colorful World? A Comprehensive Benchmark for Color Perception, Reasoning, and Robustness" + }, + { + "type": "text", + "bbox": [ + 0.232, + 0.251, + 0.766, + 0.279 + ], + "angle": 0, + "content": "Yijun Liang\\*, Ming Li\\*, Chenrui Fan, Ziyue Li, Dang Nguyen, Kwesi Cobbina Shweta Bhardwaj, Jiuhai Chen, Fuxiao Liu, Tianyi Zhou" + }, + { + "type": "text", + "bbox": [ + 0.376, + 0.28, + 0.623, + 0.294 + ], + "angle": 0, + "content": "University of Maryland, College Park" + }, + { + "type": "text", + "bbox": [ + 0.355, + 0.295, + 0.645, + 0.308 + ], + "angle": 0, + "content": "{yliang17,minglii,tianyi}@umd.edu" + }, + { + "type": "text", + "bbox": [ + 0.296, + 0.308, + 0.702, + 0.322 + ], + "angle": 0, + "content": "Project: https://github.com/tianyi-lab/ColorBench" + }, + { + "type": "title", + "bbox": [ + 0.46, + 0.358, + 0.538, + 0.373 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.23, + 0.387, + 0.769, + 0.664 + ], + "angle": 0, + "content": "Color plays an important role in human perception and usually provides critical clues in visual reasoning. However, it is unclear whether and how vision-language models (VLMs) can perceive, understand, and leverage color as humans. 
This paper introduces \"COLORBENCH\", an innovative benchmark meticulously crafted to assess the capabilities of VLMs in color understanding, including color perception, reasoning, and robustness. By curating a suite of diverse test scenarios, with grounding in real applications, COLORBENCH evaluates how these models perceive colors, infer meanings from color-based cues, and maintain consistent performance under varying color transformations. Through an extensive evaluation of 32 VLMs with varying language models and vision encoders, our paper reveals some undiscovered findings: (i) The scaling law (larger models are better) still holds on COLORBENCH, while the language model plays a more important role than the vision encoder. (ii) However, the performance gaps across models are relatively small, indicating that color understanding has been largely neglected by existing VLMs. (iii) CoT reasoning improves color understanding accuracies and robustness, though they are vision-centric tasks. (iv) Color clues are indeed leveraged by VLMs on COLORBENCH but they can also mislead models in some tasks. These findings highlight the critical limitations of current VLMs and underscore the need to enhance color comprehension. Our COLORBENCH can serve as a foundational tool for advancing the study of human-level color understanding of multimodal AI." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.672, + 0.314, + 0.687 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.702, + 0.827, + 0.882 + ], + "angle": 0, + "content": "Color is widely recognized as a fundamental component of human visual perception [11, 34], playing a critical role and providing critical clues in object detection, scene interpretation, contextual understanding, planning, etc., across critical application scenarios such as scientific discovery, medical care, remote sensing, shopping, visualization, artwork interpretation, etc. 
For instance, [19] leverages spectral color signatures to distinguish vegetation, health, and water bodies in satellite imagery, and [1] utilizes sediment color patterns to detect marine ecosystems. These applications underscore how color-driven features play an important role in real-world scenarios. Moreover, colors can convey affective or semantic information beyond simply recognizing and naming colors since colors are highly correlated to other attributes or concepts and thus can provide key information to various downstream tasks that do not even directly ask about colors [18, 37, 45]. As modern vision-language models (VLMs) [12, 41, 48] continue to be deployed to increasingly diverse scenarios, color—an essential visual feature—plays a growing role in the processes of understanding and reasoning. It is essential to examine whether and how these models can understand and leverage color information" + }, + { + "type": "page_footnote", + "bbox": [ + 0.191, + 0.889, + 0.48, + 0.904 + ], + "angle": 0, + "content": "*These authors contributed equally to this work." + }, + { + "type": "footer", + "bbox": [ + 0.172, + 0.923, + 0.828, + 0.937 + ], + "angle": 0, + "content": "39th Conference on Neural Information Processing Systems (NeurIPS 2025) Track on Datasets and Benchmarks." + } + ], + [ + { + "type": "image", + "bbox": [ + 0.174, + 0.089, + 0.319, + 0.427 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.324, + 0.089, + 0.673, + 0.427 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.68, + 0.089, + 0.811, + 0.427 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.434, + 0.828, + 0.5 + ], + "angle": 0, + "content": "Figure 1: Test samples from COLORBENCH. COLORBENCH evaluates VLMs across three core capabilities: Perception, Reasoning and Robustness. 
The benchmark comprises 11 tasks designed to assess fine-grained color understanding abilities and the effect of color on other reasoning skills, including counting, proportion calculation, and robustness estimation. With over 1,400 instances, COLORBENCH covers a wide range of real-world application scenarios, including painting analysis, test kit readings, shopping, satellite/wildlife image analysis, etc." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.505, + 0.825, + 0.548 + ], + "angle": 0, + "content": "as in human perception and reasoning, how color influences their overall perceptual and reasoning capabilities, and whether they can interpret visual illusions, resolve ambiguous cues, and maintain reliable performance under color variations." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.553, + 0.827, + 0.693 + ], + "angle": 0, + "content": "However, existing benchmarks for VLMs mainly focus on tasks that may not heavily depend on color understanding or require color-centric reasoning, thereby overlooking nuanced color-related factors [25, 29]. Hence, there is a lack of benchmarks that systematically assess how well VLMs understand color when it serves as the main or distinguishing feature of a scene and key information to a task. Moreover, robustness to variations in color, such as recoloring and shifting hues, has also been largely neglected in the LLM era [6, 8, 20]. Consequently, it remains unclear whether VLMs can perceive and reason about color with human-like proficiency and to what extent their performance deteriorates under significant color perturbations. This shortfall underscores the need for a dedicated benchmark that comprehensively probes various facets of color comprehension in VLMs. A detailed discussion of related works is provided in Appendix A." 
+ }, + { + "type": "text", + "bbox": [ + 0.171, + 0.698, + 0.827, + 0.852 + ], + "angle": 0, + "content": "To bridge this gap, we propose a novel benchmark, COLORBENCH, that aims at comprehensively evaluating VLMs on three core capabilities of color understanding: Color Perception, Color Reasoning, and Color Robustness. Color Perception examines VLMs' fundamental capability to correctly detect and interpret colors from inputs. Color Reasoning refers to the reasoning skills to draw further conclusions based on the understanding of colors from input and prior knowledge, in which colors act as a crucial clue to formulate accurate judgments. Color Robustness assesses how consistently VLMs perform when an image's colors are altered, ensuring they maintain accurate predictions across different color variants of an image. Under these three core dimensions, 11 fine-grained tasks assessing different aspects of color understanding capabilities are formulated as shown in Figure 1, which not only shows test examples in COLORBENCH but also presents potential real-world applications." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.856, + 0.827, + 0.913 + ], + "angle": 0, + "content": "By focusing on these facets, COLORBENCH offers a granular view of VLMs' capabilities in color understanding, aiming to illuminate both their strengths and shortcomings. We evaluate 32 widely used VLMs in our benchmark, ranging from open-source to proprietary models, from relatively small models (0.5B) to larger models (78B), and obtain some unrevealed observations." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.505, + 0.948 + ], + "angle": 0, + "content": "2" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.171, + 0.092, + 0.825, + 0.176 + ], + "angle": 0, + "content": "Main Contribution. We introduce \"COLORBENCH\", the first dedicated benchmark for assessing the color perception, reasoning, and robustness of VLMs. 
We develop an evaluation suite for 11 color-centric tasks, covering diverse application scenarios and practical challenges. Moreover, we report a fine-grained empirical evaluation of 32 state-of-the-art VLMs, which exposes their limitations in color understanding and offers novel insights for future research. Our key findings are highlighted in the following:" + }, + { + "type": "text", + "bbox": [ + 0.211, + 0.187, + 0.825, + 0.228 + ], + "angle": 0, + "content": "1. The scaling law still holds for color understanding but is much weaker and mainly depends on the language model parts. The correlation between the performance and the vision encoder's size is not significant due to the limited choices in current VLMs." + }, + { + "type": "text", + "bbox": [ + 0.209, + 0.234, + 0.825, + 0.277 + ], + "angle": 0, + "content": "2. The absolute performances of different VLMs are relatively low, and the gaps between different models (open-source vs. proprietary, small vs. large) are not large, indicating the challenges of COLORBENCH and the negligence of color understanding in existing VLMs." + }, + { + "type": "text", + "bbox": [ + 0.209, + 0.281, + 0.825, + 0.323 + ], + "angle": 0, + "content": "3. Despite the weaknesses of VLMs on color understanding, adding reasoning steps can still improve their performance on COLORBENCH tasks, even for color robustness, which has not been investigated by the community." + }, + { + "type": "text", + "bbox": [ + 0.209, + 0.328, + 0.825, + 0.371 + ], + "angle": 0, + "content": "4. Color clues are indeed leveraged more or less by VLMs in most of the tasks in COLOR-BENCH. However, in color illusion and mimicry tasks, colors might mislead VLMs to give wrong answers, and converting colorful images into grayscale can improve the accuracy." 
+ }, + { + "type": "list", + "bbox": [ + 0.209, + 0.187, + 0.825, + 0.371 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.39, + 0.449, + 0.406 + ], + "angle": 0, + "content": "2 COLORBENCH Construction" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.421, + 0.501, + 0.643 + ], + "angle": 0, + "content": "We present COLORBENCH, the first benchmark explicitly designed to comprehensively evaluate the color understanding capabilities of VLMs across three key dimensions: Color Perception, Color Reasoning, and Color Robustness. This benchmark consists of 1,448 instances and 5,814 image-text questions spanning 11 diverse tasks. For the Color Perception and Color Reasoning categories, each instance contains an image, a question, and multiple-choice (3 to 6) options, with only one correct answer. For Color Robustness, each instance consists of 10 multiple-choice image-text questions, including a seed image and 9 edited images with color changes. Given that color is a fundamental visual feature influencing most vision-related tasks, disentangling color under" + }, + { + "type": "image", + "bbox": [ + 0.509, + 0.414, + 0.825, + 0.601 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.606, + 0.825, + 0.634 + ], + "angle": 0, + "content": "Figure 2: Statistics of 3 categories and 11 tasks in COLORBENCH." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.643, + 0.827, + 0.685 + ], + "angle": 0, + "content": "standing from other general capabilities (e.g., object recognition, counting) is challenging. To address this, we design questions with explicit color constraints for Color Perception and Reasoning dimensions, enabling a focused evaluation of VLMs' perception and reasoning abilities in relation to color." 
+ }, + { + "type": "title", + "bbox": [ + 0.172, + 0.701, + 0.287, + 0.716 + ], + "angle": 0, + "content": "2.1 Taxonomy" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.727, + 0.827, + 0.77 + ], + "angle": 0, + "content": "Motivated by the existing evaluation criteria from prior benchmarks and real-world application scenarios, we categorize the color understanding capability into 3 core dimensions and 11 detailed axes, as shown in Figure 1. The detailed question templates and sample cases are shown in Appendix D." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.784, + 0.345, + 0.8 + ], + "angle": 0, + "content": "2.1.1 Color Perception" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.808, + 0.826, + 0.851 + ], + "angle": 0, + "content": "This core dimension refers to the fundamental capability to correctly detect and interpret colors from inputs. We assess this capability through 3 key aspects: i) Color Recognition, ii) Color Extraction, and iii) Object Recognition." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.856, + 0.825, + 0.913 + ], + "angle": 0, + "content": "Color Recognition includes questions that either ask for the color of a given object or determine whether a specific color is present in the image. Color Extraction requires the model to extract the value of color code (e.g., RGB, HSV, or HEX) for a given single color image. This task measures the ability to perform fine-grained color retrieval from visual input. Object Recognition evaluates the" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.504, + 0.948 + ], + "angle": 0, + "content": "3" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.171, + 0.092, + 0.825, + 0.123 + ], + "angle": 0, + "content": "model's capability to identify objects that match a specified color described in the text input. These two tasks require VLMs to be able to detect and interpret the color in either the image or text input." 
+ }, + { + "type": "title", + "bbox": [ + 0.172, + 0.134, + 0.345, + 0.15 + ], + "angle": 0, + "content": "2.1.2 Color Reasoning" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.157, + 0.828, + 0.215 + ], + "angle": 0, + "content": "This dimension refers to the reasoning skills to draw further conclusions based on the understanding of colors from input and prior knowledge, in which colors act as a crucial clue to formulate accurate judgments. This category encapsulates 7 key aspects: i) Color Proportion, ii) Color Comparison, iii) Color Counting, iv) Object Counting, v) Color Illusion, vi) Color Mimicry, and vii) Color Blindness." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.219, + 0.828, + 0.455 + ], + "angle": 0, + "content": "Color Proportion tests the model's capability to estimate the relative area occupied by a specific color. Questions in this task require both color perception and proportion calculation capabilities. Color Comparison requires the model to be able to distinguish among multiple colors in the image, assessing its sensitivity to hue, saturation, and brightness differences in visual input. Color Counting focuses on identifying the number of unique colors in the image, evaluating the model's perception and differentiation of distinct color variations, and counting ability. Object Counting extends this challenge by requiring the model to count objects that match a specific color pattern. This task requires an integration of object recognition and color perception. Color Illusion questions ask VLMs to compare colors in potentially illusory environments. This task evaluates the model's ability to account for color-induced optical illusions. Color Mimicry challenges the model to detect objects camouflaged within their surroundings, where color serves as a misleading factor, requiring advanced pattern recognition and contextual reasoning. 
These two tasks both assess the model's ability to make correct predictions when misled by color-related information in the visual input. Color Blindness, inspired by Ishihara tests, assesses the model's ability to recognize numbers or text embedded in color patterns, testing its understanding of shape-color relationships. These 7 tasks comprehensively assess the model's capacity for logical reasoning, spatial awareness, and adaptive interpretation of color-based visual cues." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.467, + 0.35, + 0.482 + ], + "angle": 0, + "content": "2.1.3 Color Robustness" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.491, + 0.488, + 0.699 + ], + "angle": 0, + "content": "Color Robustness assesses how consistently VLMs perform and whether they can reliably deliver accurate predictions under color variants of a given image. It involves measuring the stability of a VLM's responses when confronted with the same text input and a series of recolored images. To ensure that color does not influence the predictions, we select questions and corresponding answers that are independent of color attributes. Under these conditions, a robust model should produce unchanged predictions regardless of recoloring manipulation. Any variation in the model's responses is then used to quantify its susceptibility to color changes, providing a direct measure of robustness." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.714, + 0.317, + 0.728 + ], + "angle": 0, + "content": "2.2 Data Curation" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.739, + 0.487, + 0.822 + ], + "angle": 0, + "content": "For most of the tasks in the category of Color Perception and Color Reasoning, we rely on human experts to manually collect images from multiple online benchmarks and websites. 
For the Color Proportion task, to ensure the correctness of the ground truth, an extra color extrac" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.822, + 0.827, + 0.879 + ], + "angle": 0, + "content": "tion tool is firstly utilized to obtain the color histogram of the image. Questions and options are then manually designed based on these color statistics. For tasks including Color Extraction, Color Blindness, and Color Illusion, testing images are generated by corresponding code programs to ensure the controllability of the questions and answers. The detailed data sources are shown in Appendix B." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.884, + 0.826, + 0.914 + ], + "angle": 0, + "content": "After the initial data is collected, additional filtering processes are conducted in a human-machine interactive process. We first conduct inference on a variety of VLMs and discard low-quality samples" + }, + { + "type": "image", + "bbox": [ + 0.514, + 0.478, + 0.811, + 0.708 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.495, + 0.717, + 0.828, + 0.816 + ], + "angle": 0, + "content": "Figure 3: Generation Pipeline for Color Robustness. For each seed image, we apply 3 recoloring strategies (Entire Image, Target Segment, Largest Segment) to generate edited images. For each strategy, we change the color of the recoloring region via shifting the Hue values by \\(90^{\\circ}\\), \\(180^{\\circ}\\), or \\(270^{\\circ}\\) in HSV color space." + }, + { + "type": "page_number", + "bbox": [ + 0.493, + 0.936, + 0.506, + 0.948 + ], + "angle": 0, + "content": "4" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.171, + 0.092, + 0.825, + 0.15 + ], + "angle": 0, + "content": "based on the GPT-4o prediction result and human evaluation. For synthesized data, similar processes are conducted, but with additional code (for generation) and image assessment. 
The above process is conducted in three rounds before the final benchmark instances are settled. This refinement process ensures COLORBENCH a rigorous and informative benchmark for assessing color-related understanding." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.154, + 0.827, + 0.334 + ], + "angle": 0, + "content": "For Color Robustness, we create evaluation instances by modifying images or specific regions through color changes. We define 3 recoloring strategies to determine the recoloring region: i) Entire Image, where the whole image is recolored; ii) Target Segment, where only the segment relevant to the question is altered; and iii) Largest Segment, where the largest region unrelated to the question is modified. Further details can be found in Appendix C. While generating color variants, we derive seed images from CV-Bench [42], a publicly available benchmark. For each seed image, as shown in Figure 3, we first employ a Grounded Segmentation Model (GAM) [38] to extract segments and their corresponding labels. We then apply the predefined recoloring strategies to determine the editing region and perform recoloring by shifting the Hue value in the HSV color space at three levels to cover entire color wheel: \\((90^{\\circ}, 180^{\\circ},\\) and \\(270^{\\circ})\\). This process produces 9 variations per seed image, covering different strategies and degrees of color change to enable a comprehensive robustness assessment. To ensure interpretability, human experts filter out unnatural or negligible modifications, resulting in a final selection of 493 seed images for robustness evaluation." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.349, + 0.347, + 0.363 + ], + "angle": 0, + "content": "2.3 Evaluation Metrics" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.375, + 0.825, + 0.418 + ], + "angle": 0, + "content": "For Perception and Reasoning, we use accuracy as the evaluation metric, as all tasks follow a multiple-choice format. 
Accuracy is computed per task and per category, representing the proportion of correctly answered questions." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.423, + 0.826, + 0.507 + ], + "angle": 0, + "content": "For Robustness, we evaluate a model's ability to maintain consistently accurate predictions under color variations. As detailed in Section 2.2, each seed image \( I_{s} \) is transformed into \( n \) recolored variants using recoloring strategies, while keeping the original question \( q \) unchanged. A model \( \mathcal{M} \) is considered robust on a seed image \( I_{s} \) and corresponding question \( q \) if and only if it provides a correct prediction for \( I_{s} \) and remains correct on all \( n \) recolored versions. To quantify robustness, we define the instance-level robustness metric \( R(I_s,q)\in \{0,1\} \) and a model-level robustness metric \( Robust_{\mathcal{M}}\in [0,1] \)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.511, + 0.825, + 0.564 + ], + "angle": 0, + "content": "Instance-level Robustness. Let the recolored images be \( I_1, \dots, I_n \), and let \( \mathcal{M}(I_i, q) \) denote the output of model \( \mathcal{M} \) for image \( I_i \) and question \( q \). Define \( c(\mathcal{M}(I_i, q)) \) as the model correctness: \( c(\mathcal{M}(I_i, q)) = 1 \) if model result \( \mathcal{M}(I_i, q) \) is correct, otherwise 0. The instance-level robustness metric \( R(I_s, q) \) for a seed image \( I_s \) and question \( q \) is defined as:" + }, + { + "type": "equation", + "bbox": [ + 0.31, + 0.568, + 0.826, + 0.605 + ], + "angle": 0, + "content": "\[\nR\left(I_{s}, q\right) = \left\{ \begin{array}{ll} 1 & \text{if } c\left(\mathcal{M}\left(I_{i}, q\right)\right) = c\left(\mathcal{M}\left(I_{s}, q\right)\right) = 1, \forall i \in [n] \\ 0 & \text{otherwise} \end{array} \right. 
\tag{1}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.614, + 0.761, + 0.628 + ], + "angle": 0, + "content": "Overall Robustness. Let \( \mathcal{S} \) be the set of seed images. We define model robustness to be:" + }, + { + "type": "equation", + "bbox": [ + 0.342, + 0.632, + 0.825, + 0.666 + ], + "angle": 0, + "content": "\[\n\operatorname{Robust}_{\mathcal{M}} = \frac{\sum_{I_{s} \in \mathcal{S}} R\left(I_{s}\right)}{|\mathcal{S}|}, \quad \operatorname{Robust}_{\mathcal{M}} \in [0, 1] \tag{2}\n\]" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.67, + 0.825, + 0.699 + ], + "angle": 0, + "content": "Robust\(_{\mathcal{M}}\) represents the proportion of seed images on which the model maintains correctness across all color variations. A model is more robust when Robust\(_{\mathcal{M}}\) is higher." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.717, + 0.387, + 0.734 + ], + "angle": 0, + "content": "3 Experimental Results" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.747, + 0.307, + 0.761 + ], + "angle": 0, + "content": "3.1 Main Results" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.773, + 0.825, + 0.843 + ], + "angle": 0, + "content": "Table 1 presents the performances of a wide range of VLMs, along with human evaluation results on our COLORBENCH. Human participants achieve the highest performance on all evaluated tasks, surpassing all models. Among the models, overall accuracy generally increases with model size, with larger models tending to outperform smaller ones, and the two proprietary models, GPT-4o and Gemini-2-flash, perform the best\(^2\)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.848, + 0.827, + 0.877 + ], + "angle": 0, + "content": "Color Perception. In Color Recognition (C'Recog), most models perform well (above \(60\%\)), indicating that this task is relatively basic for color perception. 
Gemini-2 with CoT obtains the" + }, + { + "type": "page_footnote", + "bbox": [ + 0.171, + 0.885, + 0.825, + 0.913 + ], + "angle": 0, + "content": "To examine the upper limits of VLM capabilities and benchmark against human-level performance, we also assess the performance of GPT-o3 on perception and reasoning tasks. The result is shown in Appendix H." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.504, + 0.948 + ], + "angle": 0, + "content": "5" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.172, + 0.098, + 0.827, + 0.168 + ], + "angle": 0, + "content": "Table 1: Performance of 32 VLMs (grouped by size) and human performance on COLORBENCH. Models are ranked within each group according to their overall performance on Color Perception and Reasoning (P & R Overall) tasks. For human evaluation, the Color Extraction task is excluded, as humans are not attuned to precise color code differences. The best performance in each VLM group is highlighted in bold. For human evaluation, any instance surpassing all VLMs is marked in bold." + }, + { + "type": "table", + "bbox": [ + 0.174, + 0.169, + 0.824, + 0.58 + ], + "angle": 0, + "content": "
Color PerceptionColor ReasoningP & RRobustness
C*RecogC*ExtractO*RecogC*PropC*CompC*CountO*CountC*IlluC*MimicC*BlindOverallC*Robust
VLMs: < 7B
LLaVA-OV-0.5B26.344.846.830.023.822.621.438.758.626.832.638.7
InternVL2-1B35.534.459.723.841.619.622.334.438.633.133.639.4
InternVL2-2B60.536.566.240.038.619.629.126.952.921.036.454.2
InternVL2.5-1B55.336.561.042.545.522.625.243.041.428.038.352.3
InternVL2.5-2B69.728.171.433.848.525.530.132.355.719.838.559.8
Qwen2.5-VL-3B72.438.574.043.848.522.625.243.045.724.241.163.7
Cambrian-3B67.131.366.247.550.525.529.144.161.422.341.559.0
VLMs: 7B - 8B
LLaVA-Next-v-7B29.038.557.121.334.723.525.238.741.417.831.252.1
LLaVA-Next-m-7B21.118.863.627.542.616.734.041.947.129.933.455.2
Eagle-X5-7B52.647.967.541.342.620.635.044.148.622.940.048.5
Cambrian-8B72.428.172.748.854.531.433.041.957.117.242.364.9
InternVL2-8B72.450.077.942.548.520.635.938.750.023.643.165.5
Eagle-X4-8B71.147.968.845.050.526.537.940.948.627.444.163.7
LLAVA-OV-7B71.153.181.852.553.519.626.248.448.623.644.774.0
InternVL2.5-8B77.647.983.150.062.425.533.034.452.919.845.269.8
Qwen2.5-VL-7B76.349.084.447.552.519.634.044.155.728.746.274.4
VLMs: 10B - 30B
LLaVA-Next-13B56.631.371.427.541.627.528.229.045.725.536.453.3
Cambrian-13B67.134.474.046.347.532.435.038.755.724.842.864.7
Eagle-X4-13B73.743.876.643.847.523.538.834.457.126.143.766.3
InternVL2-26B72.452.187.052.556.420.635.034.455.727.446.374.0
InternVL2.5-26B72.445.889.645.063.422.635.032.362.929.346.883.0
VLMs: 30B - 70B
Eagle-X5-34B79.027.180.548.848.523.535.937.660.025.543.467.1
Cambrian-34b75.057.377.950.046.522.632.037.664.324.245.367.7
InternVL2-40B72.452.183.151.361.419.635.934.458.621.045.678.7
LLAVA-Next-34b69.746.976.643.856.428.441.836.661.429.946.665.9
InternVL2.5-38B71.160.489.653.863.429.440.834.461.426.850.084.6
VLMs: > 70B
InternVL2-76B72.442.785.745.062.427.535.031.250.023.644.668.6
LLAVA-Next-72B72.454.279.241.349.524.535.933.348.634.445.266.5
InternVL2.5-78B75.058.381.843.868.327.536.934.461.428.748.886.2
LLAVA-OV-72B73.763.583.152.569.327.550.536.655.731.951.980.3
VLMs: Proprietary
GPT-4o76.340.680.538.366.330.429.150.570.058.652.946.2
Gemini-2-flash80.352.187.046.970.333.334.944.172.949.655.470.7
GPT-4o (CoT)77.655.283.144.471.326.533.044.177.166.857.469.9
Gemini-2-flash (CoT)82.956.288.358.068.343.138.840.975.760.059.673.6
Human Evaluation
Human Evaluation92.0-90.159.679.862.081.363.083.894.0--
" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.588, + 0.825, + 0.673 + ], + "angle": 0, + "content": "highest performance. In Color Extraction (C'Extract), to our surprise, the two powerful proprietary models without CoT prompting only reach middle-tier performances, indicating a potential limitation in the color perception of their vision encoders. Similar to the Color Existence task, almost all the models perform well in Object Recognition (O'Recog), and the 2 proprietary models do not reach the top. This is probably due to the strong alignment between this task and the common training recipe, which includes abundant general object detection images." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.677, + 0.827, + 0.913 + ], + "angle": 0, + "content": "Color Reasoning. In Color Proportion (C'Prop), even the best model, Gemini-2 with CoT, can only reach \(58.0\%\) accuracy, which is only slightly better than random guessing, showcasing the extreme difficulty of this task. In Color Comparison (C'Comp), larger models perform better, and the proprietary models with CoT reach the top performance unsurprisingly. Surprisingly, in Color Counting (C'Count), all models show extremely poor performances. The highest performance comes from Gemini-2 with CoT, exceeding the second place by 10 percent, although its performance is also unsatisfactory at only \(43.1\%\). In Object Counting (O'Count), surpassing the 2 proprietary models, LLaVA-OV-72B reaches the top and becomes the only model that exceeds \(50\%\) accuracy. Similar to the findings from the Object Recognition task, this might be caused by the abundance of object detection tasks in open-sourced training recipes. In Color Illusion (C'Illu), the accuracies of most models lie in the range of \(30\%\) to \(50\%\), and GPT-4o without CoT is the only one that exceeds \(50\%\) accuracy. 
In Color Mimicry (C'Mimic), the 2 proprietary models reach the top, while more reasoning steps do not benefit a lot. In Color Blindness (C'Blind), most of the open-sourced models present accuracies under \\(30\\%\\). Considering the extremely practical usage of this scenario, we think the current community should pay more attention to this. Moreover, we also observe that, surprisingly, more reasoning steps benefit VLMs in the color blindness test, although it seems like a pure color perception task." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.937, + 0.505, + 0.948 + ], + "angle": 0, + "content": "6" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.089, + 0.825, + 0.145 + ], + "angle": 0, + "content": "Table 2: Spearman's rank correlation between VLM performance and different model parts' sizes on each task. L denotes the language model part's size and V represents the vision encoder part's size. We use “(*)” to mark correlations with p-values \\(\\leq 0.05\\). It shows that the scaling law still holds for color understanding but it is much weaker." + }, + { + "type": "table", + "bbox": [ + 0.175, + 0.153, + 0.825, + 0.21 + ], + "angle": 0, + "content": "
Color PerceptionColor ReasoningP & RColor Robustness
C'RecogC'ExtractO'RecogC'PropC'CompC'CountO'CountC'I'lluC'MimicC'BlindOverallC'Robust
L+V0.5657 (*)0.5255 (*)0.7107 (*)0.5125 (*)0.6358 (*)0.4316 (*)0.7566 (*)-0.34600.4832 (*)0.24600.7619 (*)0.7386 (*)
L0.5724 (*)0.4937 (*)0.6769 (*)0.4696 (*)0.6118 (*)0.4408 (*)0.7611 (*)-0.3697 (*)0.4559 (*)0.28240.7436 (*)0.7123 (*)
V0.3955 (*)0.28560.5465 (*)0.6242 (*)0.5295 (*)0.20890.3608-0.01270.6024 (*)-0.06790.5271 (*)0.5623 (*)
" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.226, + 0.827, + 0.339 + ], + "angle": 0, + "content": "Color Robustness. In Color Robustness (C'Robust), a higher value represents better robustness towards color alteration. The only 4 models that exceed \\(80\\%\\) are LLaVA-OV-72B, InternVL2.5-26B, InternVL2.5-38B, and InternVL2.5-78B, which utilize relatively larger vision encoders, InternViT-6B, compared with others (mostly only 300-400M). In the meantime, GPT-4o has a really low robustness \\((46.2\\%)\\) to colors, indicating its vulnerable sensitivity to color changes, while Gemini-2 shows promising robustness \\((70.7\\%)\\) towards colors. Moreover, another surprising observation is that even though only the colors are changed and all the original queries are kept, utilizing more reasoning steps can consistently improve robustness for GPT-4o \\((+23.7\\%)\\) and Gemini-2 \\((+2.9\\%)\\)." + }, + { + "type": "title", + "bbox": [ + 0.173, + 0.357, + 0.334, + 0.373 + ], + "angle": 0, + "content": "3.2 Further Findings" + }, + { + "type": "image_caption", + "bbox": [ + 0.18, + 0.388, + 0.818, + 0.432 + ], + "angle": 0, + "content": "Finding 1. The scaling law still holds for color understanding, but is much weaker and mainly depends on the language model parts. The correlation between the performance and the vision encoder's size is not significant due to the limited choices in current VLMs." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.45, + 0.486, + 0.671 + ], + "angle": 0, + "content": "Since color-related tasks often involve abstract reasoning, language comprehension, and contextual interpretation, it is essential to assess not just the vision encoder but also part of the language model, which plays a critical role in processing and understanding such tasks. 
To quantitatively analyze the correlation between VLM performances on color understanding tasks and their sizes, Spearman's rank correlation is calculated between VLM performances and (i) overall model sizes \((\mathbf{L} + \mathbf{V})\), (ii) language model sizes \((\mathbf{L})\), and (iii) vision encoder sizes \((\mathbf{V})\). The correlation values and significance indicators are presented in Table 2; a star is added when the p-value of the correlation is lower than 0.05. It is observed that between the performances and language model" + }, + { + "type": "image", + "bbox": [ + 0.497, + 0.449, + 0.825, + 0.546 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.495, + 0.554, + 0.827, + 0.665 + ], + "angle": 0, + "content": "Figure 4: The heatmaps related to performances and VLM sizes. Deeper color represents higher performance of P&R Overall Accuracy or Robustness. Each line represents a model family with the sizes growing from small to large. This visualization clearly shows the correlation between performances and model sizes; a larger model leads to higher performance." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.671, + 0.827, + 0.769 + ], + "angle": 0, + "content": "sizes, most of the tasks have a correlation greater than 0.5 and a p-value smaller than 0.05, except for Color Illusion and Color Blindness due to their special characteristics. Since the correlations between overall model sizes \((\mathbf{L} + \mathbf{V})\) and both P&R Overall (0.7619) and Robustness (0.7386) are strong and significant, we conclude that color understanding, including Color Perception, Color Reasoning, and Color Robustness, still follows the scaling law of model sizes. Figure 4 presents the correlations between performances and model sizes in each model family. This visualization clearly shows the correlation between performances and model sizes; a larger model leads to higher performance within each model family."
+ }, + { + "type": "text", + "bbox": [ + 0.171, + 0.773, + 0.827, + 0.913 + ], + "angle": 0, + "content": "However, between the performances and vision encoder sizes, most of the tasks either have a correlation lower than 0.5 or a p-value greater than 0.05, which is not sufficient to conclude with the evident positive correlation. Despite these findings, we try to avoid conveying the message that there is no positive correlation between performances and vision encoder sizes. We think it is because of the negligence of the current community to focus on the scaling laws of vision encoders. The vision encoders used in the current mainstream VLMs are constrained in a very small set: (i) most of the VLMs only use one type of vision encoders for the whole family, except for the InternVL2 and InternVL2.5 series; (ii) most of the VLMs use the vision encoder with the size of \\(300 - 400\\mathrm{M}\\). These challenges make it hard to evaluate the scaling laws of vision encoders. Further visualizations are presented in Appendix L.2." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.504, + 0.948 + ], + "angle": 0, + "content": "7" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.089, + 0.825, + 0.133 + ], + "angle": 0, + "content": "Table 4: Adding reasoning steps can improve VLMs' performance on COLORBENCH. The change of accuracy brought by Chain of Thought (CoT) prompting on all tasks for GPT-4o and Gemini-2-flash. The last row presents the average improvement across both models." + }, + { + "type": "table", + "bbox": [ + 0.175, + 0.139, + 0.825, + 0.2 + ], + "angle": 0, + "content": "
Color PerceptionColor ReasoningP & RColor Robustness
C'RecogC'ExtractO'RecogC'PropC'CompC'CountO'CountC'IlluC'MimicC'BlindOverallC'Robust
GPT-4o Δ+1.3+14.6+2.6+6.1+5.0-3.9+3.9-6.4+7.1+8.2+4.5+23.7
Gemini-2 Δ+2.6+4.1+1.3+11.1-2.0+9.8+3.9-3.2+2.8+10.4+4.2+2.9
Average Δ+1.95+9.35+1.95+8.60+1.50+2.95+3.9-4.80+4.95+9.30+4.35+13.30
" + }, + { + "type": "table_caption", + "bbox": [ + 0.18, + 0.211, + 0.819, + 0.268 + ], + "angle": 0, + "content": "Finding 2. The absolute performances of different VLMs are relatively low and lag behind those of humans. Moreover, the gaps between different models (open-source vs. proprietary, small vs. large) are not large, indicating the challenges of COLORBENCH and the negligence of color understanding in existing VLMs." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.278, + 0.486, + 0.485 + ], + "angle": 0, + "content": "As shown in Table 3, we separate all the VLMs into several groups based on their sizes and present the best accuracy and the model name within each group. We can see that even the powerful proprietary models, GPT-4o and Gemini-2, can only reach an overall color perception and reasoning (P & R Overall) accuracy of \\(53.9\\%\\), only \\(+2.0\\%\\) better than the best open-sourced model. Task-level results in Table 1 further reveal that these advanced proprietary models still exhibit substantial performance gaps compared to humans across most tasks. Moreover, the best model from group 1 has the accuracy of \\(41.5\\%\\) (Cambrian-3B), which is only \\(10.4\\%\\) lower than the best open-sourced" + }, + { + "type": "text", + "bbox": [ + 0.495, + 0.278, + 0.825, + 0.35 + ], + "angle": 0, + "content": "Table 3: The best model within each group and its performances (on P&R accuracy and Robustness). The absolute performances of different VLMs on COLORBENCH are relatively low, and the performance gaps between models are not large." + }, + { + "type": "table", + "bbox": [ + 0.498, + 0.356, + 0.825, + 0.466 + ], + "angle": 0, + "content": "
Color P & R OverallColor Robustness
Model SizeModelBestModelBest
<7BCambrian-3B41.5Qwen2.5-VL-3B63.7
7B-8BQwen2.5-VL-7B46.2Qwen2.5-VL-7B74.4
10B-30BInternVL2.5-26B46.8InternVL2.5-26B83.0
30B-50BInternVL2.5-38B50.0InternVL2.5-38B84.6
>70BLLava-OV-72B51.9InternVL2.5-78B86.2
ProprietaryGemini-255.4Gemini-270.7
ProprietaryGemini-2 (CoT)59.6Gemini-2 (CoT)73.6
" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.485, + 0.825, + 0.542 + ], + "angle": 0, + "content": "model. As for the robustness, the powerful proprietary models even show weaker robustness than the 7B model. Considering the lack of existing benchmarks specifically evaluating VLMs' color understanding capabilities, we conclude that this area is long-neglected by the community, and the open-sourced community is still on the same page with the proprietary model providers." + }, + { + "type": "table_caption", + "bbox": [ + 0.18, + 0.556, + 0.819, + 0.599 + ], + "angle": 0, + "content": "Finding 3. Despite the weaknesses of VLMs on color understanding, adding reasoning steps can still improve their performance on COLORBENCH tasks, even for color robustness, which has not been investigated by the community." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.607, + 0.827, + 0.747 + ], + "angle": 0, + "content": "The impact of using CoT prompting is shown in Table 4, in which we can see CoT improves the average P&R Overall accuracy across both models by \\(+4.35\\%\\), indicating that reasoning benefits these color-related tasks. Within the category of Color Perception, the improvements from CoT on Color Recognition and Object Recognition are quite limited as these tasks heavily rely on the vision encoder. Figure 59 and 60 in Appendix M illustrate that adding reasoning steps does not take effect since the initial visual perception and color identification are incorrect in the slow thinking process. However, to our surprise, we find that the Color Extraction task benefits extremely from more reasoning steps, although it seems only related to the vision encoder. After a thorough investigation, we observe that most of the current VLMs are not capable of directly extracting color values, so they need to use more reasoning steps to reach reasonable answers." 
+ }, + { + "type": "text", + "bbox": [ + 0.171, + 0.753, + 0.827, + 0.879 + ], + "angle": 0, + "content": "Within the category of Color Reasoning, CoT benefits most of the tasks. However, in the Color Illusion task, CoT harms model performance. After a manual investigation, we observe that more reasoning steps might cause VLMs to focus more on the misleading environments rather than directly comparing the assigned colors, as shown in Figure 61. Another observation occurs in the Color Blindness task. Unlike other reasoning-related tasks, humans can read a color blindness test image at a glance, without any slow thinking. This fascinating misalignment between humans and VLMs prompted us to investigate further. We find that VLMs recognize these digits in a bottom-up pattern: they need to first infer that the dots in the image can form a digit before they actually recognize those dots as digits." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.884, + 0.827, + 0.913 + ], + "angle": 0, + "content": "In addition, the consistent improvement of CoT on Color Robustness is also a previously unreported phenomenon. In our setting, only the colors of the image are altered, and the questions are strictly the" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.504, + 0.948 + ], + "angle": 0, + "content": "8" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.171, + 0.092, + 0.825, + 0.15 + ], + "angle": 0, + "content": "same as the original. Thus, under this circumstance, color is the only variable, which should relate mainly to the capability of the vision encoder. However, counterintuitively, as shown in our experiments, more reasoning steps make the VLMs more robust to the color changes, probably because reasoning raises the models' confidence in correct answers." + }, + { + "type": "image_caption", + "bbox": [ + 0.18, + 0.163, + 0.819, + 0.207 + ], + "angle": 0, + "content": "Finding 4. 
Color clues are indeed leveraged to some extent by VLMs in most COLORBENCH tasks. However, in the color illusion and mimicry tasks, colors might mislead VLMs toward wrong answers, and converting colorful images to grayscale can improve the accuracy." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.215, + 0.487, + 0.518 + ], + "angle": 0, + "content": "In order to examine whether VLMs really leverage color clues to handle tasks in COLORBENCH, experiments are conducted by converting all the original colorful images in the Color Perception and Reasoning categories into grayscale ones, without changing the questions. In this setting, the accuracies are expected to decrease dramatically, as all our questions are related to colors. For quantitative analysis, we calculate the accuracy changing ratio as \\((Acc_{ori} - Acc_{gray}) / Acc_{ori}\\) for each VLM on each task. This value directly represents how the original accuracy changes under a grayscale transformation. A positive value indicates that the VLM has a higher accuracy on the original colored images, i.e., that it needs color clues to solve the task; higher positive values indicate that the color clues are more important. On the contrary, a negative value means that the VLM reaches a better accuracy after the grayscale transformation, indicating that it does not need" + }, + { + "type": "image", + "bbox": [ + 0.497, + 0.216, + 0.824, + 0.361 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.494, + 0.362, + 0.828, + 0.515 + ], + "angle": 0, + "content": "Figure 5: The percentage of change in accuracy (y-axis) by converting colorful images to grayscale in each COLORBENCH task (x-axis). Each violin plot visualizes the distribution over all VLMs. Higher (lower) percentage indicates that VLMs rely more (less) on color clues for the task. Positive (negative) percentage indicates degradation (improvement) on grayscale images. 
Color clues are indeed leveraged to some extent by VLMs in most tasks, but they might mislead VLMs (illusion & mimicry)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.518, + 0.825, + 0.547 + ], + "angle": 0, + "content": "color clues for the task, and colors might even mislead the VLM's judgment. More negative values indicate greater harm caused by color on the task." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.552, + 0.827, + 0.705 + ], + "angle": 0, + "content": "The accuracy changing ratio distributions across all VLMs and tasks are presented as violin plots in Figure 5. As shown in the figure, for most of the tasks, the ratios of VLMs are above 0, indicating that VLMs indeed leverage color clues to correctly solve the tasks; removing color dramatically harms the original accuracies. However, when it comes to Color Illusion and Color Mimicry, the majority of the changing ratios are below 0, which means that VLMs can achieve better accuracies when all the color information is removed. This phenomenon is reasonable, as the colors in these two tasks mostly serve as misleading factors. Meanwhile, for the Color Counting and Color Blindness tasks, almost half of the accuracies increase and half decrease, indicating that color clues might not be as significant in these tasks; some of the models can find other ways to reach the answer. We also investigate the correlation between accuracy changing ratios and model sizes, but find no significant correlation." + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.724, + 0.564, + 0.74 + ], + "angle": 0, + "content": "4 Conclusion, Limitations, and Future Work" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.755, + 0.829, + 0.908 + ], + "angle": 0, + "content": "In this paper, we introduce COLORBENCH, the first benchmark designed to comprehensively evaluate the color understanding capabilities of VLMs, including Perception, Reasoning, and Robustness. 
After evaluating 32 widely used VLMs on our benchmark, we reveal several previously unreported observations. These observations emphasize the need for more sophisticated model architectures that integrate deeper color reasoning capabilities. To ensure high-quality and reliable annotations, COLORBENCH relies on manual data collection, annotation, and assessment across most domains. While this guarantees consistency, it inevitably limits dataset scale, style diversity, and category coverage. As future work, we aim to develop a trustworthy automated data collection pipeline and expand COLORBENCH to larger-scale, more diverse tasks involving complex interplays of color with texture, shape, and spatial relationships. Furthermore, investigating the impact of different visual encoders and language models could further elucidate the pathways through which VLMs process color information." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.506, + 0.948 + ], + "angle": 0, + "content": "9" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.174, + 0.09, + 0.27, + 0.107 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.113, + 0.826, + 0.152 + ], + "angle": 0, + "content": "[1] Basit Alawode, Iyyakutti Iyappan Ganapathi, Sajid Javed, Naoufel Werghi, Mohammed Bennamoun, and Arif Mahmood. Aquaticclip: A vision-language foundation model for underwater scene analysis. arXiv preprint arXiv:2502.01785, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.161, + 0.826, + 0.213 + ], + "angle": 0, + "content": "[2] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-vl technical report, 2025."
+ }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.222, + 0.825, + 0.249 + ], + "angle": 0, + "content": "[3] Jirayu Burapacheep, Ishan Gaur, Agam Bhatia, and Tristan Thrush. Colorswap: A color and word order dataset for multimodal evaluation. arXiv preprint arXiv:2402.04492, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.258, + 0.826, + 0.297 + ], + "angle": 0, + "content": "[4] Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Jiaqi Wang, Yu Qiao, Dahua Lin, et al. Are we on the right way for evaluating large vision-language models? arXiv preprint arXiv:2403.20330, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.307, + 0.826, + 0.358 + ], + "angle": 0, + "content": "[5] Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, et al. Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24185–24198, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.368, + 0.825, + 0.394 + ], + "angle": 0, + "content": "[6] Kanjar De and Marius Pedersen. Impact of colour on robustness of deep neural networks. In Proceedings of the IEEE/CVF international conference on computer vision, pages 21-30, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.403, + 0.465, + 0.418 + ], + "angle": 0, + "content": "[7] Google DeepMind. Gemini 2.0 flash, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.427, + 0.826, + 0.453 + ], + "angle": 0, + "content": "[8] Alexey Dosovitskiy and Thomas Brox. Inverting visual representations with convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4829-4837, 2016." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.462, + 0.826, + 0.514 + ], + "angle": 0, + "content": "[9] Hao Fei, Yuan Yao, Zhuosheng Zhang, Fuxiao Liu, Ao Zhang, and Tat-Seng Chua. From multimodal llm to human-level ai: Modality, instruction, reasoning, efficiency and beyond. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024): Tutorial Summaries, pages 1-8, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.523, + 0.826, + 0.562 + ], + "angle": 0, + "content": "[10] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, and Rongrong Ji. Mme: A comprehensive evaluation benchmark for multimodal large language models, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.572, + 0.825, + 0.598 + ], + "angle": 0, + "content": "[11] Karl R. Gegenfurtner and Jochem Rieger. Sensory and cognitive contributions of color to the recognition of natural scenes. Current Biology, 10(13):805-808, 2000." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.608, + 0.825, + 0.646 + ], + "angle": 0, + "content": "[12] Akash Ghosh, Arkadeep Acharya, Sriparna Saha, Vinija Jain, and Aman Chadha. Exploring the frontier of vision-language models: A survey of current methodologies and future directions. arXiv preprint arXiv:2404.07214, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.656, + 0.826, + 0.709 + ], + "angle": 0, + "content": "[13] Tianrui Guan, Fuxiao Liu, Xiyang Wu, Ruiqi Xian, Zongxia Li, Xiaoyu Liu, Xijun Wang, Lichang Chen, Furong Huang, Yaser Yacoob, et al. Hallusionbench: an advanced diagnostic suite for entangled language hallucination and visual illusion in large vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14375-14385, 2024." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.717, + 0.825, + 0.743 + ], + "angle": 0, + "content": "[14] Tanmay Gupta, Ryan Marten, Aniruddha Kembhavi, and Derek Hoiem. Grit: General robust image task benchmark. arXiv preprint arXiv:2204.13653, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.753, + 0.825, + 0.779 + ], + "angle": 0, + "content": "[15] Shuai He, Anlong Ming, Li Yaqi, Sun Jinyuan, Zheng ShunTian, and Ma Huadong. Thinking image color aesthetics assessment: Models, datasets and benchmarks. ICCV, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.788, + 0.826, + 0.816 + ], + "angle": 0, + "content": "[16] Nam Hyeon-Woo, Moon Ye-Bin, Wonseok Choi, Lee Hyun, and Tae-Hyun Oh. Vlm's eye examination: Instruct and inspect visual competency of vision language models. arXiv preprint arXiv:2409.14759, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.824, + 0.826, + 0.864 + ], + "angle": 0, + "content": "[17] Md Farhan Ishmam, Ishmam Tashdeed, Talukder Asir Saadat, Md Hamjajul Ashmafee, Abu Raihan Mostofa Kamal, and Md Azam Hossain. Visual robustness benchmark for visual question answering (vqa). arXiv preprint arXiv:2407.03386, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.873, + 0.826, + 0.912 + ], + "angle": 0, + "content": "[18] Ali Jahanian, Shaiyan Keshvari, SVN Vishwanathan, and Jan P Allebach. Colors-messengers of concepts: Visual design mining for learning color semantics. ACM Transactions on Computer-Human Interaction (TOCHI), 24(1):1-39, 2017." + }, + { + "type": "list", + "bbox": [ + 0.174, + 0.113, + 0.826, + 0.912 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "10" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.092, + 0.826, + 0.12 + ], + "angle": 0, + "content": "[19] Johannes Jakubik, Benedikt Blumenstiel, and Clive Tinashe Marimo. 
Ms-clip: Multi-spectral vision language learning for earth observation. In American Geophysical Union Fall Meeting, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.129, + 0.826, + 0.168 + ], + "angle": 0, + "content": "[20] Jayendra Kantipudi, Shiv Ram Dubey, and Soumendu Chakraborty. Color channel perturbation attacks for fooling convolutional neural networks and a defense against such attacks. IEEE Transactions on Artificial Intelligence, 1(2):181-191, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.178, + 0.826, + 0.217 + ], + "angle": 0, + "content": "[21] Tony Lee, Haoqin Tu, Chi Heem Wong, Wenhao Zheng, Yiyang Zhou, Yifan Mai, Josselin Somerville Roberts, Michihiro Yasunaga, Huaxiu Yao, Cihang Xie, et al. Vhelm: A holistic evaluation of vision language models. arXiv preprint arXiv:2410.07112, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.226, + 0.825, + 0.254 + ], + "angle": 0, + "content": "[22] Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. Seed-bench: Benchmarking multimodal llms with generative comprehension. arXiv preprint arXiv:2307.16125, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.263, + 0.826, + 0.302 + ], + "angle": 0, + "content": "[23] Baiqi Li, Zhiqiu Lin, Wenxuan Peng, Jean de Dieu Nyandwi, Daniel Jiang, Zixian Ma, Simran Khanuja, Ranjay Krishna, Graham Neubig, and Deva Ramanan. Naturalbench: Evaluating vision-language models on natural adversarial samples. arXiv preprint arXiv:2410.14669, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.312, + 0.826, + 0.339 + ], + "angle": 0, + "content": "[24] Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, and Chunyuan Li. Llava-onevision: Easy visual task transfer, 2024." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.348, + 0.826, + 0.387 + ], + "angle": 0, + "content": "[25] Jian Li, Weiheng Lu, Hao Fei, Meng Luo, Ming Dai, Min Xia, Yizhang Jin, Zhenye Gan, Ding Qi, Chaoyou Fu, Ying Tai, Wankou Yang, Yabiao Wang, and Chengjie Wang. A survey on benchmarks of multimodal large language models, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.397, + 0.826, + 0.437 + ], + "angle": 0, + "content": "[26] Ming Li, Chenguang Wang, Yijun Liang, Xiyao Wang, Yuhang Zhou, Xiyang Wu, Yuqing Zhang, Ruiyi Zhang, and Tianyi Zhou. Caughtcheating: Is your mllm a good cheating detective? exploring the boundary of visual perception and reasoning. arXiv preprint arXiv:2507.00045, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.446, + 0.826, + 0.473 + ], + "angle": 0, + "content": "[27] Ming Li, Ruiyi Zhang, Jian Chen, Jiuxiang Gu, Yufan Zhou, Franck Dernoncourt, Wanrong Zhu, Tianyi Zhou, and Tong Sun. Towards visual text grounding of multimodal large language model, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.482, + 0.826, + 0.51 + ], + "angle": 0, + "content": "[28] Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.519, + 0.826, + 0.557 + ], + "angle": 0, + "content": "[29] Zongxia Li, Xiyang Wu, Hongyang Du, Huy Nghiem, and Guangyao Shi. Benchmark evaluations, applications, and challenges of large vision language models: A survey. arXiv preprint arXiv:2501.02189, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.568, + 0.826, + 0.62 + ], + "angle": 0, + "content": "[30] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. 
In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740-755. Springer, 2014." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.629, + 0.826, + 0.657 + ], + "angle": 0, + "content": "[31] Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. Llava-next: Improved reasoning, OCR, and world knowledge, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.666, + 0.826, + 0.706 + ], + "angle": 0, + "content": "[32] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? In European conference on computer vision, pages 216-233. Springer, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.715, + 0.826, + 0.742 + ], + "angle": 0, + "content": "[33] Lingjun Mao, Zineng Tang, and Alane Suhr. Evaluating model perception of color illusions in photorealistic scenes. arXiv preprint arXiv:2412.06184, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.751, + 0.826, + 0.778 + ], + "angle": 0, + "content": "[34] Daniela Mapelli and Marlene Behrmann. The role of color in object recognition: Evidence from visual agnosia. Neurocase, 3(4):237-247, 1997." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.787, + 0.826, + 0.815 + ], + "angle": 0, + "content": "[35] OpenAI, Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.824, + 0.826, + 0.863 + ], + "angle": 0, + "content": "[36] Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. 
In Proceedings of the IEEE international conference on computer vision, pages 2641–2649, 2015." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.873, + 0.826, + 0.911 + ], + "angle": 0, + "content": "[37] Ragini Rathore, Zachary Leggon, Laurent Lessard, and Karen B Schloss. Estimating color-concept associations from image statistics. IEEE Transactions on Visualization and Computer Graphics, 26(1): 1226-1235, 2019." + }, + { + "type": "list", + "bbox": [ + 0.174, + 0.092, + 0.826, + 0.911 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.508, + 0.948 + ], + "angle": 0, + "content": "11" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.092, + 0.826, + 0.133 + ], + "angle": 0, + "content": "[38] Tianhe Ren, Shilong Liu, Ailing Zeng, Jing Lin, Kunchang Li, He Cao, Jiayu Chen, Xinyu Huang, Yukang Chen, Feng Yan, et al. Grounded sam: Assembling open-world models for diverse visual tasks. arXiv preprint arXiv:2401.14159, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.139, + 0.826, + 0.168 + ], + "angle": 0, + "content": "[39] Ahnaf Mozib Samin, M Firoz Ahmed, and Md Mushtaq Shahriyar Rafee. Colorfoil: Investigating color blindness in large vision and language models. arXiv preprint arXiv:2405.11685, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.175, + 0.827, + 0.215 + ], + "angle": 0, + "content": "[40] Haz Sameen Shahgir, Khondker Salman Sayeed, Abhik Bhattacharjee, Wasi Uddin Ahmad, Yue Dong, and Rifat Shahriyar. Illusionvqa: A challenging optical illusion dataset for vision language models. arXiv preprint arXiv:2403.15952, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.223, + 0.826, + 0.262 + ], + "angle": 0, + "content": "[41] Min Shi, Fuxiao Liu, Shihao Wang, Shijia Liao, Subhashree Radhakrishnan, De-An Huang, Hongxu Yin, Karan Sapra, Yaser Yacoob, Humphrey Shi, et al. 
Eagle: Exploring the design space for multimodal llms with mixture of encoders. arXiv preprint arXiv:2408.15998, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.271, + 0.827, + 0.31 + ], + "angle": 0, + "content": "[42] Shengbang Tong, Ellis Brown, Penghao Wu, Sanghyun Woo, Manoj Middepogu, Sai Charitha Akula, Jihan Yang, Shusheng Yang, Adithya Iyer, Xichen Pan, et al. Cambrian-1: A fully open, vision-centric exploration of multimodal llms. arXiv preprint arXiv:2406.16860, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.318, + 0.825, + 0.358 + ], + "angle": 0, + "content": "[43] Fei Wang, Xingyu Fu, James Y Huang, Zekun Li, Qin Liu, Xiaogeng Liu, Mingyu Derek Ma, Nan Xu, Wenxuan Zhou, Kai Zhang, et al. Muirbench: A comprehensive benchmark for robust multi-image understanding. arXiv preprint arXiv:2406.09411, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.367, + 0.826, + 0.405 + ], + "angle": 0, + "content": "[44] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.414, + 0.825, + 0.442 + ], + "angle": 0, + "content": "[45] Hanna-Sophia Widhoelzl and Ece Takmaz. Decoding emotions in abstract art: Cognitive plausibility of clip in recognizing color-emotion associations. arXiv preprint arXiv:2405.06319, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.449, + 0.826, + 0.488 + ], + "angle": 0, + "content": "[46] Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. Mm-vet: Evaluating large multimodal models for integrated capabilities. arXiv preprint arXiv:2308.02490, 2023." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.497, + 0.826, + 0.549 + ], + "angle": 0, + "content": "[47] Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9556-9567, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.558, + 0.825, + 0.585 + ], + "angle": 0, + "content": "[48] Jingyi Zhang, Jiaxing Huang, Sheng Jin, and Shijian Lu. Vision-language models for vision tasks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.593, + 0.827, + 0.631 + ], + "angle": 0, + "content": "[49] Jiarui Zhang, Mahyar Khayatkhoei, Prateek Chhikara, and Filip Ilievski. Mllms know where to look: Training-free perception of small visual details with multimodal llms. arXiv preprint arXiv:2502.17422, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.64, + 0.827, + 0.668 + ], + "angle": 0, + "content": "[50] Le Zhang, Rabiul Awal, and Aishwarya Agrawal. Contrasting intra-modal and ranking cross-modal hard negatives to enhance visio-linguistic fine-grained understanding. arXiv preprint arXiv:2306.08832, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.676, + 0.827, + 0.715 + ], + "angle": 0, + "content": "[51] Tiancheng Zhao, Tianqi Zhang, Mingwei Zhu, Haozhan Shen, Kyusong Lee, Xiaopeng Lu, and Jianwei Yin. Vl-checklist: Evaluating pre-trained vision-language models with objects, attributes and relations. arXiv preprint arXiv:2207.00221, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.724, + 0.827, + 0.763 + ], + "angle": 0, + "content": "[52] Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ade20k dataset. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 633-641, 2017." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.771, + 0.827, + 0.81 + ], + "angle": 0, + "content": "[53] Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Semantic understanding of scenes through the ade20k dataset. International Journal of Computer Vision, 127:302-321, 2019." + }, + { + "type": "list", + "bbox": [ + 0.174, + 0.092, + 0.827, + 0.81 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "12" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.173, + 0.09, + 0.44, + 0.109 + ], + "angle": 0, + "content": "Table of Contents for Appendix" + }, + { + "type": "title", + "bbox": [ + 0.174, + 0.125, + 0.826, + 0.139 + ], + "angle": 0, + "content": "A Related Works 14" + }, + { + "type": "text", + "bbox": [ + 0.197, + 0.145, + 0.826, + 0.16 + ], + "angle": 0, + "content": "A.1 VLM Benchmarks 14" + }, + { + "type": "text", + "bbox": [ + 0.198, + 0.166, + 0.825, + 0.181 + ], + "angle": 0, + "content": "A.2 Color Evaluation 14" + }, + { + "type": "list", + "bbox": [ + 0.197, + 0.145, + 0.826, + 0.181 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.174, + 0.2, + 0.826, + 0.214 + ], + "angle": 0, + "content": "B Data Sources 14" + }, + { + "type": "title", + "bbox": [ + 0.174, + 0.233, + 0.826, + 0.247 + ], + "angle": 0, + "content": "C Detailed Generation Process for Robustness 15" + }, + { + "type": "title", + "bbox": [ + 0.174, + 0.267, + 0.826, + 0.281 + ], + "angle": 0, + "content": "D COLORBENCH Categories and Questions 15" + }, + { + "type": "title", + "bbox": [ + 0.174, + 0.3, + 0.826, + 0.314 + ], + "angle": 0, + "content": "E Implementation Details 19" + }, + { + "type": "title", + "bbox": [ + 0.174, + 0.333, + 0.826, + 0.347 + ], + "angle": 0, + "content": "F 
Evaluation Prompts 19" + }, + { + "type": "title", + "bbox": [ + 0.174, + 0.366, + 0.826, + 0.38 + ], + "angle": 0, + "content": "G Human Evaluation 19" + }, + { + "type": "title", + "bbox": [ + 0.174, + 0.399, + 0.826, + 0.414 + ], + "angle": 0, + "content": "H Reasoning Models with Thinking Process 19" + }, + { + "type": "title", + "bbox": [ + 0.174, + 0.433, + 0.826, + 0.447 + ], + "angle": 0, + "content": "I Qualitative Analysis of Failure Cases 20" + }, + { + "type": "title", + "bbox": [ + 0.174, + 0.466, + 0.826, + 0.48 + ], + "angle": 0, + "content": "J Effect of Different Modalities 24" + }, + { + "type": "title", + "bbox": [ + 0.174, + 0.499, + 0.826, + 0.514 + ], + "angle": 0, + "content": "K Fine-tuning Experiments on ColorBench 24" + }, + { + "type": "title", + "bbox": [ + 0.174, + 0.533, + 0.826, + 0.546 + ], + "angle": 0, + "content": "L More Visualizations 25" + }, + { + "type": "text", + "bbox": [ + 0.198, + 0.553, + 0.826, + 0.568 + ], + "angle": 0, + "content": "L.1 VLM Size & Model Performance for Each Task 25" + }, + { + "type": "text", + "bbox": [ + 0.198, + 0.574, + 0.826, + 0.588 + ], + "angle": 0, + "content": "L.2 Vision Size & Model Performance for Each Task 27" + }, + { + "type": "text", + "bbox": [ + 0.198, + 0.594, + 0.826, + 0.609 + ], + "angle": 0, + "content": "L.3 Performance for Each Model Family on Each Task 28" + }, + { + "type": "list", + "bbox": [ + 0.198, + 0.553, + 0.826, + 0.609 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.174, + 0.628, + 0.826, + 0.643 + ], + "angle": 0, + "content": "M Samples Cases 30" + }, + { + "type": "text", + "bbox": [ + 0.198, + 0.648, + 0.826, + 0.663 + ], + "angle": 0, + "content": "M.1 Effect of CoT 30" + }, + { + "type": "text", + "bbox": [ + 0.198, + 0.669, + 0.826, + 0.684 + ], + "angle": 0, + "content": "M.2 Effect of Grayscale 35" + }, + { + "type": "text", + "bbox": [ + 0.198, + 0.689, + 0.826, + 0.704 + ], + "angle": 0, + "content": "M.3 Failure with LLM 
and Vision 36" + }, + { + "type": "text", + "bbox": [ + 0.198, + 0.711, + 0.826, + 0.726 + ], + "angle": 0, + "content": "M.4 Easy Cases 37" + }, + { + "type": "text", + "bbox": [ + 0.198, + 0.731, + 0.826, + 0.745 + ], + "angle": 0, + "content": "M.5 Difficult Cases 39" + }, + { + "type": "list", + "bbox": [ + 0.198, + 0.648, + 0.826, + 0.745 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.508, + 0.948 + ], + "angle": 0, + "content": "13" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.176, + 0.09, + 0.335, + 0.106 + ], + "angle": 0, + "content": "A Related Works" + }, + { + "type": "title", + "bbox": [ + 0.176, + 0.122, + 0.347, + 0.137 + ], + "angle": 0, + "content": "A.1 VLM Benchmarks" + }, + { + "type": "text", + "bbox": [ + 0.175, + 0.149, + 0.827, + 0.384 + ], + "angle": 0, + "content": "With the rapid advancements in Vision-Language Models (VLMs) [9], numerous benchmarks have emerged to systematically evaluate VLM capabilities across diverse dimensions [29]. These benchmarks generally fall into two categories: text-centric and vision-centric evaluations, each designed to assess distinct multimodal competencies. Text-centric benchmarks primarily measure commonsense knowledge, reasoning, and complex problem-solving capabilities, exemplified by tasks in MMMU [47] and NaturalBench [23]. Conversely, vision-centric benchmarks focus on visual perception and reasoning (MMBench [32] and MME [10]), and robustness to visual perturbations (Grit [14] and Visual Robustness [17]). Furthermore, several benchmarks have extended their scope to evaluate specialized visual tasks, such as spatial relationship comprehension (SEED-Bench [22] and MM-Vet [46]), chart and map understanding (MMSTAR [4] and MuirBench [43]), visual grounding (Flickr30k [36] and TRIG [27]) and the detection and understanding of visual hallucinations (POPE [28] and HallusionBench [13]). 
However, despite the extensive scope covered by existing VLM benchmarks, none currently provide an integrated evaluation that simultaneously assesses visual perception, reasoning, and robustness within a unified framework. Moreover, although certain benchmarks [32, 10] have incorporated color-related questions, these have typically addressed basic color perception and recognition, neglecting deeper assessments of reasoning and robustness associated with color understanding." + }, + { + "type": "title", + "bbox": [ + 0.176, + 0.403, + 0.336, + 0.417 + ], + "angle": 0, + "content": "A.2 Color Evaluation" + }, + { + "type": "text", + "bbox": [ + 0.175, + 0.43, + 0.827, + 0.636 + ], + "angle": 0, + "content": "Color understanding is increasingly recognized as a crucial aspect of Vision-Language Models' ability to perceive and interpret visual content. Limited studies have explored how color information influences model performance on specific tasks. Some studies [51, 50] explore the understanding of color by replacing color-related words in textual inputs to evaluate the models' ability to handle color-specific information. More recent research [16, 21] focuses on assessing fine-grained color discrimination by asking models to distinguish subtle color differences in visual inputs. Samin et al. [39] introduced color-related foils to test VLMs' capacity to cognize basic colors like red, white, and green, particularly in contexts requiring attention to subtle cues. Additionally, Burapacheep et al. [3] developed a benchmark dataset to evaluate and enhance compositional color comprehension in VLMs, emphasizing tasks where understanding minimal color relationships is essential. IllusionVQA [40] evaluates model perception of color illusions in photorealistic scenes. While these works have addressed isolated aspects of color understanding, none have provided a holistic assessment framework. 
In contrast to these previous works, our study establishes the first comprehensive and specialized benchmark for evaluating the color-related abilities of VLMs, offering a quantitative, automated approach to further this area of research." + }, + { + "type": "title", + "bbox": [ + 0.176, + 0.66, + 0.319, + 0.675 + ], + "angle": 0, + "content": "B Data Sources" + }, + { + "type": "text", + "bbox": [ + 0.176, + 0.692, + 0.822, + 0.72 + ], + "angle": 0, + "content": "We construct COLORBENCH from multiple sources, including website sources, publicly available benchmarks, and generated images. The detailed sources are included in Table 5." + }, + { + "type": "table_caption", + "bbox": [ + 0.376, + 0.747, + 0.621, + 0.759 + ], + "angle": 0, + "content": "Table 5: Data sources for each task." + }, + { + "type": "table", + "bbox": [ + 0.306, + 0.761, + 0.689, + 0.91 + ], + "angle": 0, + "content": "
CategoryData Source
Color RecognitionWebsite, ICAA17K [15]
Object RecognitionWebsite, ICAA17K [15]
Color ExtractionSynthetic Data
Color ProportionWebsite, Synthetic Data
Color ComparisonWebsite
Color CountingWebsite, Synthetic Data
Object CountingWebsite, ADE20K [52, 53], COCO2017 [30]
Color MimicryWebsite, IllusionVQA [40], RCID [33]
Color BlindnessSynthetic Data
Color RobustnessCV-Bench [42]
" + }, + { + "type": "page_number", + "bbox": [ + 0.493, + 0.937, + 0.508, + 0.947 + ], + "angle": 0, + "content": "14" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.393, + 0.099, + 0.604, + 0.112 + ], + "angle": 0, + "content": "Table 6: Recoloring strategies." + }, + { + "type": "table", + "bbox": [ + 0.207, + 0.112, + 0.787, + 0.214 + ], + "angle": 0, + "content": "
StrategyEditing RegionPurpose
Entire ImageWhole imageAssesses the model's robustness to global color shifts
Target SegmentSegment containing the object referenced in the questionEvaluates the model's sensitivity to task-relevant color changes
Largest SegmentThe largest segment that is irrelevant to the questionTests whether changes in dominant but unrelated regions affect model predictions
" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.223, + 0.577, + 0.239 + ], + "angle": 0, + "content": "C Detailed Generation Process for Robustness" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.254, + 0.827, + 0.448 + ], + "angle": 0, + "content": "For the Color Robustness, we evaluate the consistency of VLMs when faced with instances that differ only in the color of the visual input. To systematically assess this effect, we define 3 recoloring strategies that determine which part of the image is altered: i) Target Segment, ii) Largest Segment, and iii) Entire Image. As mentioned in Table 6, Target Segment strategy recolors only the segment containing the object referenced in the question. This strategy ensures that the modification directly affects the model's perception of task-relevant content. Largest Segment strategy alters the color of the largest segment that is irrelevant to the question, testing whether models are distracted by dominant but unrelated visual changes. In contrast, Entire Image strategy applies a global color shift to evaluate the model's sensitivity to overall color variations. As summarized in Table 6, the first two strategies introduce localized modifications, while the third assesses robustness to broader image-wide color changes. Importantly, only color attributes are altered without modifying object shapes or contextual elements, which preserves the overall realism of the image. By incorporating both task-relevant and irrelevant edits, our benchmark provides a comprehensive evaluation of VLMs' ability to handle color perturbations across different contexts." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.454, + 0.828, + 0.689 + ], + "angle": 0, + "content": "While generating color variations, we derive seed images from CV-Bench [42], a publicly available benchmark. For each seed image, as shown in Figure 3, we first employ a Grounded Segmentation Model (GAM) [38] to extract segments and their corresponding labels. 
We then apply the predefined recoloring strategies to determine the editing region and then modify the color of that region. Since Saturation and Value control the purity and brightness of a color while Hue alone determines its chromatic identity, we adjust only the Hue value in the HSV color space. Specifically, we shift the Hue by \\(90^{\\circ}\\), \\(180^{\\circ}\\), and \\(270^{\\circ}\\). These three values ensure that the color manipulations cover significant perceptual differences across the color spectrum. This process produces nine variations per seed image, covering different strategies and degrees of color change to enable a comprehensive robustness assessment. To ensure interpretability, human experts filter out unnatural or negligible modifications, resulting in a final selection of 493 seed images for robustness evaluation. Additionally, we select questions that are color-invariant, which means answers remain valid regardless of whether the recoloring appears fully natural. This design choice isolates color variation as the sole variable of interest and prevents confounding effects from semantic or contextual changes. Through these steps, we evaluate whether VLMs rely excessively on color information and whether they maintain consistency in their predictions despite substantial color shifts." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.709, + 0.559, + 0.726 + ], + "angle": 0, + "content": "D COLORBENCH Categories and Questions" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.739, + 0.826, + 0.782 + ], + "angle": 0, + "content": "Table 7 provides a detailed description of each task, alongside representative figures and sample questions that effectively demonstrate the specific capabilities being tested. Cases are provided for each task in Figures 6 to 16."
+ }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "15" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.303, + 0.104, + 0.695, + 0.119 + ], + "angle": 0, + "content": "Table 7: Task and question definition in COLORBENCH." + }, + { + "type": "table", + "bbox": [ + 0.207, + 0.127, + 0.789, + 0.558 + ], + "angle": 0, + "content": "
Task#Sample CaseDescriptionSample Questions
PerceptionColor Recognition76Figure 6Ask for the color of a specific object or determine if a particular color is present in the image.What is the color of object in this image? What color does not exist in this image?
Color Extraction96Figure 7Extract the color code value (e.g., RGB, HSV, or HEX) from a single color in the image.What is the HSV value of the given color in the image? What is the RGB value of the given color in the image?
Object Recognition77Figure 8Identify objects in the image that match a specified color noted in the text input.What object has a color of pink in this image?
ReasoningColor Proportion80Figure 9Estimate the relative area occupied by a specified color in the image.What is the dominant color in this image? What is the closest to the proportion of the red color in the image?
Color Comparison101Figure 10Distinguish among multiple colors present in the image to assess overall tones and shades.Which photo is warmer in overall color? Which object has a darker color in the image?
Color Counting102Figure 11Identify the number of unique colors present in the image.How many different colors are in this image?
Object Counting103Figure 12Count the number of objects of a specified color present in the image.How many objects with green color are in this image?
Color Illusion93Figure 13Assess and compare colors in potential illusionary settings within the image.Do two objects have the same color?
Color Mimicry70Figure 14Detect objects that are camouflaged within their surroundings, where color is a key deceptive element.How many animals are in this image?
Color Blindness157Figure 15Recognize numbers or text that are embedded in color patterns, often used in tests for color vision.What is the number in the center of the image?
" + }, + { + "type": "title", + "bbox": [ + 0.212, + 0.603, + 0.333, + 0.617 + ], + "angle": 0, + "content": "Color Recognition" + }, + { + "type": "image", + "bbox": [ + 0.209, + 0.625, + 0.317, + 0.707 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.321, + 0.631, + 0.474, + 0.64 + ], + "angle": 0, + "content": "What is the color of the banana in this" + }, + { + "type": "text", + "bbox": [ + 0.321, + 0.642, + 0.352, + 0.651 + ], + "angle": 0, + "content": "image?" + }, + { + "type": "text", + "bbox": [ + 0.321, + 0.655, + 0.349, + 0.663 + ], + "angle": 0, + "content": "A: Red" + }, + { + "type": "text", + "bbox": [ + 0.321, + 0.666, + 0.357, + 0.674 + ], + "angle": 0, + "content": "C:Yellow" + }, + { + "type": "text", + "bbox": [ + 0.321, + 0.678, + 0.402, + 0.686 + ], + "angle": 0, + "content": "E: None of the above" + }, + { + "type": "text", + "bbox": [ + 0.321, + 0.69, + 0.351, + 0.697 + ], + "angle": 0, + "content": "Ans: E" + }, + { + "type": "text", + "bbox": [ + 0.41, + 0.655, + 0.418, + 0.662 + ], + "angle": 0, + "content": "en" + }, + { + "type": "text", + "bbox": [ + 0.41, + 0.666, + 0.418, + 0.674 + ], + "angle": 0, + "content": "k" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.678, + 0.41, + 0.686 + ], + "angle": 0, + "content": "" + }, + { + "type": "image", + "bbox": [ + 0.493, + 0.641, + 0.613, + 0.699 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.617, + 0.631, + 0.779, + 0.64 + ], + "angle": 0, + "content": "What color does not exist in this image?" 
+ }, + { + "type": "text", + "bbox": [ + 0.617, + 0.655, + 0.652, + 0.663 + ], + "angle": 0, + "content": "A:Green" + }, + { + "type": "text", + "bbox": [ + 0.617, + 0.666, + 0.646, + 0.674 + ], + "angle": 0, + "content": "C:Red" + }, + { + "type": "text", + "bbox": [ + 0.617, + 0.689, + 0.647, + 0.697 + ], + "angle": 0, + "content": "Ans: C" + }, + { + "type": "text", + "bbox": [ + 0.676, + 0.655, + 0.711, + 0.663 + ], + "angle": 0, + "content": "B:White" + }, + { + "type": "text", + "bbox": [ + 0.677, + 0.666, + 0.71, + 0.674 + ], + "angle": 0, + "content": "D: Black" + }, + { + "type": "image_caption", + "bbox": [ + 0.346, + 0.716, + 0.652, + 0.731 + ], + "angle": 0, + "content": "Figure 6: Cases for Color Recognition Task." + }, + { + "type": "title", + "bbox": [ + 0.212, + 0.776, + 0.325, + 0.789 + ], + "angle": 0, + "content": "Color Extraction" + }, + { + "type": "image", + "bbox": [ + 0.214, + 0.797, + 0.311, + 0.872 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.805, + 0.483, + 0.823 + ], + "angle": 0, + "content": "What is the HSV value of the given color in the image?" 
+ }, + { + "type": "text", + "bbox": [ + 0.319, + 0.823, + 0.38, + 0.83 + ], + "angle": 0, + "content": "A: [100, 51, 81]" + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.83, + 0.388, + 0.838 + ], + "angle": 0, + "content": "C: [331, 100, 100]" + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.841, + 0.349, + 0.849 + ], + "angle": 0, + "content": "Ans: D" + }, + { + "type": "text", + "bbox": [ + 0.41, + 0.824, + 0.474, + 0.831 + ], + "angle": 0, + "content": "B: [329, 98, 100]" + }, + { + "type": "text", + "bbox": [ + 0.41, + 0.832, + 0.479, + 0.839 + ], + "angle": 0, + "content": "D:[329,100,100]" + }, + { + "type": "image", + "bbox": [ + 0.503, + 0.797, + 0.6, + 0.872 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.616, + 0.805, + 0.769, + 0.823 + ], + "angle": 0, + "content": "Q: What is the HSV value of the given color in the image?" + }, + { + "type": "text", + "bbox": [ + 0.617, + 0.823, + 0.677, + 0.83 + ], + "angle": 0, + "content": "A: [47, 62, 100]" + }, + { + "type": "text", + "bbox": [ + 0.617, + 0.83, + 0.677, + 0.839 + ], + "angle": 0, + "content": "C: [45, 64, 100]" + }, + { + "type": "text", + "bbox": [ + 0.708, + 0.824, + 0.766, + 0.831 + ], + "angle": 0, + "content": "B: [107, 16, 22]" + }, + { + "type": "text", + "bbox": [ + 0.708, + 0.832, + 0.767, + 0.839 + ], + "angle": 0, + "content": "D: [45, 62, 100]" + }, + { + "type": "text", + "bbox": [ + 0.617, + 0.841, + 0.647, + 0.849 + ], + "angle": 0, + "content": "Ans: D" + }, + { + "type": "image_caption", + "bbox": [ + 0.35, + 0.879, + 0.647, + 0.894 + ], + "angle": 0, + "content": "Figure 7: Cases for Color Extraction Task." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.51, + 0.948 + ], + "angle": 0, + "content": "16" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.212, + 0.098, + 0.337, + 0.112 + ], + "angle": 0, + "content": "Object Recognition" + }, + { + "type": "image", + "bbox": [ + 0.207, + 0.131, + 0.314, + 0.181 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.316, + 0.131, + 0.452, + 0.149 + ], + "angle": 0, + "content": "Which state does not have a color of pink in this image?" + }, + { + "type": "text", + "bbox": [ + 0.318, + 0.15, + 0.362, + 0.157 + ], + "angle": 0, + "content": "A: Montana" + }, + { + "type": "text", + "bbox": [ + 0.318, + 0.158, + 0.363, + 0.165 + ], + "angle": 0, + "content": "C: Michigan" + }, + { + "type": "text", + "bbox": [ + 0.318, + 0.166, + 0.346, + 0.174 + ], + "angle": 0, + "content": "Ans: D" + }, + { + "type": "text", + "bbox": [ + 0.375, + 0.158, + 0.424, + 0.165 + ], + "angle": 0, + "content": "D:New York" + }, + { + "type": "text", + "bbox": [ + 0.375, + 0.166, + 0.424, + 0.173 + ], + "angle": 0, + "content": "" + }, + { + "type": "image", + "bbox": [ + 0.483, + 0.114, + 0.604, + 0.193 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.607, + 0.131, + 0.767, + 0.149 + ], + "angle": 0, + "content": "Which object has a color of black in this image?" + }, + { + "type": "text", + "bbox": [ + 0.607, + 0.15, + 0.708, + 0.157 + ], + "angle": 0, + "content": "A: Background B: Banana" + }, + { + "type": "text", + "bbox": [ + 0.607, + 0.158, + 0.707, + 0.166 + ], + "angle": 0, + "content": "C:Apple D:Orange" + }, + { + "type": "text", + "bbox": [ + 0.607, + 0.166, + 0.636, + 0.174 + ], + "angle": 0, + "content": "Ans: C" + }, + { + "type": "image_caption", + "bbox": [ + 0.342, + 0.199, + 0.655, + 0.214 + ], + "angle": 0, + "content": "Figure 8: Cases for Object Recognition Task." 
+ }, + { + "type": "title", + "bbox": [ + 0.212, + 0.234, + 0.326, + 0.248 + ], + "angle": 0, + "content": "Color Proportion" + }, + { + "type": "image", + "bbox": [ + 0.207, + 0.261, + 0.318, + 0.328 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.273, + 0.443, + 0.281 + ], + "angle": 0, + "content": "Which is the dominant color in" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.282, + 0.377, + 0.289 + ], + "angle": 0, + "content": "this painting?" + }, + { + "type": "text", + "bbox": [ + 0.38, + 0.289, + 0.416, + 0.296 + ], + "angle": 0, + "content": "B:Yellow" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.297, + 0.355, + 0.304 + ], + "angle": 0, + "content": "C:Green" + }, + { + "type": "text", + "bbox": [ + 0.38, + 0.297, + 0.42, + 0.304 + ], + "angle": 0, + "content": "D:Orange" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.304, + 0.35, + 0.311 + ], + "angle": 0, + "content": "Ans: A" + }, + { + "type": "image", + "bbox": [ + 0.49, + 0.246, + 0.611, + 0.34 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.613, + 0.273, + 0.771, + 0.281 + ], + "angle": 0, + "content": "What is closest to the proportion of the" + }, + { + "type": "text", + "bbox": [ + 0.614, + 0.282, + 0.708, + 0.289 + ], + "angle": 0, + "content": "color red in the image?" + }, + { + "type": "text", + "bbox": [ + 0.615, + 0.29, + 0.702, + 0.297 + ], + "angle": 0, + "content": "A:10% B:20%" + }, + { + "type": "text", + "bbox": [ + 0.615, + 0.297, + 0.704, + 0.305 + ], + "angle": 0, + "content": "C:30% D:40%" + }, + { + "type": "text", + "bbox": [ + 0.615, + 0.305, + 0.7, + 0.312 + ], + "angle": 0, + "content": "Ans: C" + }, + { + "type": "image_caption", + "bbox": [ + 0.349, + 0.347, + 0.648, + 0.362 + ], + "angle": 0, + "content": "Figure 9: Cases for Color Proportion Task." 
+ }, + { + "type": "title", + "bbox": [ + 0.212, + 0.383, + 0.336, + 0.397 + ], + "angle": 0, + "content": "Color Comparison" + }, + { + "type": "image", + "bbox": [ + 0.208, + 0.405, + 0.317, + 0.462 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.409, + 0.481, + 0.418 + ], + "angle": 0, + "content": "Which photo is warmer in overall color?" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.434, + 0.376, + 0.442 + ], + "angle": 0, + "content": "A: The left one" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.445, + 0.382, + 0.454 + ], + "angle": 0, + "content": "B: The right one" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.456, + 0.35, + 0.464 + ], + "angle": 0, + "content": "Ans: B" + }, + { + "type": "image", + "bbox": [ + 0.49, + 0.409, + 0.611, + 0.464 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.616, + 0.41, + 0.773, + 0.418 + ], + "angle": 0, + "content": "Which dog has the darkest color in the" + }, + { + "type": "text", + "bbox": [ + 0.616, + 0.421, + 0.648, + 0.43 + ], + "angle": 0, + "content": "image?" + }, + { + "type": "text", + "bbox": [ + 0.617, + 0.433, + 0.646, + 0.441 + ], + "angle": 0, + "content": "A: No.1" + }, + { + "type": "text", + "bbox": [ + 0.682, + 0.434, + 0.712, + 0.441 + ], + "angle": 0, + "content": "B: No.4" + }, + { + "type": "text", + "bbox": [ + 0.617, + 0.444, + 0.648, + 0.452 + ], + "angle": 0, + "content": "C.No.5" + }, + { + "type": "text", + "bbox": [ + 0.682, + 0.444, + 0.712, + 0.452 + ], + "angle": 0, + "content": "D.No.3" + }, + { + "type": "text", + "bbox": [ + 0.617, + 0.456, + 0.646, + 0.464 + ], + "angle": 0, + "content": "Ans: A" + }, + { + "type": "image_caption", + "bbox": [ + 0.34, + 0.479, + 0.657, + 0.494 + ], + "angle": 0, + "content": "Figure 10: Cases for Color Comparison Task." 
+ }, + { + "type": "title", + "bbox": [ + 0.21, + 0.514, + 0.316, + 0.528 + ], + "angle": 0, + "content": "Color Counting" + }, + { + "type": "image", + "bbox": [ + 0.208, + 0.532, + 0.316, + 0.615 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.318, + 0.544, + 0.484, + 0.553 + ], + "angle": 0, + "content": "How many different colors of flowers are" + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.554, + 0.378, + 0.564 + ], + "angle": 0, + "content": "in this image?" + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.567, + 0.335, + 0.575 + ], + "angle": 0, + "content": "A:1" + }, + { + "type": "text", + "bbox": [ + 0.387, + 0.568, + 0.404, + 0.575 + ], + "angle": 0, + "content": "B:2" + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.579, + 0.336, + 0.586 + ], + "angle": 0, + "content": "C:3" + }, + { + "type": "text", + "bbox": [ + 0.387, + 0.58, + 0.404, + 0.587 + ], + "angle": 0, + "content": "D:4" + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.591, + 0.349, + 0.598 + ], + "angle": 0, + "content": "Ans: C" + }, + { + "type": "image", + "bbox": [ + 0.488, + 0.543, + 0.613, + 0.601 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.616, + 0.544, + 0.775, + 0.553 + ], + "angle": 0, + "content": "How many colors are there in this flag?" 
+ }, + { + "type": "text", + "bbox": [ + 0.617, + 0.567, + 0.635, + 0.575 + ], + "angle": 0, + "content": "A:3" + }, + { + "type": "text", + "bbox": [ + 0.665, + 0.568, + 0.681, + 0.575 + ], + "angle": 0, + "content": "B:4" + }, + { + "type": "text", + "bbox": [ + 0.617, + 0.579, + 0.635, + 0.586 + ], + "angle": 0, + "content": "C:5" + }, + { + "type": "text", + "bbox": [ + 0.665, + 0.58, + 0.682, + 0.587 + ], + "angle": 0, + "content": "D:6" + }, + { + "type": "text", + "bbox": [ + 0.617, + 0.59, + 0.646, + 0.598 + ], + "angle": 0, + "content": "Ans: D" + }, + { + "type": "image_caption", + "bbox": [ + 0.351, + 0.621, + 0.646, + 0.637 + ], + "angle": 0, + "content": "Figure 11: Cases for Color Counting Task." + }, + { + "type": "title", + "bbox": [ + 0.21, + 0.657, + 0.322, + 0.671 + ], + "angle": 0, + "content": "Object Counting" + }, + { + "type": "image", + "bbox": [ + 0.207, + 0.687, + 0.314, + 0.736 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.677, + 0.487, + 0.686 + ], + "angle": 0, + "content": "How many striped animals can be seen in" + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.688, + 0.368, + 0.697 + ], + "angle": 0, + "content": "this image?" 
+ }, + { + "type": "text", + "bbox": [ + 0.319, + 0.699, + 0.342, + 0.707 + ], + "angle": 0, + "content": "A:12" + }, + { + "type": "text", + "bbox": [ + 0.387, + 0.7, + 0.407, + 0.707 + ], + "angle": 0, + "content": "B:11" + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.71, + 0.342, + 0.717 + ], + "angle": 0, + "content": "C:13" + }, + { + "type": "text", + "bbox": [ + 0.387, + 0.71, + 0.405, + 0.717 + ], + "angle": 0, + "content": "D:0" + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.723, + 0.342, + 0.73 + ], + "angle": 0, + "content": "F:10" + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.733, + 0.349, + 0.742 + ], + "angle": 0, + "content": "Ans:C" + }, + { + "type": "image", + "bbox": [ + 0.489, + 0.687, + 0.611, + 0.74 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.616, + 0.677, + 0.783, + 0.686 + ], + "angle": 0, + "content": "How many green bananas can be seen in" + }, + { + "type": "text", + "bbox": [ + 0.617, + 0.688, + 0.665, + 0.697 + ], + "angle": 0, + "content": "this image?" + }, + { + "type": "text", + "bbox": [ + 0.617, + 0.7, + 0.636, + 0.707 + ], + "angle": 0, + "content": "A:6" + }, + { + "type": "text", + "bbox": [ + 0.676, + 0.7, + 0.694, + 0.707 + ], + "angle": 0, + "content": "B:7" + }, + { + "type": "text", + "bbox": [ + 0.617, + 0.71, + 0.635, + 0.717 + ], + "angle": 0, + "content": "C. 5" + }, + { + "type": "text", + "bbox": [ + 0.676, + 0.711, + 0.694, + 0.718 + ], + "angle": 0, + "content": "D. 4" + }, + { + "type": "text", + "bbox": [ + 0.617, + 0.723, + 0.635, + 0.73 + ], + "angle": 0, + "content": "E. 0" + }, + { + "type": "text", + "bbox": [ + 0.617, + 0.733, + 0.646, + 0.742 + ], + "angle": 0, + "content": "Ans: A" + }, + { + "type": "image_caption", + "bbox": [ + 0.347, + 0.757, + 0.65, + 0.772 + ], + "angle": 0, + "content": "Figure 12: Cases for Object Counting Task." 
+ }, + { + "type": "title", + "bbox": [ + 0.212, + 0.792, + 0.301, + 0.804 + ], + "angle": 0, + "content": "Color Illusion" + }, + { + "type": "image", + "bbox": [ + 0.208, + 0.809, + 0.311, + 0.873 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.316, + 0.805, + 0.467, + 0.822 + ], + "angle": 0, + "content": "Do the blocks labeled a and b have the same color/shade?" + }, + { + "type": "text", + "bbox": [ + 0.318, + 0.823, + 0.381, + 0.829 + ], + "angle": 0, + "content": "A: No, a is darker." + }, + { + "type": "text", + "bbox": [ + 0.318, + 0.83, + 0.445, + 0.838 + ], + "angle": 0, + "content": "B: Hard to tell without more context" + }, + { + "type": "text", + "bbox": [ + 0.318, + 0.839, + 0.472, + 0.847 + ], + "angle": 0, + "content": "C: Yes, one appears darker due to how our" + }, + { + "type": "text", + "bbox": [ + 0.318, + 0.848, + 0.402, + 0.856 + ], + "angle": 0, + "content": "eyes perceive shadows" + }, + { + "type": "text", + "bbox": [ + 0.318, + 0.857, + 0.381, + 0.865 + ], + "angle": 0, + "content": "D: No, b is darker" + }, + { + "type": "text", + "bbox": [ + 0.318, + 0.866, + 0.345, + 0.873 + ], + "angle": 0, + "content": "Ans: D" + }, + { + "type": "image", + "bbox": [ + 0.483, + 0.817, + 0.599, + 0.868 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.603, + 0.801, + 0.72, + 0.809 + ], + "angle": 0, + "content": "What colors are the two pills?" + }, + { + "type": "text", + "bbox": [ + 0.603, + 0.81, + 0.782, + 0.817 + ], + "angle": 0, + "content": "A:Cannot tell from this image, the colors seem to" + }, + { + "type": "text", + "bbox": [ + 0.603, + 0.818, + 0.65, + 0.825 + ], + "angle": 0, + "content": "be shifting?!" 
+ }, + { + "type": "text", + "bbox": [ + 0.603, + 0.827, + 0.754, + 0.835 + ], + "angle": 0, + "content": "B: Both are the exact same shade of gray" + }, + { + "type": "text", + "bbox": [ + 0.603, + 0.836, + 0.779, + 0.843 + ], + "angle": 0, + "content": "C: The left one is bluish-gray and the right one is" + }, + { + "type": "text", + "bbox": [ + 0.603, + 0.844, + 0.65, + 0.852 + ], + "angle": 0, + "content": "reddish-grey" + }, + { + "type": "text", + "bbox": [ + 0.603, + 0.853, + 0.784, + 0.86 + ], + "angle": 0, + "content": "D: The left one is reddish-gray and the right one is" + }, + { + "type": "text", + "bbox": [ + 0.603, + 0.861, + 0.645, + 0.868 + ], + "angle": 0, + "content": "bluish-grey" + }, + { + "type": "text", + "bbox": [ + 0.603, + 0.87, + 0.631, + 0.877 + ], + "angle": 0, + "content": "Ans:B" + }, + { + "type": "image_caption", + "bbox": [ + 0.357, + 0.892, + 0.64, + 0.906 + ], + "angle": 0, + "content": "Figure 13: Cases for Color Illusion Task." + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.508, + 0.948 + ], + "angle": 0, + "content": "17" + } + ], + [ + { + "type": "image_caption", + "bbox": [ + 0.212, + 0.106, + 0.315, + 0.12 + ], + "angle": 0, + "content": "Color Mimicry" + }, + { + "type": "image", + "bbox": [ + 0.207, + 0.128, + 0.322, + 0.188 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.323, + 0.14, + 0.474, + 0.149 + ], + "angle": 0, + "content": "How many seahorses in this image?" 
+ }, + { + "type": "image_footnote", + "bbox": [ + 0.323, + 0.149, + 0.404, + 0.157 + ], + "angle": 0, + "content": "A:0 B:1" + }, + { + "type": "image_footnote", + "bbox": [ + 0.323, + 0.157, + 0.404, + 0.166 + ], + "angle": 0, + "content": "C:3 D:5" + }, + { + "type": "list", + "bbox": [ + 0.323, + 0.149, + 0.404, + 0.166 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.323, + 0.168, + 0.353, + 0.175 + ], + "angle": 0, + "content": "Ans: B" + }, + { + "type": "image", + "bbox": [ + 0.5, + 0.116, + 0.625, + 0.188 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.627, + 0.14, + 0.761, + 0.149 + ], + "angle": 0, + "content": "How many leaves in this image?" + }, + { + "type": "image_footnote", + "bbox": [ + 0.627, + 0.149, + 0.708, + 0.157 + ], + "angle": 0, + "content": "A:1 B:2" + }, + { + "type": "image_footnote", + "bbox": [ + 0.627, + 0.157, + 0.708, + 0.166 + ], + "angle": 0, + "content": "C:3 D:0" + }, + { + "type": "image_footnote", + "bbox": [ + 0.627, + 0.168, + 0.657, + 0.175 + ], + "angle": 0, + "content": "Ans: D" + }, + { + "type": "list", + "bbox": [ + 0.627, + 0.149, + 0.761, + 0.175 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.353, + 0.195, + 0.644, + 0.21 + ], + "angle": 0, + "content": "Figure 14: Cases for Color Mimicry Task." + }, + { + "type": "image_caption", + "bbox": [ + 0.211, + 0.244, + 0.321, + 0.258 + ], + "angle": 0, + "content": "Color Blindness" + }, + { + "type": "image", + "bbox": [ + 0.215, + 0.262, + 0.311, + 0.336 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.323, + 0.263, + 0.472, + 0.272 + ], + "angle": 0, + "content": "There are two strings in the image." 
+ }, + { + "type": "image_footnote", + "bbox": [ + 0.323, + 0.275, + 0.474, + 0.284 + ], + "angle": 0, + "content": "What are the strings in the center of" + }, + { + "type": "image_footnote", + "bbox": [ + 0.323, + 0.287, + 0.374, + 0.296 + ], + "angle": 0, + "content": "this image?" + }, + { + "type": "list", + "bbox": [ + 0.323, + 0.263, + 0.474, + 0.296 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.323, + 0.299, + 0.415, + 0.308 + ], + "angle": 0, + "content": "A:kt B:la" + }, + { + "type": "text", + "bbox": [ + 0.323, + 0.312, + 0.413, + 0.32 + ], + "angle": 0, + "content": "C:lo D:It" + }, + { + "type": "list", + "bbox": [ + 0.323, + 0.299, + 0.415, + 0.32 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.323, + 0.325, + 0.355, + 0.333 + ], + "angle": 0, + "content": "Ans: A" + }, + { + "type": "image", + "bbox": [ + 0.516, + 0.261, + 0.613, + 0.336 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.631, + 0.263, + 0.781, + 0.272 + ], + "angle": 0, + "content": "What is the number in the center of" + }, + { + "type": "image_footnote", + "bbox": [ + 0.631, + 0.275, + 0.683, + 0.284 + ], + "angle": 0, + "content": "this image?" + }, + { + "type": "list", + "bbox": [ + 0.631, + 0.263, + 0.781, + 0.284 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.632, + 0.3, + 0.721, + 0.308 + ], + "angle": 0, + "content": "A:6 B:9" + }, + { + "type": "text", + "bbox": [ + 0.632, + 0.311, + 0.726, + 0.32 + ], + "angle": 0, + "content": "C:17 D:18" + }, + { + "type": "text", + "bbox": [ + 0.632, + 0.325, + 0.663, + 0.333 + ], + "angle": 0, + "content": "Ans: D" + }, + { + "type": "list", + "bbox": [ + 0.632, + 0.3, + 0.726, + 0.333 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.35, + 0.346, + 0.647, + 0.361 + ], + "angle": 0, + "content": "Figure 15: Cases for Color Blindness Task." 
+ }, + { + "type": "image_caption", + "bbox": [ + 0.315, + 0.391, + 0.391, + 0.403 + ], + "angle": 0, + "content": "Original Image" + }, + { + "type": "image", + "bbox": [ + 0.316, + 0.408, + 0.397, + 0.458 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.326, + 0.468, + 0.383, + 0.478 + ], + "angle": 0, + "content": "Entire Image" + }, + { + "type": "image", + "bbox": [ + 0.312, + 0.48, + 0.394, + 0.529 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.313, + 0.531, + 0.393, + 0.58 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.313, + 0.582, + 0.394, + 0.632 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.319, + 0.641, + 0.394, + 0.652 + ], + "angle": 0, + "content": "Original Image" + }, + { + "type": "image", + "bbox": [ + 0.311, + 0.654, + 0.403, + 0.701 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.326, + 0.715, + 0.383, + 0.724 + ], + "angle": 0, + "content": "Entire Image" + }, + { + "type": "image", + "bbox": [ + 0.307, + 0.727, + 0.401, + 0.775 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.307, + 0.778, + 0.401, + 0.827 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.308, + 0.829, + 0.402, + 0.877 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.443, + 0.404, + 0.603, + 0.414 + ], + "angle": 0, + "content": "Q: How many cars are in the image?" 
+ }, + { + "type": "image_footnote", + "bbox": [ + 0.495, + 0.422, + 0.629, + 0.431 + ], + "angle": 0, + "content": "A:8 B:7 C:6 D:5 E:4" + }, + { + "type": "image_caption", + "bbox": [ + 0.443, + 0.44, + 0.473, + 0.449 + ], + "angle": 0, + "content": "GT: E" + }, + { + "type": "image_caption", + "bbox": [ + 0.451, + 0.456, + 0.536, + 0.465 + ], + "angle": 0, + "content": "Recoloring Strategy" + }, + { + "type": "image_caption", + "bbox": [ + 0.452, + 0.469, + 0.535, + 0.479 + ], + "angle": 0, + "content": "Targeted Segment" + }, + { + "type": "image", + "bbox": [ + 0.452, + 0.48, + 0.534, + 0.529 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.452, + 0.531, + 0.534, + 0.58 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.452, + 0.581, + 0.534, + 0.631 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.453, + 0.652, + 0.629, + 0.662 + ], + "angle": 0, + "content": "Q: How many curtains are in the image?" 
+ }, + { + "type": "image_footnote", + "bbox": [ + 0.498, + 0.67, + 0.632, + 0.679 + ], + "angle": 0, + "content": "A:3 B:2 C:1 D:4 E:0" + }, + { + "type": "image_caption", + "bbox": [ + 0.453, + 0.688, + 0.483, + 0.696 + ], + "angle": 0, + "content": "GT: C" + }, + { + "type": "image_caption", + "bbox": [ + 0.454, + 0.704, + 0.54, + 0.713 + ], + "angle": 0, + "content": "Recoloring Strategy" + }, + { + "type": "image_caption", + "bbox": [ + 0.454, + 0.714, + 0.537, + 0.724 + ], + "angle": 0, + "content": "Targeted Segment" + }, + { + "type": "image", + "bbox": [ + 0.447, + 0.727, + 0.542, + 0.775 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.447, + 0.777, + 0.542, + 0.826 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.447, + 0.828, + 0.542, + 0.877 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.603, + 0.469, + 0.679, + 0.478 + ], + "angle": 0, + "content": "Largest Segment" + }, + { + "type": "image", + "bbox": [ + 0.6, + 0.48, + 0.683, + 0.529 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.6, + 0.531, + 0.683, + 0.58 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.6, + 0.581, + 0.682, + 0.632 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.595, + 0.715, + 0.675, + 0.724 + ], + "angle": 0, + "content": "Largest Segment" + }, + { + "type": "image", + "bbox": [ + 0.593, + 0.726, + 0.689, + 0.775 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.593, + 0.778, + 0.689, + 0.827 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.593, + 0.828, + 0.689, + 0.877 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.344, + 0.884, + 0.653, + 0.899 + ], + "angle": 0, + "content": "Figure 16: Cases for Color Robustness Task." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "18" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.173, + 0.09, + 0.406, + 0.108 + ], + "angle": 0, + "content": "E Implementation Details" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.121, + 0.827, + 0.287 + ], + "angle": 0, + "content": "To further advance our understanding of VLMs' capabilities in color perception, reasoning, and robustness dimensions, we conduct an extensive evaluation of 32 vision-language models (VLMs) spanning a range of large language model (LLM) sizes and architectures. Our evaluation includes state-of-the-art models such as GPT-4o[35], Gemini-2-flash[7], LLaVA-OV[24], LLaVA-NEXT [31], Cambrian[42], InternVL2[5], InternVL2.5[5], Qwen2.5-VL[2], and Eagle[41]. GPT-4o and Gemini-2-flash are used with API calls. We further examine reasoning enhancement via chain-of-thought (CoT) prompting [44], applying it to GPT-4o and Gemini-2-Flash to evaluate how intermediate reasoning steps influence color understanding. Additionally, we include the most recent GPT-o3 on perception and reasoning tasks, which is the most powerful model with a long internal chain-of-thought process. This selection covers a diverse set of architectures, including both proprietary and open-source models, enabling a comprehensive assessment of their reasoning capabilities under different computational constraints." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.293, + 0.828, + 0.351 + ], + "angle": 0, + "content": "To ensure a fair comparison, we standardize our experimental setup across models. Open-source models with fewer than 70B parameters are evaluated using a single NVIDIA A100 80GB GPU, while larger models require four NVIDIA A100 80GB GPUs to accommodate their increased memory and computational demands." 
+ }, + { + "type": "title", + "bbox": [ + 0.172, + 0.367, + 0.378, + 0.385 + ], + "angle": 0, + "content": "F Evaluation Prompts" + }, + { + "type": "text", + "bbox": [ + 0.18, + 0.408, + 0.819, + 0.452 + ], + "angle": 0, + "content": "Instruction Prompt You'll be given an image, an instruction and some options. You have to select the correct one. Do not explain your reasoning. Answer with only the letter that corresponds to the correct option. Do not repeat the entire answer." + }, + { + "type": "text", + "bbox": [ + 0.18, + 0.469, + 0.819, + 0.527 + ], + "angle": 0, + "content": "CoT Instruction Prompt You'll be given an image, an instruction and some options. You have to select the correct one. Think step by step before answering. Then conclude with the letter that corresponds to the correct option. Make sure the option letter is in the parentheses like (X). Do not include ( or ) in the response except for the answer." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.548, + 0.373, + 0.565 + ], + "angle": 0, + "content": "G Human Evaluation" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.578, + 0.827, + 0.678 + ], + "angle": 0, + "content": "To assess the degree of alignment between VLMs and human color understanding, we selected a representative subset of COLORBENCH, focusing specifically on color perception and reasoning tasks. The Color Extraction task was excluded from human annotation, as humans are generally not sensitive to fine-grained differences in color codes. Three human participants were recruited, each tasked with completing 50 samples per category. All evaluators responded to the full set of multiple-choice and judgment-oriented questions. We then gathered all responses and conducted statistical analysis on the collected human evaluations." 
+ }, + { + "type": "title", + "bbox": [ + 0.172, + 0.695, + 0.557, + 0.712 + ], + "angle": 0, + "content": "H Reasoning Models with Thinking Process" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.725, + 0.827, + 0.81 + ], + "angle": 0, + "content": "To comprehensively assess the performance of VLMs with the thinking process on COLORBENCH, except for proprietary models with chain-of-thought(CoT) prompt, we additionally conduct experiments with GPT-o3 on perception and reasoning tasks. GPT-o3 is the most recent powerful proprietary VLMs that is trained to think before answering with reinforcement learning. We use the API version of GPT-o3 (2025-04-16) for evaluation. The result is shown in Table 8, together with results of CoT prompting and human evaluation." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.815, + 0.828, + 0.913 + ], + "angle": 0, + "content": "The results presented in Table 8 indicate that human evaluators achieve the highest performance across the majority of tasks, except for three specific categories: Object Recognition (O'Recog), Color Proportion (C'Prop), and Color Comparison (C'Comp), where GPT-o3 holds the highest scores. The performance differences between GPT-o3 and human evaluators on O'Recog and C'Comp tasks are relatively minor (less than \\(3\\%\\)). However, GPT-o3 substantially outperforms both humans and other VLMs on the C'Prop task, with an advantage exceeding \\(12\\%\\). This significant gap on C'Prop aligns with expectations, as humans generally exhibit lower sensitivity to precise quantitative measures." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "19" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.171, + 0.092, + 0.825, + 0.121 + ], + "angle": 0, + "content": "Meanwhile, GPT-o3 benefits from including the capability to utilize analytical tools for precise image assessments and continuous exhaustive visual search [26] to obtain better proportion estimations." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.127, + 0.827, + 0.239 + ], + "angle": 0, + "content": "On the remaining tasks, GPT-o3 consistently outperforms GPT-4o (CoT) and Gemini-2-flash (CoT), except for the Color Blindness (C'Blind) task, where GPT-o3 trails GPT-4o (CoT) by \\(3.7\\%\\). The C'Blind task requires VLMs to accurately identify numbers or strings in an image that is composed of colored dots. This task demands capabilities of precise color recognition combined with a holistic spatial perception. One plausible reason for GPT-o3's inferior performance is its longer and more complex reasoning path, which may lead to overthinking. This might cause the model to focus too much on local details or choices of tool, at the expense of the global and intuitive perception needed for this task." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.243, + 0.828, + 0.3 + ], + "angle": 0, + "content": "Overall, these findings highlight the relative strengths and weaknesses of current advanced VLMs compared to human evaluators. Importantly, there remains substantial room for improvement in VLM capabilities, as significant performance gaps persist between VLMs and humans, particularly in reasoning-intensive tasks." + }, + { + "type": "table_caption", + "bbox": [ + 0.17, + 0.321, + 0.829, + 0.39 + ], + "angle": 0, + "content": "Table 8: Performance of proprietary reasoning models with thinking processes on Color Perception and Reasoning Tasks. 
Models are ranked based on their overall performance on color perception and reasoning (P & R Overall) tasks. The best-performing model within the VLM group is highlighted in bold. For human evaluation, any instance that exceeds the performance of all VLMs is also highlighted in bold." + }, + { + "type": "table", + "bbox": [ + 0.174, + 0.39, + 0.825, + 0.489 + ], + "angle": 0, + "content": "
Color PerceptionColor ReasoningP & R
C'RecogC'ExtractO'RecogC'PropC'CompC'CountO'CountC'IlluC'MimicC'BlindOverall
VLMs: Proprietary
GPT-4o (CoT)77.655.283.144.471.326.533.044.177.166.857.4
Gemini-2-flash (CoT)82.956.288.358.068.343.138.840.975.760.059.6
GPT-o3 (API)84.257.292.271.682.246.145.658.180.063.166.4
Human Evaluation
Human Evaluation92.0-90.159.679.862.081.363.083.894.0-
" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.516, + 0.513, + 0.534 + ], + "angle": 0, + "content": "I Qualitative Analysis of Failure Cases" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.547, + 0.825, + 0.618 + ], + "angle": 0, + "content": "To gain deeper insights into VLM failures on color-related tasks, we conduct a detailed case analysis using Qwen2.5-VL-3B and 7B models on different tasks. Following the attention visualization methodology of Zhang et al. [49], we focus on instances where the 3B model fails but the 7B model succeeds, allowing a clearer examination of the underlying capability differences. The visualizations of attention maps are shown in Figure 17 to 25." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.623, + 0.827, + 0.708 + ], + "angle": 0, + "content": "For Color Perception tasks, we analyze the Color Recognition and Object Recognition tasks (excluding Color Extraction, which contains single-color color images). Our preliminary findings show that only a small number of failures arise from incorrect object localization. In most cases, both models correctly attend to the relevant regions but still produce incorrect predictions. This indicates that VLMs cannot accurately interpret color information, rather than deficiencies in visual grounding for these basic perception tasks." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.712, + 0.827, + 0.825 + ], + "angle": 0, + "content": "For Color Reasoning tasks, tasks such as Color Proportion, Color Comparison, Color Counting, and Color Illusion require integrating visual information across the entire image without a clear focus point. Attention maps show that both 3B and 7B models exhibit similar focus patterns but generate different answers, implying that the divergence mainly originates from the language reasoning component rather than the visual encoder. 
For tasks with explicit perception targets, including Object Counting, Color Mimicry, and Color Blindness, both models attend to the correct regions, yet the 3B model often fails to produce accurate predictions. These results reveal that current VLMs remain weak in color interpretability even when their attention is properly aligned." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.51, + 0.949 + ], + "angle": 0, + "content": "20" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.347, + 0.096, + 0.461, + 0.185 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.466, + 0.102, + 0.632, + 0.111 + ], + "angle": 0, + "content": "What is the color of the banana in this" + }, + { + "type": "text", + "bbox": [ + 0.466, + 0.115, + 0.498, + 0.124 + ], + "angle": 0, + "content": "image?" + }, + { + "type": "text", + "bbox": [ + 0.466, + 0.128, + 0.498, + 0.136 + ], + "angle": 0, + "content": "A: Red" + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.128, + 0.571, + 0.136 + ], + "angle": 0, + "content": "B:Green" + }, + { + "type": "text", + "bbox": [ + 0.466, + 0.14, + 0.506, + 0.148 + ], + "angle": 0, + "content": "C:Yellow" + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.14, + 0.568, + 0.148 + ], + "angle": 0, + "content": "D: Black" + }, + { + "type": "text", + "bbox": [ + 0.466, + 0.153, + 0.554, + 0.161 + ], + "angle": 0, + "content": "E: None of the above" + }, + { + "type": "text", + "bbox": [ + 0.466, + 0.165, + 0.498, + 0.173 + ], + "angle": 0, + "content": "Ans: E" + }, + { + "type": "image", + "bbox": [ + 0.272, + 0.189, + 0.728, + 0.339 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.266, + 0.346, + 0.731, + 0.362 + ], + "angle": 0, + "content": "Figure 17: Visualized Attention Maps for Color Recognition Tasks." 
+ }, + { + "type": "image", + "bbox": [ + 0.315, + 0.381, + 0.491, + 0.472 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.493, + 0.389, + 0.647, + 0.398 + ], + "angle": 0, + "content": "What object has green color in this" + }, + { + "type": "text", + "bbox": [ + 0.495, + 0.402, + 0.53, + 0.412 + ], + "angle": 0, + "content": "image?" + }, + { + "type": "text", + "bbox": [ + 0.495, + 0.415, + 0.532, + 0.423 + ], + "angle": 0, + "content": "A: Grass" + }, + { + "type": "text", + "bbox": [ + 0.562, + 0.414, + 0.603, + 0.422 + ], + "angle": 0, + "content": "B:Flower" + }, + { + "type": "text", + "bbox": [ + 0.495, + 0.427, + 0.526, + 0.435 + ], + "angle": 0, + "content": "C:Leaf" + }, + { + "type": "text", + "bbox": [ + 0.562, + 0.427, + 0.594, + 0.435 + ], + "angle": 0, + "content": "D: Fruit" + }, + { + "type": "text", + "bbox": [ + 0.495, + 0.44, + 0.526, + 0.448 + ], + "angle": 0, + "content": "Ans: C" + }, + { + "type": "image", + "bbox": [ + 0.272, + 0.476, + 0.728, + 0.593 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.263, + 0.601, + 0.734, + 0.617 + ], + "angle": 0, + "content": "Figure 18: Visualized Attention Maps for Object Recognition Tasks." + }, + { + "type": "image", + "bbox": [ + 0.35, + 0.642, + 0.452, + 0.722 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.469, + 0.644, + 0.62, + 0.653 + ], + "angle": 0, + "content": "What color in the pie chart has the" + }, + { + "type": "image_caption", + "bbox": [ + 0.47, + 0.656, + 0.589, + 0.665 + ], + "angle": 0, + "content": "proportion closest to \\(25\\%\\)?" 
+ }, + { + "type": "text", + "bbox": [ + 0.47, + 0.669, + 0.575, + 0.678 + ], + "angle": 0, + "content": "A: Light blue B:Green" + }, + { + "type": "text", + "bbox": [ + 0.47, + 0.682, + 0.57, + 0.691 + ], + "angle": 0, + "content": "C: Purple D:Cyan" + }, + { + "type": "text", + "bbox": [ + 0.47, + 0.695, + 0.502, + 0.702 + ], + "angle": 0, + "content": "Ans: A" + }, + { + "type": "image", + "bbox": [ + 0.272, + 0.731, + 0.728, + 0.879 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.269, + 0.888, + 0.727, + 0.904 + ], + "angle": 0, + "content": "Figure 19: Visualized Attention Maps for Color Proportion Tasks." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.507, + 0.948 + ], + "angle": 0, + "content": "21" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.329, + 0.095, + 0.472, + 0.187 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.478, + 0.11, + 0.662, + 0.12 + ], + "angle": 0, + "content": "Which lipstick in this image is the darkest" + }, + { + "type": "text", + "bbox": [ + 0.479, + 0.123, + 0.51, + 0.132 + ], + "angle": 0, + "content": "color?" 
+ }, + { + "type": "text", + "bbox": [ + 0.479, + 0.135, + 0.515, + 0.144 + ], + "angle": 0, + "content": "A:ACAI" + }, + { + "type": "text", + "bbox": [ + 0.577, + 0.136, + 0.632, + 0.144 + ], + "angle": 0, + "content": "B: SANGRIA" + }, + { + "type": "text", + "bbox": [ + 0.479, + 0.148, + 0.555, + 0.157 + ], + "angle": 0, + "content": "C:PASSION RED" + }, + { + "type": "text", + "bbox": [ + 0.577, + 0.148, + 0.637, + 0.157 + ], + "angle": 0, + "content": "D: PINK CLAY" + }, + { + "type": "text", + "bbox": [ + 0.479, + 0.161, + 0.512, + 0.17 + ], + "angle": 0, + "content": "Ans: A" + }, + { + "type": "image_caption", + "bbox": [ + 0.463, + 0.203, + 0.536, + 0.212 + ], + "angle": 0, + "content": "Qwen2.5-VL-3B" + }, + { + "type": "image", + "bbox": [ + 0.272, + 0.214, + 0.345, + 0.261 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.348, + 0.214, + 0.421, + 0.261 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.424, + 0.214, + 0.497, + 0.261 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.502, + 0.214, + 0.574, + 0.261 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.578, + 0.214, + 0.65, + 0.261 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.655, + 0.214, + 0.726, + 0.261 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.463, + 0.272, + 0.536, + 0.282 + ], + "angle": 0, + "content": "Qwen2.5-VL-7B" + }, + { + "type": "image", + "bbox": [ + 0.272, + 0.284, + 0.344, + 0.33 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.347, + 0.284, + 0.42, + 0.33 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.424, + 0.284, + 0.497, + 0.33 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.502, + 0.284, + 0.574, + 0.33 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + 
"bbox": [ + 0.578, + 0.284, + 0.65, + 0.33 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.655, + 0.284, + 0.726, + 0.33 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.265, + 0.34, + 0.733, + 0.356 + ], + "angle": 0, + "content": "Figure 20: Visualized Attention Maps for Color Comparison Tasks." + }, + { + "type": "image", + "bbox": [ + 0.342, + 0.376, + 0.461, + 0.467 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.468, + 0.392, + 0.643, + 0.401 + ], + "angle": 0, + "content": "How many colors are used for arrows in" + }, + { + "type": "text", + "bbox": [ + 0.468, + 0.403, + 0.522, + 0.413 + ], + "angle": 0, + "content": "this image?" + }, + { + "type": "text", + "bbox": [ + 0.468, + 0.416, + 0.553, + 0.426 + ], + "angle": 0, + "content": "A:6 B:7" + }, + { + "type": "text", + "bbox": [ + 0.468, + 0.429, + 0.554, + 0.438 + ], + "angle": 0, + "content": "C:8 D:9" + }, + { + "type": "text", + "bbox": [ + 0.468, + 0.441, + 0.501, + 0.45 + ], + "angle": 0, + "content": "Ans: A" + }, + { + "type": "image_caption", + "bbox": [ + 0.463, + 0.484, + 0.536, + 0.492 + ], + "angle": 0, + "content": "Owen2.5-VL-3B" + }, + { + "type": "image", + "bbox": [ + 0.272, + 0.495, + 0.344, + 0.551 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.347, + 0.495, + 0.42, + 0.551 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.424, + 0.495, + 0.497, + 0.551 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.502, + 0.495, + 0.574, + 0.551 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.578, + 0.495, + 0.65, + 0.551 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.655, + 0.495, + 0.726, + 0.551 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.463, + 0.56, + 0.536, + 0.569 + ], + "angle": 0, 
+ "content": "Qwen2.5-VL-7B" + }, + { + "type": "image", + "bbox": [ + 0.272, + 0.571, + 0.344, + 0.626 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.347, + 0.571, + 0.42, + 0.626 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.424, + 0.571, + 0.497, + 0.626 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.502, + 0.571, + 0.574, + 0.626 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.578, + 0.571, + 0.65, + 0.626 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.655, + 0.571, + 0.726, + 0.626 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.275, + 0.635, + 0.722, + 0.651 + ], + "angle": 0, + "content": "Figure 21: Visualized Attention Maps for Color Counting Tasks." + }, + { + "type": "image", + "bbox": [ + 0.315, + 0.671, + 0.492, + 0.763 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.495, + 0.682, + 0.645, + 0.692 + ], + "angle": 0, + "content": "How many gray animals are in this" + }, + { + "type": "text", + "bbox": [ + 0.495, + 0.695, + 0.53, + 0.704 + ], + "angle": 0, + "content": "image?" 
+ }, + { + "type": "text", + "bbox": [ + 0.495, + 0.707, + 0.512, + 0.716 + ], + "angle": 0, + "content": "A:5" + }, + { + "type": "text", + "bbox": [ + 0.561, + 0.708, + 0.579, + 0.716 + ], + "angle": 0, + "content": "B:6" + }, + { + "type": "text", + "bbox": [ + 0.495, + 0.719, + 0.512, + 0.728 + ], + "angle": 0, + "content": "C:4" + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.719, + 0.546, + 0.728 + ], + "angle": 0, + "content": "D:3" + }, + { + "type": "text", + "bbox": [ + 0.559, + 0.719, + 0.577, + 0.728 + ], + "angle": 0, + "content": "E:7" + }, + { + "type": "text", + "bbox": [ + 0.587, + 0.719, + 0.599, + 0.728 + ], + "angle": 0, + "content": "" + }, + { + "type": "text", + "bbox": [ + 0.495, + 0.732, + 0.527, + 0.741 + ], + "angle": 0, + "content": "Ans: C" + }, + { + "type": "image_caption", + "bbox": [ + 0.463, + 0.771, + 0.536, + 0.781 + ], + "angle": 0, + "content": "Qwen2.5-VL-3B" + }, + { + "type": "image", + "bbox": [ + 0.272, + 0.784, + 0.344, + 0.821 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.347, + 0.784, + 0.42, + 0.821 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.424, + 0.784, + 0.497, + 0.821 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.502, + 0.784, + 0.574, + 0.821 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.578, + 0.784, + 0.65, + 0.821 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.655, + 0.784, + 0.726, + 0.821 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.463, + 0.831, + 0.536, + 0.84 + ], + "angle": 0, + "content": "Qwen2.5-VL-7B" + }, + { + "type": "image", + "bbox": [ + 0.272, + 0.843, + 0.344, + 0.88 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.347, + 0.843, + 0.42, + 0.88 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.424, + 0.843, + 
0.497, + 0.88 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.502, + 0.843, + 0.574, + 0.88 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.578, + 0.843, + 0.65, + 0.88 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.655, + 0.843, + 0.726, + 0.88 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.271, + 0.888, + 0.725, + 0.904 + ], + "angle": 0, + "content": "Figure 22: Visualized Attention Maps for Object Counting Tasks." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "22" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.272, + 0.114, + 0.526, + 0.169 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.536, + 0.114, + 0.718, + 0.138 + ], + "angle": 0, + "content": "Which circles has the darkest color? The circles are numbered left to right starting" + }, + { + "type": "image_footnote", + "bbox": [ + 0.537, + 0.14, + 0.573, + 0.149 + ], + "angle": 0, + "content": "from 1." 
+ }, + { + "type": "image_footnote", + "bbox": [ + 0.537, + 0.152, + 0.603, + 0.162 + ], + "angle": 0, + "content": "A: All the same" + }, + { + "type": "image_footnote", + "bbox": [ + 0.635, + 0.152, + 0.655, + 0.161 + ], + "angle": 0, + "content": "B:1" + }, + { + "type": "image_footnote", + "bbox": [ + 0.537, + 0.165, + 0.655, + 0.174 + ], + "angle": 0, + "content": "C:2 D:3" + }, + { + "type": "list", + "bbox": [ + 0.537, + 0.152, + 0.655, + 0.174 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.537, + 0.177, + 0.572, + 0.186 + ], + "angle": 0, + "content": "Ans: A" + }, + { + "type": "image", + "bbox": [ + 0.272, + 0.202, + 0.728, + 0.274 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.281, + 0.282, + 0.718, + 0.298 + ], + "angle": 0, + "content": "Figure 23: Visualized Attention Maps for Color Illusion Tasks." + }, + { + "type": "image", + "bbox": [ + 0.299, + 0.344, + 0.508, + 0.436 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.51, + 0.353, + 0.665, + 0.375 + ], + "angle": 0, + "content": "How many black sea snakes in this images?" + }, + { + "type": "image_footnote", + "bbox": [ + 0.511, + 0.378, + 0.596, + 0.388 + ], + "angle": 0, + "content": "A:0 B:1" + }, + { + "type": "image_footnote", + "bbox": [ + 0.511, + 0.39, + 0.6, + 0.4 + ], + "angle": 0, + "content": "C:2 D:3" + }, + { + "type": "list", + "bbox": [ + 0.511, + 0.378, + 0.6, + 0.4 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.512, + 0.404, + 0.546, + 0.412 + ], + "angle": 0, + "content": "Ans: A" + }, + { + "type": "image", + "bbox": [ + 0.272, + 0.439, + 0.728, + 0.549 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.277, + 0.554, + 0.721, + 0.57 + ], + "angle": 0, + "content": "Figure 24: Visualized Attention Maps for Color Mimicry Tasks." 
+ }, + { + "type": "image", + "bbox": [ + 0.344, + 0.618, + 0.457, + 0.706 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.468, + 0.632, + 0.642, + 0.642 + ], + "angle": 0, + "content": "What is the number in the center of this" + }, + { + "type": "image_footnote", + "bbox": [ + 0.468, + 0.645, + 0.504, + 0.655 + ], + "angle": 0, + "content": "image?" + }, + { + "type": "image_footnote", + "bbox": [ + 0.468, + 0.658, + 0.554, + 0.667 + ], + "angle": 0, + "content": "A:4 B:7" + }, + { + "type": "image_footnote", + "bbox": [ + 0.468, + 0.67, + 0.559, + 0.679 + ], + "angle": 0, + "content": "C:18 D:22" + }, + { + "type": "list", + "bbox": [ + 0.468, + 0.658, + 0.559, + 0.679 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.468, + 0.683, + 0.501, + 0.692 + ], + "angle": 0, + "content": "Ans: C" + }, + { + "type": "image", + "bbox": [ + 0.272, + 0.717, + 0.728, + 0.865 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.274, + 0.874, + 0.724, + 0.89 + ], + "angle": 0, + "content": "Figure 25: Visualized Attention Maps for Color Blindness Tasks." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "23" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.172, + 0.09, + 0.452, + 0.108 + ], + "angle": 0, + "content": "J Effect of Different Modalities" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.12, + 0.828, + 0.19 + ], + "angle": 0, + "content": "To investigate the impact of color information, we compare model performance on RGB versus grayscale images, thereby isolating the role of color within the image modality. To further explore the contribution of the image modality, we also conduct experiments using textual input only (questions and answer choices), where the original input images are substituted with pure black images of identical dimensions." 
+ }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.207, + 0.826, + 0.236 + ], + "angle": 0, + "content": "Table 9: Average Accuracy (\\%) across three input settings (Text-only, Grayscale+Text, RGB+Text) on Color Perception and Reasoning tasks." + }, + { + "type": "table", + "bbox": [ + 0.174, + 0.24, + 0.825, + 0.507 + ], + "angle": 0, + "content": "
Color PerceptionColor ReasoningP & R
C'RecogC'ExtractO'RecogC'PropC'CompC'CountO'CountC'IlluC'MimicC'BlindOverall
VLMs: < 7B
Text-only29.230.631.629.635.324.520.635.541.723.429.3
Gray+Text25.933.542.729.137.123.223.342.453.723.032.1
RGB+Text55.335.763.637.342.422.526.137.550.625.037.4
VLMs: 7B - 8B
Text-only23.735.432.320.629.718.419.336.736.921.126.7
Gray+Text25.235.746.027.841.322.227.548.258.723.634.2
RGB+Text60.442.473.041.849.122.732.741.550.023.441.1
VLMs: 10B - 30B
Text-only26.933.632.825.034.726.522.338.240.018.928.9
Gray+Text26.837.946.822.546.522.430.143.060.326.035.0
RGB+Text68.441.579.743.051.325.334.433.855.426.643.2
VLMs: 30B - 70B
Text-only28.936.531.816.329.015.416.342.733.615.925.6
Gray+Text28.742.151.226.349.924.325.648.865.122.736.7
RGB+Text73.448.881.649.555.224.737.336.161.125.546.2
VLMs: > 70B
Text-only26.047.435.720.936.921.624.035.833.921.829.8
Gray+Text25.340.954.625.351.021.828.644.654.326.136.1
RGB+Text73.454.782.545.662.426.739.633.953.929.647.6
" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.52, + 0.825, + 0.577 + ], + "angle": 0, + "content": "Table 9 presents the average accuracy across models grouped by LLM size. The result demonstrates that removing the visual modality (text-only setting) leads to the lowest performance across the majority of tasks. The performance differences among the three input settings allow us to disentangle the impact of textual input, image context (excluding color), and color information itself." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.582, + 0.825, + 0.639 + ], + "angle": 0, + "content": "Notably, in tasks such as Color Recognition and Object Recognition, the performance gap between text-only and grayscale experiments is relatively small, whereas both are significantly outperformed by the RGB input setting. This suggests that color cues play a substantially more important role than either contextual visual or textual information in these tasks." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.657, + 0.553, + 0.675 + ], + "angle": 0, + "content": "K Fine-tuning Experiments on ColorBench" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.687, + 0.828, + 0.744 + ], + "angle": 0, + "content": "We conduct a series of fine-tuning experiments to investigate model adaptation on specialized color-centric tasks. These experiments leverage three synthetic datasets designed for Color Extraction, Color Illusion, and Color Blindness. Using our synthetic data generation pipeline, we curate dedicated training sets for this purpose, with sample counts summarized in Table 10." + }, + { + "type": "table_caption", + "bbox": [ + 0.232, + 0.763, + 0.764, + 0.779 + ], + "angle": 0, + "content": "Table 10: Number of synthetic samples generated for fine-tuning experiments." + }, + { + "type": "table", + "bbox": [ + 0.369, + 0.782, + 0.626, + 0.848 + ], + "angle": 0, + "content": "
Task | Number of Samples
Color Extraction | 2400
Color Illusion | 2400
Color Blindness | 2280
" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.859, + 0.825, + 0.888 + ], + "angle": 0, + "content": "To systematically assess the influence of different model components, we perform a comprehensive ablation study on Qwen2.5-VL-3B and Qwen2.5-VL-7B with the following settings:" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.898, + 0.304, + 0.913 + ], + "angle": 0, + "content": "- MLP only" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "24" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.217, + 0.092, + 0.366, + 0.106 + ], + "angle": 0, + "content": "- Vision encoder only" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.125, + 0.442, + 0.14 + ], + "angle": 0, + "content": "- MLP + Vision encoder (jointly)" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.158, + 0.358, + 0.172 + ], + "angle": 0, + "content": "- LLM (LoRA) only" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.191, + 0.377, + 0.205 + ], + "angle": 0, + "content": "- LLM (LoRA) + MLP" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.224, + 0.442, + 0.238 + ], + "angle": 0, + "content": "- LLM (LoRA) + Vision encoder" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.257, + 0.548, + 0.273 + ], + "angle": 0, + "content": "- LLM (LoRA) + MLP + Vision encoder (jointly)" + }, + { + "type": "list", + "bbox": [ + 0.217, + 0.092, + 0.548, + 0.273 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.295, + 0.825, + 0.324 + ], + "angle": 0, + "content": "For configurations involving the LLM, we adopt the LoRA approach to update a subset of its parameters, while the remaining modules are fully fine-tuned." + }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.36, + 0.825, + 0.389 + ], + "angle": 0, + "content": "Table 11: Accuracy (%) of Qwen2.5-VL (3B and 7B) under different training strategies across ColorBench tasks. 
Bold numbers indicate the best results within each model group." + }, + { + "type": "table", + "bbox": [ + 0.175, + 0.394, + 0.825, + 0.546 + ], + "angle": 0, + "content": "
Model | Trainable Modules | Color Perception | Color Reasoning | P&R
LLM (LoRA) | MLP | Vision | C'Recog | C'Extract | O'Recog | C'Prop | C'Comp | C'Count | O'Count | C'Illu | C'Mimic | C'Blind | Overall
Qwen2.5-3B | 72.4 | 38.5 | 74.0 | 43.8 | 48.5 | 22.6 | 25.2 | 43.0 | 45.7 | 24.2 | 41.1
71.1 | 53.1 | 75.3 | 50.0 | 49.5 | 22.5 | 26.2 | 45.2 | 44.3 | 25.5 | 43.6
73.7 | 53.1 | 79.2 | 46.3 | 45.5 | 29.4 | 27.2 | 48.4 | 47.1 | 25.5 | 44.4
75.0 | 56.3 | 75.3 | 47.5 | 49.5 | 28.4 | 25.2 | 46.2 | 47.1 | 28.0 | 45.2
71.1 | 75.0 | 70.1 | 45.0 | 51.5 | 26.5 | 27.2 | 45.2 | 47.1 | 27.4 | 46.2
69.7 | 77.1 | 74.0 | 40.0 | 53.5 | 23.5 | 32.0 | 51.6 | 45.7 | 37.6 | 48.8
71.1 | 75.0 | 71.4 | 46.3 | 49.5 | 25.5 | 27.2 | 49.4 | 48.6 | 31.4 | 46.7
72.4 | 75.0 | 71.4 | 45.0 | 51.5 | 24.3 | 32.0 | 46.2 | 50.0 | 28.0 | 47.1
Qwen2.5-7B | 76.3 | 49.0 | 84.4 | 47.5 | 52.5 | 19.6 | 34.0 | 44.1 | 55.7 | 28.7 | 46.2
72.4 | 42.7 | 84.4 | 42.5 | 59.4 | 20.6 | 29.1 | 45.2 | 47.1 | 28.7 | 45.2
77.6 | 59.4 | 81.8 | 47.5 | 56.4 | 25.5 | 29.1 | 51.6 | 50.0 | 35.6 | 51.2
78.9 | 61.5 | 80.5 | 41.3 | 55.4 | 20.6 | 29.1 | 47.3 | 48.6 | 30.1 | 47.7
75.0 | 78.1 | 83.1 | 51.3 | 60.4 | 21.6 | 35.0 | 52.7 | 54.3 | 35.6 | 52.4
72.4 | 82.3 | 83.1 | 51.3 | 57.4 | 19.6 | 30.1 | 51.6 | 52.9 | 33.1 | 51.2
75.0 | 83.3 | 83.1 | 45.0 | 56.4 | 15.7 | 30.1 | 53.8 | 54.3 | 33.1 | 51.5
77.6 | 82.3 | 83.1 | 50.0 | 55.5 | 23.3 | 31.1 | 52.7 | 55.7 | 33.1 | 51.7
" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.575, + 0.825, + 0.659 + ], + "angle": 0, + "content": "The evaluation results with finetuned VLMs are shown in Table 11. Overall, models that include LoRA fine-tuning on the LLM component consistently outperform those without it, exhibiting a substantial improvement in overall accuracy. Importantly, the improvements are not confined to the directly targeted tasks (Color Extraction, Color Illusion, Color Blindness). These experiments show that fine-tuning the model on part of tasks also produces notable gains on some ancillary reasoning tasks, including Color Proportion, and Color Comparison." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.665, + 0.827, + 0.737 + ], + "angle": 0, + "content": "However, the transfer of knowledge is not universally positive. Certain tasks demonstrated limited or even negative performance transfer, indicating that fine-tuning exclusively on specialized color objectives does not guarantee generalization across the full spectrum of color perception and reasoning. This finding underscores that while targeted training enhances specialized abilities, a balanced and robust performance profile necessitates the inclusion of more diverse data and training objectives." + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.77, + 0.377, + 0.785 + ], + "angle": 0, + "content": "L More Visualizations" + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.81, + 0.548, + 0.825 + ], + "angle": 0, + "content": "L.1 VLM Size & Model Performance for Each Task" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.843, + 0.827, + 0.913 + ], + "angle": 0, + "content": "Figure 26 to 35 present detailed correlations between the log-scaled sizes of VLM parameters and the performance metrics for each task of Perception and Reasoning Categories. Deeper color represents higher accuracy. Each line represents a model family with the sizes growing from small to large. 
This visualization clearly shows the correlation between performance and model size: larger models generally achieve higher performance." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.508, + 0.948 + ], + "angle": 0, + "content": "25" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.174, + 0.088, + 0.488, + 0.235 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.18, + 0.241, + 0.482, + 0.258 + ], + "angle": 0, + "content": "Figure 26: Heatmap for Color Recognition." + }, + { + "type": "image", + "bbox": [ + 0.51, + 0.088, + 0.824, + 0.235 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.521, + 0.241, + 0.813, + 0.258 + ], + "angle": 0, + "content": "Figure 27: Heatmap for Color Extraction." + }, + { + "type": "image", + "bbox": [ + 0.174, + 0.305, + 0.487, + 0.452 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.176, + 0.459, + 0.485, + 0.475 + ], + "angle": 0, + "content": "Figure 28: Heatmap for Object Recognition." + }, + { + "type": "image", + "bbox": [ + 0.511, + 0.305, + 0.824, + 0.452 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.52, + 0.459, + 0.814, + 0.475 + ], + "angle": 0, + "content": "Figure 29: Heatmap for Color Proportion." + }, + { + "type": "image", + "bbox": [ + 0.174, + 0.522, + 0.487, + 0.67 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.179, + 0.676, + 0.482, + 0.692 + ], + "angle": 0, + "content": "Figure 30: Heatmap for Color Comparison." + }, + { + "type": "image", + "bbox": [ + 0.511, + 0.522, + 0.824, + 0.67 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.525, + 0.676, + 0.808, + 0.692 + ], + "angle": 0, + "content": "Figure 31: Heatmap for Color Counting." 
+ }, + { + "type": "image", + "bbox": [ + 0.174, + 0.74, + 0.486, + 0.886 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.185, + 0.893, + 0.476, + 0.909 + ], + "angle": 0, + "content": "Figure 32: Heatmap for Object Counting." + }, + { + "type": "image", + "bbox": [ + 0.511, + 0.74, + 0.824, + 0.886 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.531, + 0.893, + 0.802, + 0.909 + ], + "angle": 0, + "content": "Figure 33: Heatmap for Color Illusion." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "26" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.174, + 0.088, + 0.488, + 0.236 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.191, + 0.242, + 0.47, + 0.258 + ], + "angle": 0, + "content": "Figure 34: Heatmap for Color Mimicry." + }, + { + "type": "image", + "bbox": [ + 0.509, + 0.088, + 0.825, + 0.236 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.524, + 0.242, + 0.81, + 0.258 + ], + "angle": 0, + "content": "Figure 35: Heatmap for Color Blindness." + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.338, + 0.553, + 0.352 + ], + "angle": 0, + "content": "L.2 Vision Size & Model Performance for Each Task" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.388, + 0.825, + 0.473 + ], + "angle": 0, + "content": "Figures 36 to 40 show detailed correlations between the log-scaled sizes of vision encoders and the performance metrics for each task in the Perception and Reasoning categories. Colors represent different model families. Models that share the same vision encoder size but have different LLM sizes are plotted as different points. 
Given that the majority of Vision-Language Models (VLMs) use a single type of vision encoder, and that the sizes of these encoders generally range from 300M to 400M, it becomes challenging to assess the scaling effects within vision encoders." + }, + { + "type": "image", + "bbox": [ + 0.172, + 0.541, + 0.487, + 0.687 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.694, + 0.49, + 0.721 + ], + "angle": 0, + "content": "Figure 36: The scatter plot for Color Recognition and Color Extraction." + }, + { + "type": "image", + "bbox": [ + 0.508, + 0.541, + 0.822, + 0.687 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.693, + 0.826, + 0.722 + ], + "angle": 0, + "content": "Figure 37: The scatter plot for Object Recognition and Color Proportion." + }, + { + "type": "image", + "bbox": [ + 0.172, + 0.727, + 0.487, + 0.873 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.88, + 0.49, + 0.909 + ], + "angle": 0, + "content": "Figure 38: The scatter plot for Color Comparison and Color Counting." + }, + { + "type": "image", + "bbox": [ + 0.508, + 0.727, + 0.822, + 0.873 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.88, + 0.826, + 0.908 + ], + "angle": 0, + "content": "Figure 39: The scatter plot for Object Counting and Color Illusion." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.508, + 0.948 + ], + "angle": 0, + "content": "27" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.172, + 0.087, + 0.49, + 0.235 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.24, + 0.489, + 0.268 + ], + "angle": 0, + "content": "Figure 40: The scatter plot for Color Mimicry and Color Blindness." 
+ }, + { + "type": "title", + "bbox": [ + 0.171, + 0.35, + 0.57, + 0.366 + ], + "angle": 0, + "content": "L.3 Performance for Each Model Family on Each Task" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.4, + 0.827, + 0.43 + ], + "angle": 0, + "content": "Figures 41 to 47 illustrate task performance across different models within the same model families. In general, models with more parameters tend to perform better on the majority of tasks." + }, + { + "type": "image", + "bbox": [ + 0.174, + 0.433, + 0.439, + 0.631 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.636, + 0.438, + 0.663 + ], + "angle": 0, + "content": "Figure 41: Performance of LLaVA-OV models." + }, + { + "type": "image", + "bbox": [ + 0.174, + 0.683, + 0.437, + 0.88 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.885, + 0.437, + 0.914 + ], + "angle": 0, + "content": "Figure 43: Performance of Cambrian models." + }, + { + "type": "image", + "bbox": [ + 0.562, + 0.44, + 0.825, + 0.631 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.56, + 0.636, + 0.828, + 0.664 + ], + "angle": 0, + "content": "Figure 42: Performance of LLaVA-NEXT models." + }, + { + "type": "image", + "bbox": [ + 0.562, + 0.683, + 0.825, + 0.88 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.56, + 0.885, + 0.827, + 0.914 + ], + "angle": 0, + "content": "Figure 44: Performance of Eagle models." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "28" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.174, + 0.086, + 0.437, + 0.284 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.29, + 0.437, + 0.318 + ], + "angle": 0, + "content": "Figure 45: Performance of InternVL2 models." 
+ }, + { + "type": "image", + "bbox": [ + 0.174, + 0.324, + 0.437, + 0.522 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.527, + 0.436, + 0.555 + ], + "angle": 0, + "content": "Figure 47: Performance of Qwen2.5 models." + }, + { + "type": "image", + "bbox": [ + 0.562, + 0.088, + 0.824, + 0.284 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.56, + 0.29, + 0.827, + 0.318 + ], + "angle": 0, + "content": "Figure 46: Performance of InternVL2.5 models." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "29" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.173, + 0.09, + 0.34, + 0.108 + ], + "angle": 0, + "content": "M Sample Cases" + }, + { + "type": "title", + "bbox": [ + 0.173, + 0.128, + 0.317, + 0.143 + ], + "angle": 0, + "content": "M.1 Effect of CoT" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.159, + 0.828, + 0.219 + ], + "angle": 0, + "content": "In this section, we present cases in which the answers are influenced by adding reasoning steps for each task. For most of the tasks in COLORBENCH, adding reasoning steps can significantly improve model performance. Sample cases for the Perception and Reasoning categories are shown in Figure 48 to Figure 57. A case for the Robustness category is shown in Figure 58." + }, + { + "type": "title", + "bbox": [ + 0.266, + 0.238, + 0.396, + 0.253 + ], + "angle": 0, + "content": "Color Recognition" + }, + { + "type": "image", + "bbox": [ + 0.194, + 0.264, + 0.284, + 0.317 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.297, + 0.265, + 0.474, + 0.274 + ], + "angle": 0, + "content": "What color does not exist in this image?" 
+ }, + { + "type": "text", + "bbox": [ + 0.298, + 0.277, + 0.419, + 0.285 + ], + "angle": 0, + "content": "A:White B:Purple C:Yellow" + }, + { + "type": "text", + "bbox": [ + 0.298, + 0.29, + 0.393, + 0.298 + ], + "angle": 0, + "content": "D:Red E:No answer" + }, + { + "type": "text", + "bbox": [ + 0.298, + 0.302, + 0.329, + 0.31 + ], + "angle": 0, + "content": "Ans: E" + }, + { + "type": "text", + "bbox": [ + 0.287, + 0.323, + 0.375, + 0.331 + ], + "angle": 0, + "content": "Answer Before CoT:" + }, + { + "type": "text", + "bbox": [ + 0.307, + 0.341, + 0.354, + 0.35 + ], + "angle": 0, + "content": "GPT-4o: A" + }, + { + "type": "text", + "bbox": [ + 0.29, + 0.36, + 0.371, + 0.369 + ], + "angle": 0, + "content": "Answer After CoT:" + }, + { + "type": "title", + "bbox": [ + 0.182, + 0.379, + 0.215, + 0.387 + ], + "angle": 0, + "content": "GPT-4o:" + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.388, + 0.466, + 0.403 + ], + "angle": 0, + "content": "Let's analyze the image step by step to determine which color does not exist:" + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.404, + 0.375, + 0.413 + ], + "angle": 0, + "content": "1. White - The background of the image is white." + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.413, + 0.365, + 0.421 + ], + "angle": 0, + "content": "2. Purple - The first figure on the left is purple." + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.421, + 0.389, + 0.429 + ], + "angle": 0, + "content": "3. Yellow - The figure near the front is yellow." + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.43, + 0.383, + 0.438 + ], + "angle": 0, + "content": "4. Red - The figure in the front right position is red." + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.438, + 0.474, + 0.454 + ], + "angle": 0, + "content": "All the colors listed appear in the image. 
Therefore, the answer is: (E) No answer" + }, + { + "type": "list", + "bbox": [ + 0.182, + 0.404, + 0.474, + 0.454 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.468, + 0.492, + 0.496 + ], + "angle": 0, + "content": "Figure 48: Case with CoT for Color Recognition task." + }, + { + "type": "title", + "bbox": [ + 0.607, + 0.238, + 0.728, + 0.251 + ], + "angle": 0, + "content": "Color Extraction" + }, + { + "type": "image", + "bbox": [ + 0.533, + 0.257, + 0.611, + 0.32 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.634, + 0.264, + 0.81, + 0.287 + ], + "angle": 0, + "content": "What is the HEX value of the given color in the image?" + }, + { + "type": "text", + "bbox": [ + 0.634, + 0.288, + 0.74, + 0.3 + ], + "angle": 0, + "content": "A: #AD2B86 B: #AD3087" + }, + { + "type": "text", + "bbox": [ + 0.634, + 0.301, + 0.737, + 0.311 + ], + "angle": 0, + "content": "C:#AD2D8B D:#050405" + }, + { + "type": "list", + "bbox": [ + 0.634, + 0.288, + 0.74, + 0.311 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.778, + 0.301, + 0.81, + 0.31 + ], + "angle": 0, + "content": "Ans: A" + }, + { + "type": "text", + "bbox": [ + 0.623, + 0.323, + 0.711, + 0.331 + ], + "angle": 0, + "content": "Answer Before CoT:" + }, + { + "type": "text", + "bbox": [ + 0.644, + 0.342, + 0.691, + 0.351 + ], + "angle": 0, + "content": "GPT-4o: B" + }, + { + "type": "text", + "bbox": [ + 0.626, + 0.36, + 0.706, + 0.369 + ], + "angle": 0, + "content": "Answer After CoT:" + }, + { + "type": "title", + "bbox": [ + 0.518, + 0.379, + 0.551, + 0.387 + ], + "angle": 0, + "content": "GPT-4o:" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.388, + 0.81, + 0.405 + ], + "angle": 0, + "content": "To determine the HEX value of the given color, I'll analyze the image and compare it to the provided options:" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.405, + 0.747, + 0.413 + ], + "angle": 0, + 
"content": "1. The color appears to be a shade of magenta or purple." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.413, + 0.679, + 0.421 + ], + "angle": 0, + "content": "2. I will now compare it with the options:" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.421, + 0.578, + 0.428 + ], + "angle": 0, + "content": "-A#AD2B86" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.429, + 0.579, + 0.436 + ], + "angle": 0, + "content": "-(B)#AD3087" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.436, + 0.579, + 0.443 + ], + "angle": 0, + "content": "(C)#AD2D8B " + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.443, + 0.576, + 0.452 + ], + "angle": 0, + "content": "(D) #050405" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.452, + 0.679, + 0.46 + ], + "angle": 0, + "content": "The color matches option (A) #AD2B86" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.46, + 0.663, + 0.468 + ], + "angle": 0, + "content": "Therefore, the correct answer is (A)." + }, + { + "type": "list", + "bbox": [ + 0.518, + 0.388, + 0.81, + 0.468 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.483, + 0.826, + 0.524 + ], + "angle": 0, + "content": "Figure 49: Case with CoT for Color Extraction task. Option backgrounds correspond to their color codes." + }, + { + "type": "title", + "bbox": [ + 0.262, + 0.55, + 0.399, + 0.565 + ], + "angle": 0, + "content": "Object Recognition" + }, + { + "type": "image", + "bbox": [ + 0.195, + 0.57, + 0.274, + 0.631 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.298, + 0.576, + 0.458, + 0.585 + ], + "angle": 0, + "content": "Which object has a color of yellow in" + }, + { + "type": "text", + "bbox": [ + 0.298, + 0.589, + 0.351, + 0.598 + ], + "angle": 0, + "content": "this image?" 
+ }, + { + "type": "text", + "bbox": [ + 0.298, + 0.601, + 0.411, + 0.61 + ], + "angle": 0, + "content": "A:Apple B:Snowman" + }, + { + "type": "text", + "bbox": [ + 0.298, + 0.613, + 0.409, + 0.622 + ], + "angle": 0, + "content": "C: Cucumber D: Umbrella" + }, + { + "type": "text", + "bbox": [ + 0.441, + 0.614, + 0.473, + 0.622 + ], + "angle": 0, + "content": "Ans: A" + }, + { + "type": "text", + "bbox": [ + 0.286, + 0.634, + 0.375, + 0.643 + ], + "angle": 0, + "content": "Answer Before CoT:" + }, + { + "type": "text", + "bbox": [ + 0.308, + 0.654, + 0.353, + 0.662 + ], + "angle": 0, + "content": "Gemini: D" + }, + { + "type": "text", + "bbox": [ + 0.29, + 0.672, + 0.371, + 0.68 + ], + "angle": 0, + "content": "Answer After CoT:" + }, + { + "type": "title", + "bbox": [ + 0.182, + 0.691, + 0.217, + 0.698 + ], + "angle": 0, + "content": "Gemini:" + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.699, + 0.279, + 0.707 + ], + "angle": 0, + "content": "Let's analyze the image." + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.708, + 0.424, + 0.716 + ], + "angle": 0, + "content": "(A) The apple has multiple colors like red, green, and yellow." + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.717, + 0.289, + 0.724 + ], + "angle": 0, + "content": "(B) The snowman is white." + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.724, + 0.289, + 0.731 + ], + "angle": 0, + "content": "(C) The cucumber is green." + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.732, + 0.329, + 0.74 + ], + "angle": 0, + "content": "(D) The umbrella has multiple colors." + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.741, + 0.454, + 0.748 + ], + "angle": 0, + "content": "Therefore, the correct option is apple, since it can have yellow color." 
+ }, + { + "type": "text", + "bbox": [ + 0.182, + 0.749, + 0.196, + 0.757 + ], + "angle": 0, + "content": "(A)" + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.77, + 0.49, + 0.798 + ], + "angle": 0, + "content": "Figure 50: Case with CoT for Object Recognition task." + }, + { + "type": "title", + "bbox": [ + 0.606, + 0.551, + 0.728, + 0.565 + ], + "angle": 0, + "content": "Color Proportion" + }, + { + "type": "image", + "bbox": [ + 0.521, + 0.571, + 0.558, + 0.6 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.565, + 0.571, + 0.627, + 0.602 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.634, + 0.576, + 0.805, + 0.598 + ], + "angle": 0, + "content": "What is closest to the proportion of the color white in the flag?" + }, + { + "type": "text", + "bbox": [ + 0.634, + 0.601, + 0.715, + 0.61 + ], + "angle": 0, + "content": "A:5% B:10%" + }, + { + "type": "text", + "bbox": [ + 0.634, + 0.613, + 0.716, + 0.622 + ], + "angle": 0, + "content": "C:15% D:20%" + }, + { + "type": "text", + "bbox": [ + 0.779, + 0.614, + 0.81, + 0.622 + ], + "angle": 0, + "content": "Ans: D" + }, + { + "type": "text", + "bbox": [ + 0.623, + 0.635, + 0.711, + 0.643 + ], + "angle": 0, + "content": "Answer Before CoT:" + }, + { + "type": "text", + "bbox": [ + 0.645, + 0.654, + 0.689, + 0.661 + ], + "angle": 0, + "content": "GPT-4o: C" + }, + { + "type": "text", + "bbox": [ + 0.627, + 0.672, + 0.707, + 0.68 + ], + "angle": 0, + "content": "Answer After CoT:" + }, + { + "type": "title", + "bbox": [ + 0.518, + 0.691, + 0.551, + 0.7 + ], + "angle": 0, + "content": "GPT-4o:" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.7, + 0.719, + 0.708 + ], + "angle": 0, + "content": "To find the proportion of the color white in the flag:" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.708, + 0.801, + 0.716 + ], + "angle": 0, + "content": "1. Analyze the Flag's Design: The flag has a white Scandinavian cross." 
+ }, + { + "type": "text", + "bbox": [ + 0.518, + 0.717, + 0.72, + 0.724 + ], + "angle": 0, + "content": "This cross consists of a vertical and horizontal bar." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.725, + 0.801, + 0.749 + ], + "angle": 0, + "content": "2. Estimate the Total Area of the Flag: Assume the flag is divided into a grid of even squares, which makes it easier to estimate the area proportions visually." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.75, + 0.801, + 0.784 + ], + "angle": 0, + "content": "3. Estimate the Area of the White Cross: The white cross occupies approximately one-fifth in width and one-third in height of the flag when considering both the horizontal and vertical bars. Since both bars overlap in the middle, the white area is slightly less than" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.784, + 0.589, + 0.791 + ], + "angle": 0, + "content": "a simple addition." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.792, + 0.666, + 0.799 + ], + "angle": 0, + "content": "4. Calculate Approximate Proportion:" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.8, + 0.814, + 0.816 + ], + "angle": 0, + "content": "- The horizontal and vertical bars together approximately cover the central columns and rows. Since they overlap, a simple" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.817, + 0.794, + 0.832 + ], + "angle": 0, + "content": "estimation shows the white area as less than a third of the flag's total area." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.833, + 0.799, + 0.85 + ], + "angle": 0, + "content": "- A reasonable estimation of the white area compared to the red areas suggests a proportion closer to \\(20\\%\\)." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.85, + 0.813, + 0.866 + ], + "angle": 0, + "content": "5. Conclusion: Based on the above estimations, a plausible estimation for the proportion of white in the flag is closest to \\(20\\%\\)." 
+ }, + { + "type": "text", + "bbox": [ + 0.518, + 0.867, + 0.655, + 0.875 + ], + "angle": 0, + "content": "Therefore, the answer is (D) \\(20\\%\\)." + }, + { + "type": "list", + "bbox": [ + 0.518, + 0.7, + 0.814, + 0.875 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.886, + 0.827, + 0.914 + ], + "angle": 0, + "content": "Figure 51: Case with CoT for Color Proportion task." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.51, + 0.948 + ], + "angle": 0, + "content": "30" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.264, + 0.109, + 0.397, + 0.124 + ], + "angle": 0, + "content": "Color Comparison" + }, + { + "type": "title", + "bbox": [ + 0.183, + 0.155, + 0.432, + 0.163 + ], + "angle": 0, + "content": "From left to right, which strip has the third reddest color?" + }, + { + "type": "text", + "bbox": [ + 0.183, + 0.167, + 0.249, + 0.176 + ], + "angle": 0, + "content": "A: The first one." + }, + { + "type": "text", + "bbox": [ + 0.265, + 0.167, + 0.344, + 0.175 + ], + "angle": 0, + "content": "B: The second one." + }, + { + "type": "text", + "bbox": [ + 0.183, + 0.18, + 0.252, + 0.188 + ], + "angle": 0, + "content": "C: The third one." 
+ }, + { + "type": "text", + "bbox": [ + 0.266, + 0.18, + 0.34, + 0.188 + ], + "angle": 0, + "content": "D: The fourth one" + }, + { + "type": "text", + "bbox": [ + 0.444, + 0.18, + 0.475, + 0.188 + ], + "angle": 0, + "content": "Ans: C" + }, + { + "type": "text", + "bbox": [ + 0.286, + 0.194, + 0.375, + 0.203 + ], + "angle": 0, + "content": "Answer Before CoT:" + }, + { + "type": "title", + "bbox": [ + 0.309, + 0.213, + 0.353, + 0.222 + ], + "angle": 0, + "content": "GPT-4o: A" + }, + { + "type": "title", + "bbox": [ + 0.29, + 0.231, + 0.371, + 0.24 + ], + "angle": 0, + "content": "Answer After CoT:" + }, + { + "type": "title", + "bbox": [ + 0.182, + 0.25, + 0.216, + 0.258 + ], + "angle": 0, + "content": "GPT-4o:" + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.259, + 0.464, + 0.275 + ], + "angle": 0, + "content": "To determine which strip has the third reddest color, let's analyze each strip:" + }, + { + "type": "text", + "bbox": [ + 0.183, + 0.276, + 0.385, + 0.284 + ], + "angle": 0, + "content": "1. First strip: An orange color, which has some red." + }, + { + "type": "text", + "bbox": [ + 0.183, + 0.285, + 0.406, + 0.293 + ], + "angle": 0, + "content": "2. Second strip: A cream color, which has little to no red." + }, + { + "type": "text", + "bbox": [ + 0.183, + 0.293, + 0.4, + 0.301 + ], + "angle": 0, + "content": "3. Third strip: A light beige color, with a hint of red." + }, + { + "type": "text", + "bbox": [ + 0.183, + 0.302, + 0.384, + 0.309 + ], + "angle": 0, + "content": "4. Fourth strip: A deep red color, with the most red." + }, + { + "type": "text", + "bbox": [ + 0.183, + 0.31, + 0.471, + 0.325 + ], + "angle": 0, + "content": "Based on these observations, the order of redness from most to least is: 1. Fourth strip" + }, + { + "type": "text", + "bbox": [ + 0.183, + 0.327, + 0.231, + 0.334 + ], + "angle": 0, + "content": "2. First strip" + }, + { + "type": "text", + "bbox": [ + 0.183, + 0.335, + 0.231, + 0.343 + ], + "angle": 0, + "content": "3. 
Third strip" + }, + { + "type": "text", + "bbox": [ + 0.183, + 0.343, + 0.243, + 0.351 + ], + "angle": 0, + "content": "4. Second strip" + }, + { + "type": "text", + "bbox": [ + 0.183, + 0.352, + 0.452, + 0.368 + ], + "angle": 0, + "content": "Therefore, the strip with the third reddest color is the third strip. The correct option is (C)" + }, + { + "type": "list", + "bbox": [ + 0.182, + 0.259, + 0.471, + 0.368 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.379, + 0.49, + 0.405 + ], + "angle": 0, + "content": "Figure 52: Case with CoT for Color Comparison task." + }, + { + "type": "title", + "bbox": [ + 0.272, + 0.432, + 0.39, + 0.447 + ], + "angle": 0, + "content": "Object Counting" + }, + { + "type": "image", + "bbox": [ + 0.197, + 0.451, + 0.277, + 0.513 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.298, + 0.459, + 0.475, + 0.481 + ], + "angle": 0, + "content": "How many green strawberries are on the tree in this image?" 
+ }, + { + "type": "text", + "bbox": [ + 0.299, + 0.483, + 0.41, + 0.492 + ], + "angle": 0, + "content": "A:11 B:13 C:18" + }, + { + "type": "text", + "bbox": [ + 0.299, + 0.496, + 0.371, + 0.504 + ], + "angle": 0, + "content": "D:12 E:17" + }, + { + "type": "text", + "bbox": [ + 0.443, + 0.496, + 0.475, + 0.504 + ], + "angle": 0, + "content": "Ans: D" + }, + { + "type": "text", + "bbox": [ + 0.286, + 0.516, + 0.375, + 0.525 + ], + "angle": 0, + "content": "Answer Before CoT:" + }, + { + "type": "text", + "bbox": [ + 0.308, + 0.535, + 0.352, + 0.544 + ], + "angle": 0, + "content": "GPT-4o: B" + }, + { + "type": "text", + "bbox": [ + 0.29, + 0.553, + 0.371, + 0.563 + ], + "angle": 0, + "content": "Answer After CoT:" + }, + { + "type": "title", + "bbox": [ + 0.182, + 0.573, + 0.216, + 0.58 + ], + "angle": 0, + "content": "GPT-4o:" + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.582, + 0.464, + 0.598 + ], + "angle": 0, + "content": "To determine how many green strawberries are on the tree, let's count each visible green one." + }, + { + "type": "text", + "bbox": [ + 0.183, + 0.599, + 0.465, + 0.614 + ], + "angle": 0, + "content": "1. Start from the left side of the tree and move toward the right, looking from top to bottom." + }, + { + "type": "text", + "bbox": [ + 0.183, + 0.615, + 0.424, + 0.624 + ], + "angle": 0, + "content": "2. Carefully look for strawberries that are a solid green color." + }, + { + "type": "text", + "bbox": [ + 0.183, + 0.624, + 0.424, + 0.632 + ], + "angle": 0, + "content": "By doing this, I can count the green strawberries on the tree." + }, + { + "type": "text", + "bbox": [ + 0.183, + 0.633, + 0.238, + 0.64 + ], + "angle": 0, + "content": "After counting" + }, + { + "type": "text", + "bbox": [ + 0.183, + 0.641, + 0.314, + 0.648 + ], + "angle": 0, + "content": "There are 12 green strawberries." + }, + { + "type": "text", + "bbox": [ + 0.183, + 0.649, + 0.296, + 0.657 + ], + "angle": 0, + "content": "The correct choice is (D) 12." 
+ }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.668, + 0.487, + 0.695 + ], + "angle": 0, + "content": "Figure 54: Case with CoT for Object Counting task." + }, + { + "type": "title", + "bbox": [ + 0.612, + 0.109, + 0.723, + 0.124 + ], + "angle": 0, + "content": "Color Counting" + }, + { + "type": "image", + "bbox": [ + 0.514, + 0.135, + 0.647, + 0.183 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.653, + 0.136, + 0.8, + 0.158 + ], + "angle": 0, + "content": "How many colors are there in this image?" + }, + { + "type": "text", + "bbox": [ + 0.653, + 0.16, + 0.714, + 0.169 + ], + "angle": 0, + "content": "A:5 B:6" + }, + { + "type": "text", + "bbox": [ + 0.653, + 0.174, + 0.713, + 0.182 + ], + "angle": 0, + "content": "C:7 D:8" + }, + { + "type": "text", + "bbox": [ + 0.774, + 0.174, + 0.805, + 0.182 + ], + "angle": 0, + "content": "Ans: B" + }, + { + "type": "text", + "bbox": [ + 0.623, + 0.194, + 0.711, + 0.203 + ], + "angle": 0, + "content": "Answer Before CoT:" + }, + { + "type": "text", + "bbox": [ + 0.645, + 0.213, + 0.69, + 0.222 + ], + "angle": 0, + "content": "GPT-4o: C" + }, + { + "type": "text", + "bbox": [ + 0.626, + 0.231, + 0.707, + 0.24 + ], + "angle": 0, + "content": "Answer After CoT:" + }, + { + "type": "title", + "bbox": [ + 0.518, + 0.25, + 0.551, + 0.258 + ], + "angle": 0, + "content": "GPT-4o:" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.259, + 0.813, + 0.274 + ], + "angle": 0, + "content": "To determine the number of colors in the image, let's analyze the different areas:" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.274, + 0.681, + 0.282 + ], + "angle": 0, + "content": "1. Red - used for \"Homo sapiens\" areas." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.283, + 0.726, + 0.292 + ], + "angle": 0, + "content": "2. Yellow - used for \"Homo neanderthalensis\" areas" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.293, + 0.678, + 0.3 + ], + "angle": 0, + "content": "3. 
Blue - used for the oceans and seas." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.301, + 0.762, + 0.309 + ], + "angle": 0, + "content": "4. White - used for land areas not highlighted by other colors." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.31, + 0.666, + 0.317 + ], + "angle": 0, + "content": "5. Black - used for text and numbers." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.318, + 0.732, + 0.326 + ], + "angle": 0, + "content": "6. Brown - used for arrows indicating migration paths." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.327, + 0.675, + 0.334 + ], + "angle": 0, + "content": "Thus, there are six colors in the image." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.335, + 0.666, + 0.342 + ], + "angle": 0, + "content": "Therefore, the correct option is (B) 6." + }, + { + "type": "list", + "bbox": [ + 0.518, + 0.259, + 0.813, + 0.342 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.509, + 0.353, + 0.825, + 0.38 + ], + "angle": 0, + "content": "Figure 53: Case with CoT for Color Counting task." + }, + { + "type": "title", + "bbox": [ + 0.618, + 0.432, + 0.716, + 0.445 + ], + "angle": 0, + "content": "Color Illusion" + }, + { + "type": "image", + "bbox": [ + 0.586, + 0.454, + 0.749, + 0.479 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.52, + 0.483, + 0.72, + 0.492 + ], + "angle": 0, + "content": "Does the horizontal bar have a uniform color?" 
+ }, + { + "type": "text", + "bbox": [ + 0.519, + 0.495, + 0.665, + 0.504 + ], + "angle": 0, + "content": "A:Hard to tell B:Yes C:No" + }, + { + "type": "text", + "bbox": [ + 0.768, + 0.495, + 0.8, + 0.504 + ], + "angle": 0, + "content": "Ans:B" + }, + { + "type": "text", + "bbox": [ + 0.623, + 0.516, + 0.711, + 0.525 + ], + "angle": 0, + "content": "Answer Before CoT:" + }, + { + "type": "text", + "bbox": [ + 0.645, + 0.535, + 0.69, + 0.544 + ], + "angle": 0, + "content": "GPT4o: C" + }, + { + "type": "text", + "bbox": [ + 0.626, + 0.553, + 0.707, + 0.562 + ], + "angle": 0, + "content": "Answer After CoT:" + }, + { + "type": "title", + "bbox": [ + 0.518, + 0.573, + 0.551, + 0.581 + ], + "angle": 0, + "content": "GPT40:" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.582, + 0.794, + 0.607 + ], + "angle": 0, + "content": "First, observe the horizontal bar and its surroundings. Notice that the background is a gradient transitioning from dark brown on the left to light beige on the right." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.608, + 0.811, + 0.623 + ], + "angle": 0, + "content": "Now focus on the bar itself. To determine if it has a uniform color, look for any signs of gradient or color changes within the bar." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.624, + 0.791, + 0.648 + ], + "angle": 0, + "content": "Despite the gradient background, the bar remains a consistent color throughout its length because there are no visible changes or gradients within it." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.649, + 0.715, + 0.657 + ], + "angle": 0, + "content": "Therefore, the horizontal bar has a uniform color." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.658, + 0.659, + 0.665 + ], + "angle": 0, + "content": "Thus, the correct choice is (B) Yes." + }, + { + "type": "image_caption", + "bbox": [ + 0.509, + 0.674, + 0.824, + 0.701 + ], + "angle": 0, + "content": "Figure 55: Case with CoT for Color Illusion task." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.508, + 0.948 + ], + "angle": 0, + "content": "31" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.278, + 0.109, + 0.383, + 0.124 + ], + "angle": 0, + "content": "Color Mimicry" + }, + { + "type": "image", + "bbox": [ + 0.184, + 0.129, + 0.29, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.297, + 0.135, + 0.445, + 0.146 + ], + "angle": 0, + "content": "How many leaves in this images?" + }, + { + "type": "text", + "bbox": [ + 0.297, + 0.148, + 0.316, + 0.157 + ], + "angle": 0, + "content": "A:1" + }, + { + "type": "text", + "bbox": [ + 0.34, + 0.148, + 0.359, + 0.156 + ], + "angle": 0, + "content": "B:2" + }, + { + "type": "text", + "bbox": [ + 0.341, + 0.161, + 0.359, + 0.169 + ], + "angle": 0, + "content": "D:0" + }, + { + "type": "text", + "bbox": [ + 0.434, + 0.161, + 0.465, + 0.169 + ], + "angle": 0, + "content": "Ans: D" + }, + { + "type": "text", + "bbox": [ + 0.286, + 0.194, + 0.375, + 0.203 + ], + "angle": 0, + "content": "Answer Before CoT:" + }, + { + "type": "title", + "bbox": [ + 0.309, + 0.213, + 0.352, + 0.222 + ], + "angle": 0, + "content": "GPT4o: B" + }, + { + "type": "title", + "bbox": [ + 0.29, + 0.231, + 0.371, + 0.24 + ], + "angle": 0, + "content": "Answer After CoT:" + }, + { + "type": "title", + "bbox": [ + 0.182, + 0.25, + 0.217, + 0.257 + ], + "angle": 0, + "content": "GPT40:" + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.259, + 0.376, + 0.267 + ], + "angle": 0, + "content": "To determine the number of leaves in the image:" + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.268, + 0.476, + 0.283 + ], + "angle": 0, + "content": "1. Examine the main elements visible, including the branches and objects attached." + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.285, + 0.436, + 0.299 + ], + "angle": 0, + "content": "2. Identify any elements resembling leaves in shape, texture, or placement." 
+ }, + { + "type": "text", + "bbox": [ + 0.182, + 0.301, + 0.477, + 0.317 + ], + "angle": 0, + "content": "3. Consider any object that might mimic the appearance of a leaf, such as a leaf-tailed gecko." + }, + { + "type": "list", + "bbox": [ + 0.182, + 0.259, + 0.477, + 0.317 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.318, + 0.429, + 0.334 + ], + "angle": 0, + "content": "Upon close analysis, the object resembling a leaf is actually a camouflaged animal." + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.335, + 0.394, + 0.342 + ], + "angle": 0, + "content": "Therefore, no actual leaves are present in the image." + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.343, + 0.289, + 0.35 + ], + "angle": 0, + "content": "The correct option is (D) 0." + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.364, + 0.489, + 0.392 + ], + "angle": 0, + "content": "Figure 56: Case with CoT for Color Mimicry task." + }, + { + "type": "title", + "bbox": [ + 0.607, + 0.11, + 0.72, + 0.122 + ], + "angle": 0, + "content": "Color Blindness" + }, + { + "type": "image", + "bbox": [ + 0.532, + 0.13, + 0.61, + 0.189 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.629, + 0.135, + 0.803, + 0.158 + ], + "angle": 0, + "content": "What is the number in the center of this image?" 
+ }, + { + "type": "text", + "bbox": [ + 0.63, + 0.161, + 0.678, + 0.17 + ], + "angle": 0, + "content": "A:7" + }, + { + "type": "text", + "bbox": [ + 0.63, + 0.173, + 0.678, + 0.182 + ], + "angle": 0, + "content": "C:21" + }, + { + "type": "text", + "bbox": [ + 0.774, + 0.173, + 0.805, + 0.181 + ], + "angle": 0, + "content": "Ans: D" + }, + { + "type": "text", + "bbox": [ + 0.619, + 0.193, + 0.707, + 0.203 + ], + "angle": 0, + "content": "Answer Before CoT:" + }, + { + "type": "title", + "bbox": [ + 0.641, + 0.213, + 0.685, + 0.222 + ], + "angle": 0, + "content": "GPT4o: C" + }, + { + "type": "title", + "bbox": [ + 0.623, + 0.231, + 0.703, + 0.24 + ], + "angle": 0, + "content": "Answer After CoT:" + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.25, + 0.548, + 0.257 + ], + "angle": 0, + "content": "GPT40:" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.259, + 0.8, + 0.267 + ], + "angle": 0, + "content": "To determine the number in the center of the image, follow these steps:" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.268, + 0.806, + 0.293 + ], + "angle": 0, + "content": "1. Focus on contrasting colors: The image contains a pattern of numbers made up of dots in different colors. Identify the colors forming the number." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.294, + 0.809, + 0.317 + ], + "angle": 0, + "content": "2. Recognize patterns: Pay attention to the shape formed by these colors. 3. Identify the number: Once the shape is recognized, compare it with the given options." + }, + { + "type": "list", + "bbox": [ + 0.514, + 0.259, + 0.809, + 0.317 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.318, + 0.803, + 0.334 + ], + "angle": 0, + "content": "The number in the center of the image is 57. Thus, the correct choice is: (D) 57" + }, + { + "type": "image_caption", + "bbox": [ + 0.505, + 0.355, + 0.821, + 0.382 + ], + "angle": 0, + "content": "Figure 57: Case with CoT for Color Blindness task." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "32" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.174, + 0.184, + 0.824, + 0.793 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.315, + 0.8, + 0.683, + 0.815 + ], + "angle": 0, + "content": "Figure 58: Case with CoT for Color Robustness task." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.508, + 0.948 + ], + "angle": 0, + "content": "33" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.171, + 0.092, + 0.828, + 0.176 + ], + "angle": 0, + "content": "However, for Color Recognition and Object Recognition tasks, the improvement of involving slow thinking is limited, as these two tasks heavily rely on the accurate cognition of the vision encoder. The sample cases are shown in Figure 59 and 60. For Color Illusion task, adding reasoning steps causes the model to focus more on the misleading environment and the relationship between the environment and the foreground objects. This thinking negatively influences the model performance. A sample case is shown by Figure 61." + }, + { + "type": "title", + "bbox": [ + 0.266, + 0.198, + 0.395, + 0.213 + ], + "angle": 0, + "content": "Color Recognition" + }, + { + "type": "image", + "bbox": [ + 0.184, + 0.219, + 0.289, + 0.28 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.296, + 0.225, + 0.47, + 0.247 + ], + "angle": 0, + "content": "Which color does not exist on the trunk of the tree in this image?" 
+ }, + { + "type": "text", + "bbox": [ + 0.297, + 0.25, + 0.39, + 0.259 + ], + "angle": 0, + "content": "A: Purple B: Green" + }, + { + "type": "text", + "bbox": [ + 0.298, + 0.263, + 0.335, + 0.27 + ], + "angle": 0, + "content": "C:White" + }, + { + "type": "text", + "bbox": [ + 0.35, + 0.263, + 0.39, + 0.27 + ], + "angle": 0, + "content": "D:Yellow" + }, + { + "type": "text", + "bbox": [ + 0.442, + 0.263, + 0.475, + 0.271 + ], + "angle": 0, + "content": "Ans: B" + }, + { + "type": "text", + "bbox": [ + 0.286, + 0.283, + 0.375, + 0.292 + ], + "angle": 0, + "content": "Answer Before CoT:" + }, + { + "type": "text", + "bbox": [ + 0.307, + 0.303, + 0.354, + 0.311 + ], + "angle": 0, + "content": "GPT-40: B" + }, + { + "type": "title", + "bbox": [ + 0.29, + 0.321, + 0.371, + 0.33 + ], + "angle": 0, + "content": "Answer After CoT:" + }, + { + "type": "title", + "bbox": [ + 0.182, + 0.34, + 0.216, + 0.347 + ], + "angle": 0, + "content": "GPT40:" + }, + { + "type": "text", + "bbox": [ + 0.181, + 0.349, + 0.355, + 0.357 + ], + "angle": 0, + "content": "Let's examine the colors on the tree trunks:" + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.358, + 0.321, + 0.365 + ], + "angle": 0, + "content": "- Purple: Visible on the tree trunks." + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.366, + 0.32, + 0.373 + ], + "angle": 0, + "content": "- Green: Visible on the tree trunks." + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.374, + 0.323, + 0.381 + ], + "angle": 0, + "content": "- White: Visible on the tree trunks." + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.382, + 0.32, + 0.389 + ], + "angle": 0, + "content": "- Yellow: Visible on the tree trunks." + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.391, + 0.461, + 0.415 + ], + "angle": 0, + "content": "Since all the colors are present, there seems to be an error. Based on visible presence, each color mentioned is present. If missing one is necessary, no color listed fulfills that distinction." 
+ }, + { + "type": "text", + "bbox": [ + 0.182, + 0.416, + 0.461, + 0.449 + ], + "angle": 0, + "content": "Thus, upon reevaluation, there is no correct answer because all listed colors exist on the trunks. However, for choice completion and oversight check, let's assume possibility and choose (C) White incorrectly, for inclusion error." + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.459, + 0.49, + 0.487 + ], + "angle": 0, + "content": "Figure 59: Case that CoT results in an incorrect answer for Color Recognition task." + }, + { + "type": "title", + "bbox": [ + 0.282, + 0.513, + 0.38, + 0.525 + ], + "angle": 0, + "content": "Color Illusion" + }, + { + "type": "image", + "bbox": [ + 0.255, + 0.533, + 0.406, + 0.558 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.183, + 0.559, + 0.343, + 0.567 + ], + "angle": 0, + "content": "Which circles has the darkest color?" + }, + { + "type": "text", + "bbox": [ + 0.183, + 0.571, + 0.412, + 0.58 + ], + "angle": 0, + "content": "The circles are numbered left to right starting from 1." 
+ }, + { + "type": "text", + "bbox": [ + 0.183, + 0.584, + 0.246, + 0.592 + ], + "angle": 0, + "content": "A: All the same" + }, + { + "type": "text", + "bbox": [ + 0.265, + 0.584, + 0.28, + 0.592 + ], + "angle": 0, + "content": "B:1" + }, + { + "type": "text", + "bbox": [ + 0.301, + 0.584, + 0.353, + 0.592 + ], + "angle": 0, + "content": "C:2 D:3" + }, + { + "type": "text", + "bbox": [ + 0.443, + 0.584, + 0.475, + 0.592 + ], + "angle": 0, + "content": "Ans: D" + }, + { + "type": "text", + "bbox": [ + 0.286, + 0.597, + 0.375, + 0.605 + ], + "angle": 0, + "content": "Answer Before CoT:" + }, + { + "type": "text", + "bbox": [ + 0.309, + 0.616, + 0.352, + 0.624 + ], + "angle": 0, + "content": "GPT40:D" + }, + { + "type": "title", + "bbox": [ + 0.29, + 0.635, + 0.371, + 0.643 + ], + "angle": 0, + "content": "Answer After CoT:" + }, + { + "type": "title", + "bbox": [ + 0.182, + 0.654, + 0.216, + 0.66 + ], + "angle": 0, + "content": "GPT40:" + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.662, + 0.47, + 0.686 + ], + "angle": 0, + "content": "To determine which circle has the darkest color, let's examine each one: 1. Circle 1: This circle is on the left side of the gradient and appears darker." + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.687, + 0.467, + 0.711 + ], + "angle": 0, + "content": "2. Circle 2: This circle is in the middle and appears lighter than Circle 1. \n3. Circle 3: This circle is on the right side of the gradient and appears lighter than Circle 1." + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.712, + 0.444, + 0.72 + ], + "angle": 0, + "content": "Circle 1 has the darkest color. Therefore, the correct option is (B)." + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.73, + 0.49, + 0.758 + ], + "angle": 0, + "content": "Figure 61: Case that CoT results in an incorrect answer for Color Illusion task." 
+ }, + { + "type": "title", + "bbox": [ + 0.598, + 0.198, + 0.735, + 0.213 + ], + "angle": 0, + "content": "Object Recognition" + }, + { + "type": "image", + "bbox": [ + 0.523, + 0.219, + 0.619, + 0.28 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.633, + 0.225, + 0.807, + 0.247 + ], + "angle": 0, + "content": "Which country does not have a color of yellow in this image?" + }, + { + "type": "text", + "bbox": [ + 0.635, + 0.25, + 0.731, + 0.259 + ], + "angle": 0, + "content": "A: Russia B: India" + }, + { + "type": "text", + "bbox": [ + 0.635, + 0.263, + 0.741, + 0.271 + ], + "angle": 0, + "content": "C: Korean D: Korean" + }, + { + "type": "text", + "bbox": [ + 0.779, + 0.264, + 0.81, + 0.271 + ], + "angle": 0, + "content": "Ans: A" + }, + { + "type": "text", + "bbox": [ + 0.623, + 0.283, + 0.711, + 0.292 + ], + "angle": 0, + "content": "Answer Before CoT:" + }, + { + "type": "text", + "bbox": [ + 0.646, + 0.303, + 0.689, + 0.311 + ], + "angle": 0, + "content": "GPT4o: A" + }, + { + "type": "title", + "bbox": [ + 0.627, + 0.321, + 0.706, + 0.33 + ], + "angle": 0, + "content": "Answer After CoT:" + }, + { + "type": "title", + "bbox": [ + 0.518, + 0.34, + 0.551, + 0.347 + ], + "angle": 0, + "content": "GPT40:" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.349, + 0.809, + 0.365 + ], + "angle": 0, + "content": "To determine which country does not have the color yellow in this image, let's go through the options:" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.366, + 0.684, + 0.373 + ], + "angle": 0, + "content": "- (A) Russia: The color for Russia is blue." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.374, + 0.674, + 0.382 + ], + "angle": 0, + "content": "- (B) India: The color for India is yellow." 
+ }, + { + "type": "text", + "bbox": [ + 0.518, + 0.383, + 0.788, + 0.398 + ], + "angle": 0, + "content": "- (C) Korean: This likely refers to North and South Korea, which are shown in orange and red respectively, not yellow." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.399, + 0.746, + 0.406 + ], + "angle": 0, + "content": "- (D) Thailand: The color for Thailand is shown as yellow." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.407, + 0.808, + 0.423 + ], + "angle": 0, + "content": "From this analysis, the country without yellow is Korea (North and South Korea)." + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.424, + 0.618, + 0.432 + ], + "angle": 0, + "content": "The correct option is (C)." + }, + { + "type": "list", + "bbox": [ + 0.518, + 0.349, + 0.809, + 0.432 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.509, + 0.445, + 0.827, + 0.473 + ], + "angle": 0, + "content": "Figure 60: Case that CoT results in an incorrect answer for Object Recognition task." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.508, + 0.948 + ], + "angle": 0, + "content": "34" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.173, + 0.092, + 0.355, + 0.106 + ], + "angle": 0, + "content": "M.2 Effect of Grayscale" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.205, + 0.826, + 0.233 + ], + "angle": 0, + "content": "For most of the tasks in COLORBENCH, colors are critical clues for VLMs to generate the answers. We highlight these cases in Figure 62 to 69." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.239, + 0.825, + 0.268 + ], + "angle": 0, + "content": "However, for Color Illusion and Color Mimicry tasks, color clues might mislead VLMs to wrong answers, as shown in Figure 70 and 71." 
+ }, + { + "type": "image", + "bbox": [ + 0.174, + 0.283, + 0.486, + 0.441 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.449, + 0.489, + 0.478 + ], + "angle": 0, + "content": "Figure 62: Color clues play as a critical role for Color Recognition task." + }, + { + "type": "image", + "bbox": [ + 0.511, + 0.283, + 0.824, + 0.441 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.449, + 0.825, + 0.491 + ], + "angle": 0, + "content": "Figure 63: Color clues play as a critical role for Color Extraction task. Option backgrounds correspond to their color codes." + }, + { + "type": "image", + "bbox": [ + 0.174, + 0.509, + 0.486, + 0.666 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.674, + 0.489, + 0.703 + ], + "angle": 0, + "content": "Figure 64: Color clues play as a critical role for Object Recognition task." + }, + { + "type": "image", + "bbox": [ + 0.511, + 0.509, + 0.824, + 0.666 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.674, + 0.825, + 0.703 + ], + "angle": 0, + "content": "Figure 65: Color clues play as a critical role for Color Proportion task." + }, + { + "type": "image", + "bbox": [ + 0.174, + 0.72, + 0.486, + 0.878 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.885, + 0.489, + 0.915 + ], + "angle": 0, + "content": "Figure 66: Color clues play as a critical role for Color Comparison task." + }, + { + "type": "image", + "bbox": [ + 0.511, + 0.72, + 0.824, + 0.878 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.885, + 0.825, + 0.915 + ], + "angle": 0, + "content": "Figure 67: Color clues play as a critical role for Color Counting task." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.508, + 0.948 + ], + "angle": 0, + "content": "35" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.174, + 0.102, + 0.487, + 0.26 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.267, + 0.489, + 0.297 + ], + "angle": 0, + "content": "Figure 68: Color clues play as a critical role for Object Counting task." + }, + { + "type": "image", + "bbox": [ + 0.507, + 0.102, + 0.821, + 0.26 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.504, + 0.267, + 0.822, + 0.295 + ], + "angle": 0, + "content": "Figure 69: Color clues play as a critical role for Color Blindness task." + }, + { + "type": "image", + "bbox": [ + 0.174, + 0.304, + 0.489, + 0.462 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.469, + 0.489, + 0.497 + ], + "angle": 0, + "content": "Figure 70: Color clues negatively affect VLMs prediction for Color Illusion task." + }, + { + "type": "image", + "bbox": [ + 0.507, + 0.304, + 0.821, + 0.462 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.504, + 0.469, + 0.821, + 0.498 + ], + "angle": 0, + "content": "Figure 71: Color clues negatively affect VLMs prediction for Color Mimicry task." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.513, + 0.427, + 0.527 + ], + "angle": 0, + "content": "M.3 Failure with LLM and Vision" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.539, + 0.828, + 0.637 + ], + "angle": 0, + "content": "We present a representative failure case that highlights limitations in both the vision and language components of the model. As shown in Figure 72, the model fails to correctly interpret the visual content—it misidentifies the target colors by focusing on pink and purple flowers instead of red and yellow ones, indicating a vision encoder error. 
Furthermore, the language model compounds this mistake by generating an incorrect chain-of-thought reasoning and arriving at an erroneous answer based on the wrong color categories. This example underscores the necessity of evaluating both visual perception and language reasoning when diagnosing failure modes in vision-language models." + }, + { + "type": "image", + "bbox": [ + 0.342, + 0.649, + 0.656, + 0.873 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.205, + 0.88, + 0.792, + 0.896 + ], + "angle": 0, + "content": "Figure 72: Case that model fails because of both vision encoder and language model." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "36" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.092, + 0.3, + 0.107 + ], + "angle": 0, + "content": "M.4 Easy Cases" + }, + { + "type": "text", + "bbox": [ + 0.172, + 0.245, + 0.67, + 0.26 + ], + "angle": 0, + "content": "We present samples cases that majority of VLMs reach the correct answers." + }, + { + "type": "title", + "bbox": [ + 0.266, + 0.283, + 0.396, + 0.298 + ], + "angle": 0, + "content": "Color Recognition" + }, + { + "type": "image", + "bbox": [ + 0.177, + 0.304, + 0.304, + 0.363 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.312, + 0.309, + 0.456, + 0.332 + ], + "angle": 0, + "content": "What color does not exist in this image?" 
+ }, + { + "type": "text", + "bbox": [ + 0.314, + 0.334, + 0.407, + 0.342 + ], + "angle": 0, + "content": "A:Green B:White" + }, + { + "type": "text", + "bbox": [ + 0.315, + 0.346, + 0.406, + 0.356 + ], + "angle": 0, + "content": "C:Red D:Black" + }, + { + "type": "list", + "bbox": [ + 0.314, + 0.334, + 0.407, + 0.356 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.269, + 0.367, + 0.394, + 0.377 + ], + "angle": 0, + "content": "100% (32/32) Models Correct" + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.39, + 0.492, + 0.42 + ], + "angle": 0, + "content": "Figure 73: Color Recognition case that majority of VLMs provide correct results." + }, + { + "type": "title", + "bbox": [ + 0.262, + 0.457, + 0.399, + 0.473 + ], + "angle": 0, + "content": "Object Recognition" + }, + { + "type": "image", + "bbox": [ + 0.176, + 0.477, + 0.238, + 0.508 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.24, + 0.477, + 0.305, + 0.51 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.308, + 0.484, + 0.468, + 0.505 + ], + "angle": 0, + "content": "Which object has a color of green in this image?" + }, + { + "type": "text", + "bbox": [ + 0.309, + 0.51, + 0.397, + 0.519 + ], + "angle": 0, + "content": "A:Flower B: Sky" + }, + { + "type": "text", + "bbox": [ + 0.31, + 0.521, + 0.404, + 0.53 + ], + "angle": 0, + "content": "C:Leave D:River" + }, + { + "type": "list", + "bbox": [ + 0.309, + 0.51, + 0.404, + 0.53 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.264, + 0.542, + 0.398, + 0.552 + ], + "angle": 0, + "content": "93.75% (30/32) Models Correct" + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.565, + 0.49, + 0.594 + ], + "angle": 0, + "content": "Figure 75: Object Recognition case that majority of VLMs provide correct results." 
+ }, + { + "type": "title", + "bbox": [ + 0.265, + 0.618, + 0.396, + 0.634 + ], + "angle": 0, + "content": "Color Comparison" + }, + { + "type": "image", + "bbox": [ + 0.188, + 0.638, + 0.293, + 0.7 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.308, + 0.644, + 0.48, + 0.654 + ], + "angle": 0, + "content": "Which image is cooler in overall color?" + }, + { + "type": "text", + "bbox": [ + 0.309, + 0.657, + 0.371, + 0.665 + ], + "angle": 0, + "content": "A: The left one" + }, + { + "type": "text", + "bbox": [ + 0.309, + 0.67, + 0.376, + 0.679 + ], + "angle": 0, + "content": "B: The right one" + }, + { + "type": "list", + "bbox": [ + 0.309, + 0.657, + 0.376, + 0.679 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.264, + 0.703, + 0.398, + 0.713 + ], + "angle": 0, + "content": "81.25% (26/32) Models Correct" + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.725, + 0.49, + 0.755 + ], + "angle": 0, + "content": "Figure 77: Color Comparison case that majority of VLMs provide correct results." + }, + { + "type": "title", + "bbox": [ + 0.277, + 0.778, + 0.384, + 0.794 + ], + "angle": 0, + "content": "Color Mimicry" + }, + { + "type": "image", + "bbox": [ + 0.186, + 0.798, + 0.299, + 0.856 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.308, + 0.804, + 0.451, + 0.815 + ], + "angle": 0, + "content": "How many frogs in this images?" 
+ }, + { + "type": "text", + "bbox": [ + 0.309, + 0.83, + 0.321, + 0.839 + ], + "angle": 0, + "content": "A:" + }, + { + "type": "text", + "bbox": [ + 0.346, + 0.83, + 0.364, + 0.839 + ], + "angle": 0, + "content": "B:2" + }, + { + "type": "text", + "bbox": [ + 0.31, + 0.843, + 0.328, + 0.852 + ], + "angle": 0, + "content": "C:3" + }, + { + "type": "text", + "bbox": [ + 0.346, + 0.843, + 0.365, + 0.852 + ], + "angle": 0, + "content": "D:0" + }, + { + "type": "text", + "bbox": [ + 0.445, + 0.842, + 0.477, + 0.852 + ], + "angle": 0, + "content": "Ans: A" + }, + { + "type": "text", + "bbox": [ + 0.264, + 0.863, + 0.398, + 0.873 + ], + "angle": 0, + "content": "93.75% (30/32) Models Correct" + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.885, + 0.489, + 0.915 + ], + "angle": 0, + "content": "Figure 79: Color Mimicry case that majority of VLMs provide correct results." + }, + { + "type": "title", + "bbox": [ + 0.607, + 0.283, + 0.728, + 0.297 + ], + "angle": 0, + "content": "Color Extraction" + }, + { + "type": "image", + "bbox": [ + 0.525, + 0.304, + 0.605, + 0.364 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.618, + 0.309, + 0.807, + 0.332 + ], + "angle": 0, + "content": "What is the RGB value of the given color in the image?" 
+ }, + { + "type": "text", + "bbox": [ + 0.62, + 0.334, + 0.657, + 0.343 + ], + "angle": 0, + "content": "A: [255, 0]" + }, + { + "type": "text", + "bbox": [ + 0.665, + 0.334, + 0.754, + 0.343 + ], + "angle": 0, + "content": "123] B:[255,5,134]" + }, + { + "type": "text", + "bbox": [ + 0.62, + 0.347, + 0.657, + 0.356 + ], + "angle": 0, + "content": "C: [255, C]" + }, + { + "type": "text", + "bbox": [ + 0.665, + 0.347, + 0.761, + 0.356 + ], + "angle": 0, + "content": "128] D: [130, 22, 121]" + }, + { + "type": "list", + "bbox": [ + 0.62, + 0.334, + 0.761, + 0.356 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.72, + 0.347, + 0.732, + 0.356 + ], + "angle": 0, + "content": "0,2" + }, + { + "type": "text", + "bbox": [ + 0.752, + 0.347, + 0.764, + 0.356 + ], + "angle": 0, + "content": "[1]" + }, + { + "type": "text", + "bbox": [ + 0.779, + 0.347, + 0.81, + 0.356 + ], + "angle": 0, + "content": "Ans: C" + }, + { + "type": "text", + "bbox": [ + 0.604, + 0.367, + 0.73, + 0.377 + ], + "angle": 0, + "content": "100% (32/32) Models Correct" + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.39, + 0.827, + 0.434 + ], + "angle": 0, + "content": "Figure 74: Color Extraction case that majority of VLMs provide correct results. Option backgrounds correspond to their color codes." + }, + { + "type": "title", + "bbox": [ + 0.606, + 0.457, + 0.728, + 0.473 + ], + "angle": 0, + "content": "Color Proportion" + }, + { + "type": "image", + "bbox": [ + 0.529, + 0.477, + 0.629, + 0.539 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.641, + 0.49, + 0.8, + 0.513 + ], + "angle": 0, + "content": "Which is the dominant colors in this painting?" 
+ }, + { + "type": "text", + "bbox": [ + 0.642, + 0.515, + 0.811, + 0.524 + ], + "angle": 0, + "content": "A:Warm B:Cool Ans:B" + }, + { + "type": "text", + "bbox": [ + 0.601, + 0.542, + 0.734, + 0.552 + ], + "angle": 0, + "content": "84.38% (27/32) Models Correct" + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.565, + 0.827, + 0.594 + ], + "angle": 0, + "content": "Figure 76: Color Proportion case that majority of VLMs provide correct results." + }, + { + "type": "title", + "bbox": [ + 0.608, + 0.618, + 0.728, + 0.634 + ], + "angle": 0, + "content": "Object Counting" + }, + { + "type": "image", + "bbox": [ + 0.52, + 0.638, + 0.639, + 0.7 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.645, + 0.644, + 0.804, + 0.666 + ], + "angle": 0, + "content": "How many cows have white faces in this image?" + }, + { + "type": "text", + "bbox": [ + 0.645, + 0.669, + 0.701, + 0.678 + ], + "angle": 0, + "content": "A:3 B:5" + }, + { + "type": "text", + "bbox": [ + 0.645, + 0.682, + 0.701, + 0.691 + ], + "angle": 0, + "content": "C:2 D:4" + }, + { + "type": "list", + "bbox": [ + 0.645, + 0.669, + 0.701, + 0.691 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.601, + 0.702, + 0.734, + 0.712 + ], + "angle": 0, + "content": "93.75% (30/32) Models Correct" + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.725, + 0.825, + 0.755 + ], + "angle": 0, + "content": "Figure 78: Object Counting case that majority of VLMs provide correct results." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "37" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.269, + 0.109, + 0.393, + 0.122 + ], + "angle": 0, + "content": "Color Robustness" + }, + { + "type": "image", + "bbox": [ + 0.188, + 0.129, + 0.293, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.308, + 0.135, + 0.449, + 0.157 + ], + "angle": 0, + "content": "How many surfboards are in the image?" + }, + { + "type": "text", + "bbox": [ + 0.308, + 0.16, + 0.364, + 0.17 + ], + "angle": 0, + "content": "A:0 B:1" + }, + { + "type": "text", + "bbox": [ + 0.308, + 0.172, + 0.365, + 0.182 + ], + "angle": 0, + "content": "C:3 D:2" + }, + { + "type": "text", + "bbox": [ + 0.445, + 0.173, + 0.477, + 0.182 + ], + "angle": 0, + "content": "Ans: B" + }, + { + "type": "text", + "bbox": [ + 0.232, + 0.193, + 0.429, + 0.204 + ], + "angle": 0, + "content": "96.88% (31/32) Model Predictions Unchanged" + }, + { + "type": "image", + "bbox": [ + 0.176, + 0.207, + 0.486, + 0.388 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.395, + 0.49, + 0.438 + ], + "angle": 0, + "content": "Figure 80: Color Robustness case that majority of VLMs provide unchanged results over color variations in images." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "38" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.092, + 0.323, + 0.106 + ], + "angle": 0, + "content": "M.5 Difficult Cases" + }, + { + "type": "text", + "bbox": [ + 0.172, + 0.245, + 0.681, + 0.26 + ], + "angle": 0, + "content": "We present samples cases that majority of VLMs reach the incorrect answers." 
+ }, + { + "type": "title", + "bbox": [ + 0.266, + 0.283, + 0.396, + 0.298 + ], + "angle": 0, + "content": "Color Recognition" + }, + { + "type": "image", + "bbox": [ + 0.186, + 0.303, + 0.285, + 0.365 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.313, + 0.309, + 0.471, + 0.332 + ], + "angle": 0, + "content": "What color of balloon is not present in this image?" + }, + { + "type": "text", + "bbox": [ + 0.314, + 0.334, + 0.402, + 0.343 + ], + "angle": 0, + "content": "A:Yellow B:Red" + }, + { + "type": "text", + "bbox": [ + 0.315, + 0.346, + 0.415, + 0.356 + ], + "angle": 0, + "content": "C:Green D:Orange" + }, + { + "type": "list", + "bbox": [ + 0.314, + 0.334, + 0.415, + 0.356 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.442, + 0.348, + 0.475, + 0.356 + ], + "angle": 0, + "content": "Ans: B" + }, + { + "type": "text", + "bbox": [ + 0.261, + 0.367, + 0.401, + 0.377 + ], + "angle": 0, + "content": "81.25% (26/32) Models Incorrect" + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.39, + 0.49, + 0.419 + ], + "angle": 0, + "content": "Figure 81: Color Recognition case that majority of VLMs provide incorrect results." + }, + { + "type": "title", + "bbox": [ + 0.262, + 0.457, + 0.399, + 0.472 + ], + "angle": 0, + "content": "Object Recognition" + }, + { + "type": "image", + "bbox": [ + 0.184, + 0.477, + 0.307, + 0.539 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.309, + 0.484, + 0.461, + 0.506 + ], + "angle": 0, + "content": "Which state is not light pink in this image?" 
+ }, + { + "type": "text", + "bbox": [ + 0.309, + 0.509, + 0.376, + 0.518 + ], + "angle": 0, + "content": "A:ID B:OK" + }, + { + "type": "text", + "bbox": [ + 0.309, + 0.522, + 0.381, + 0.53 + ], + "angle": 0, + "content": "C:TX D:MO" + }, + { + "type": "text", + "bbox": [ + 0.441, + 0.522, + 0.473, + 0.53 + ], + "angle": 0, + "content": "Ans: B" + }, + { + "type": "text", + "bbox": [ + 0.261, + 0.542, + 0.4, + 0.552 + ], + "angle": 0, + "content": "93.75% (30/32) Models Incorrect" + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.565, + 0.49, + 0.594 + ], + "angle": 0, + "content": "Figure 83: Object Recognition case that majority of VLMs provide incorrect results." + }, + { + "type": "title", + "bbox": [ + 0.265, + 0.618, + 0.396, + 0.633 + ], + "angle": 0, + "content": "Color Comparison" + }, + { + "type": "image", + "bbox": [ + 0.188, + 0.639, + 0.291, + 0.699 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.3, + 0.645, + 0.472, + 0.655 + ], + "angle": 0, + "content": "Which species of wood has the darkest" + }, + { + "type": "text", + "bbox": [ + 0.301, + 0.657, + 0.417, + 0.665 + ], + "angle": 0, + "content": "color overall in the image?" + }, + { + "type": "text", + "bbox": [ + 0.302, + 0.669, + 0.402, + 0.679 + ], + "angle": 0, + "content": "A: Mohogany B: Maple" + }, + { + "type": "text", + "bbox": [ + 0.302, + 0.682, + 0.478, + 0.692 + ], + "angle": 0, + "content": "C: Cherry D: Black Walnut Ans:A" + }, + { + "type": "text", + "bbox": [ + 0.261, + 0.703, + 0.401, + 0.713 + ], + "angle": 0, + "content": "93.75% (30/32) Models Incorrect" + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.725, + 0.49, + 0.754 + ], + "angle": 0, + "content": "Figure 85: Color Comparison case that majority of VLMs provide incorrect results." 
+ }, + { + "type": "title", + "bbox": [ + 0.271, + 0.779, + 0.391, + 0.794 + ], + "angle": 0, + "content": "Object Counting" + }, + { + "type": "image", + "bbox": [ + 0.175, + 0.798, + 0.323, + 0.856 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.33, + 0.805, + 0.464, + 0.815 + ], + "angle": 0, + "content": "How many people are wearing" + }, + { + "type": "text", + "bbox": [ + 0.331, + 0.817, + 0.471, + 0.826 + ], + "angle": 0, + "content": "red striped shirts in this image?" + }, + { + "type": "text", + "bbox": [ + 0.331, + 0.829, + 0.439, + 0.839 + ], + "angle": 0, + "content": "A:10 B:15 C:12" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.843, + 0.478, + 0.852 + ], + "angle": 0, + "content": "D:14 E:13 Ans:B" + }, + { + "type": "text", + "bbox": [ + 0.261, + 0.863, + 0.4, + 0.873 + ], + "angle": 0, + "content": "84.38% (27/32) Models Incorrect" + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.885, + 0.489, + 0.915 + ], + "angle": 0, + "content": "Figure 87: Object Counting case that majority of VLMs provide incorrect results." + }, + { + "type": "title", + "bbox": [ + 0.607, + 0.283, + 0.728, + 0.297 + ], + "angle": 0, + "content": "Color Extraction" + }, + { + "type": "image", + "bbox": [ + 0.522, + 0.304, + 0.6, + 0.364 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.607, + 0.309, + 0.812, + 0.332 + ], + "angle": 0, + "content": "What is the RGB value of the given color in the image?" 
+ }, + { + "type": "text", + "bbox": [ + 0.608, + 0.334, + 0.682, + 0.344 + ], + "angle": 0, + "content": "A: [121, 151, 181]" + }, + { + "type": "text", + "bbox": [ + 0.608, + 0.347, + 0.681, + 0.356 + ], + "angle": 0, + "content": "C: [123, 150, 181]" + }, + { + "type": "text", + "bbox": [ + 0.689, + 0.334, + 0.753, + 0.344 + ], + "angle": 0, + "content": "B: [55, 32, 102]" + }, + { + "type": "text", + "bbox": [ + 0.69, + 0.347, + 0.763, + 0.356 + ], + "angle": 0, + "content": "D: [119, 150, 181]" + }, + { + "type": "text", + "bbox": [ + 0.779, + 0.348, + 0.81, + 0.356 + ], + "angle": 0, + "content": "Ans: C" + }, + { + "type": "text", + "bbox": [ + 0.597, + 0.367, + 0.737, + 0.377 + ], + "angle": 0, + "content": "84.38% (27/32) Models Incorrect" + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.39, + 0.825, + 0.433 + ], + "angle": 0, + "content": "Figure 82: Color Extraction case that majority of VLMs provide incorrect results. Option backgrounds correspond to their color codes." + }, + { + "type": "title", + "bbox": [ + 0.606, + 0.457, + 0.728, + 0.472 + ], + "angle": 0, + "content": "Color Proportion" + }, + { + "type": "image", + "bbox": [ + 0.539, + 0.481, + 0.611, + 0.536 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.642, + 0.485, + 0.792, + 0.508 + ], + "angle": 0, + "content": "What color in the pie chart has the proportion closest to \\(20\\%\\)?" 
+ }, + { + "type": "text", + "bbox": [ + 0.642, + 0.511, + 0.746, + 0.52 + ], + "angle": 0, + "content": "A: dark green B: purple" + }, + { + "type": "text", + "bbox": [ + 0.643, + 0.523, + 0.71, + 0.532 + ], + "angle": 0, + "content": "C:orange" + }, + { + "type": "text", + "bbox": [ + 0.707, + 0.523, + 0.811, + 0.532 + ], + "angle": 0, + "content": "D:light pink Ans:A" + }, + { + "type": "text", + "bbox": [ + 0.597, + 0.542, + 0.737, + 0.552 + ], + "angle": 0, + "content": "87.50% (28/32) Models Incorrect" + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.565, + 0.827, + 0.594 + ], + "angle": 0, + "content": "Figure 84: Color Proportion case that majority of VLMs provide incorrect results." + }, + { + "type": "title", + "bbox": [ + 0.612, + 0.618, + 0.724, + 0.634 + ], + "angle": 0, + "content": "Color Counting" + }, + { + "type": "image", + "bbox": [ + 0.538, + 0.638, + 0.618, + 0.7 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.645, + 0.644, + 0.793, + 0.666 + ], + "angle": 0, + "content": "How many colors are there in this image?" + }, + { + "type": "text", + "bbox": [ + 0.645, + 0.669, + 0.71, + 0.678 + ], + "angle": 0, + "content": "A:10 B:11" + }, + { + "type": "text", + "bbox": [ + 0.645, + 0.682, + 0.711, + 0.691 + ], + "angle": 0, + "content": "C:12 D:13" + }, + { + "type": "text", + "bbox": [ + 0.781, + 0.682, + 0.814, + 0.691 + ], + "angle": 0, + "content": "Ans: A" + }, + { + "type": "text", + "bbox": [ + 0.597, + 0.703, + 0.737, + 0.712 + ], + "angle": 0, + "content": "81.25% (26/32) Models Incorrect" + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.725, + 0.825, + 0.754 + ], + "angle": 0, + "content": "Figure 86: Color Counting case that majority of VLMs provide incorrect results." 
+ }, + { + "type": "title", + "bbox": [ + 0.618, + 0.779, + 0.716, + 0.792 + ], + "angle": 0, + "content": "Color Illusion" + }, + { + "type": "image", + "bbox": [ + 0.596, + 0.803, + 0.64, + 0.822 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.516, + 0.823, + 0.816, + 0.846 + ], + "angle": 0, + "content": "Which circles has the darkest color? The circles are numbered left to right starting from 1." + }, + { + "type": "text", + "bbox": [ + 0.517, + 0.847, + 0.659, + 0.857 + ], + "angle": 0, + "content": "A: All the same B: 1 C: 2 D: 3" + }, + { + "type": "text", + "bbox": [ + 0.782, + 0.849, + 0.815, + 0.857 + ], + "angle": 0, + "content": "Ans: A" + }, + { + "type": "text", + "bbox": [ + 0.597, + 0.863, + 0.737, + 0.873 + ], + "angle": 0, + "content": "84.38% (27/32) Models Incorrect" + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.885, + 0.826, + 0.915 + ], + "angle": 0, + "content": "Figure 88: Color Illusion case that majority of VLMs provide incorrect results." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "39" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.277, + 0.109, + 0.383, + 0.124 + ], + "angle": 0, + "content": "Color Mimicry" + }, + { + "type": "image", + "bbox": [ + 0.189, + 0.129, + 0.296, + 0.191 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.308, + 0.135, + 0.456, + 0.146 + ], + "angle": 0, + "content": "How many leaves in this images?" 
+ }, + { + "type": "text", + "bbox": [ + 0.308, + 0.16, + 0.365, + 0.17 + ], + "angle": 0, + "content": "A:1 B:2" + }, + { + "type": "text", + "bbox": [ + 0.308, + 0.172, + 0.366, + 0.182 + ], + "angle": 0, + "content": "C:3 D:0" + }, + { + "type": "text", + "bbox": [ + 0.445, + 0.173, + 0.478, + 0.182 + ], + "angle": 0, + "content": "Ans: D" + }, + { + "type": "text", + "bbox": [ + 0.261, + 0.194, + 0.402, + 0.203 + ], + "angle": 0, + "content": "93.75% (30/32) Models Incorrect" + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.216, + 0.489, + 0.245 + ], + "angle": 0, + "content": "Figure 89: Color Mimicry case that majority of VLMs provide incorrect results." + }, + { + "type": "title", + "bbox": [ + 0.269, + 0.27, + 0.393, + 0.284 + ], + "angle": 0, + "content": "Color Robustness" + }, + { + "type": "image", + "bbox": [ + 0.185, + 0.289, + 0.291, + 0.351 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.308, + 0.296, + 0.472, + 0.306 + ], + "angle": 0, + "content": "How many oranges are in the image?" 
+ }, + { + "type": "text", + "bbox": [ + 0.308, + 0.321, + 0.365, + 0.33 + ], + "angle": 0, + "content": "A:3 B:2" + }, + { + "type": "text", + "bbox": [ + 0.308, + 0.333, + 0.365, + 0.342 + ], + "angle": 0, + "content": "C:0 D:1" + }, + { + "type": "text", + "bbox": [ + 0.442, + 0.334, + 0.475, + 0.343 + ], + "angle": 0, + "content": "Ans: D" + }, + { + "type": "text", + "bbox": [ + 0.24, + 0.354, + 0.421, + 0.364 + ], + "angle": 0, + "content": "87.5% (28/32) Model Predictions Changed" + }, + { + "type": "image", + "bbox": [ + 0.175, + 0.368, + 0.28, + 0.431 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.28, + 0.368, + 0.382, + 0.431 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.382, + 0.368, + 0.485, + 0.431 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.175, + 0.431, + 0.28, + 0.489 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.28, + 0.431, + 0.382, + 0.489 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.382, + 0.431, + 0.485, + 0.489 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.175, + 0.489, + 0.28, + 0.548 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.28, + 0.489, + 0.382, + 0.548 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.382, + 0.489, + 0.485, + 0.548 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.556, + 0.49, + 0.599 + ], + "angle": 0, + "content": "Figure 91: Color Robustness case that majority of VLMs change the answers over color variations in images." 
+ }, + { + "type": "title", + "bbox": [ + 0.611, + 0.11, + 0.724, + 0.123 + ], + "angle": 0, + "content": "Color Blindness" + }, + { + "type": "image", + "bbox": [ + 0.535, + 0.13, + 0.611, + 0.188 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.645, + 0.135, + 0.799, + 0.145 + ], + "angle": 0, + "content": "What is the number in the center of" + }, + { + "type": "text", + "bbox": [ + 0.645, + 0.147, + 0.699, + 0.157 + ], + "angle": 0, + "content": "this image?" + }, + { + "type": "text", + "bbox": [ + 0.645, + 0.16, + 0.666, + 0.169 + ], + "angle": 0, + "content": "A:2" + }, + { + "type": "text", + "bbox": [ + 0.645, + 0.172, + 0.676, + 0.182 + ], + "angle": 0, + "content": "C:22" + }, + { + "type": "text", + "bbox": [ + 0.687, + 0.173, + 0.711, + 0.182 + ], + "angle": 0, + "content": "D:26" + }, + { + "type": "text", + "bbox": [ + 0.781, + 0.173, + 0.813, + 0.182 + ], + "angle": 0, + "content": "Ans: C" + }, + { + "type": "text", + "bbox": [ + 0.597, + 0.194, + 0.737, + 0.203 + ], + "angle": 0, + "content": "87.50% (28/32) Models Incorrect" + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.216, + 0.825, + 0.245 + ], + "angle": 0, + "content": "Figure 90: Color Blindness case that majority of VLMs provide incorrect results." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "40" + } + ] +] \ No newline at end of file diff --git a/data/2025/2504_10xxx/2504.10514/3e20df2e-9239-4987-81d7-686c92a800c4_origin.pdf b/data/2025/2504_10xxx/2504.10514/3e20df2e-9239-4987-81d7-686c92a800c4_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5047547c3b05288441efc44f5afa029d63b051f1 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/3e20df2e-9239-4987-81d7-686c92a800c4_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:838e722106f3f70f492d6c45d140f29dcee0263fb1e4c40a0faf4314e78049ec +size 22902221 diff --git a/data/2025/2504_10xxx/2504.10514/full.md b/data/2025/2504_10xxx/2504.10514/full.md new file mode 100644 index 0000000000000000000000000000000000000000..39e3b6e9d7e26fe37e30141f6c81b2c3852d85a7 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/full.md @@ -0,0 +1,1815 @@ +# COLORBENCH: Can VLMs See and Understand the Colorful World? A Comprehensive Benchmark for Color Perception, Reasoning, and Robustness + +Yijun Liang\*, Ming Li\*, Chenrui Fan, Ziyue Li, Dang Nguyen, Kwesi Cobbina Shweta Bhardwaj, Jiuhai Chen, Fuxiao Liu, Tianyi Zhou + +University of Maryland, College Park + +{yliang17,minglii,tianyi}@umd.edu + +Project: https://github.com/tianyi-lab/ColorBench + +# Abstract + +Color plays an important role in human perception and usually provides critical clues in visual reasoning. However, it is unclear whether and how vision-language models (VLMs) can perceive, understand, and leverage color as humans. This paper introduces "COLORBENCH", an innovative benchmark meticulously crafted to assess the capabilities of VLMs in color understanding, including color perception, reasoning, and robustness. 
By curating a suite of diverse test scenarios grounded in real applications, COLORBENCH evaluates how these models perceive colors, infer meanings from color-based cues, and maintain consistent performance under varying color transformations. Through an extensive evaluation of 32 VLMs with varying language models and vision encoders, our paper reveals several previously unreported findings: (i) The scaling law (larger models are better) still holds on COLORBENCH, while the language model plays a more important role than the vision encoder. (ii) However, the performance gaps across models are relatively small, indicating that color understanding has been largely neglected by existing VLMs. (iii) CoT reasoning improves both accuracy and robustness in color understanding, even though these are vision-centric tasks. (iv) Color clues are indeed leveraged by VLMs on COLORBENCH, but they can also mislead models in some tasks. These findings highlight the critical limitations of current VLMs and underscore the need to enhance color comprehension. Our COLORBENCH can serve as a foundational tool for advancing the study of human-level color understanding in multimodal AI.

# 1 Introduction

Color is widely recognized as a fundamental component of human visual perception [11, 34], playing a central role and providing critical clues in object detection, scene interpretation, contextual understanding, planning, etc., across application scenarios such as scientific discovery, medical care, remote sensing, shopping, visualization, and artwork interpretation. For instance, [19] leverages spectral color signatures to distinguish vegetation health and water bodies in satellite imagery, and [1] utilizes sediment color patterns to detect marine ecosystems. These applications underscore how color-driven features play an important role in real-world scenarios.
Moreover, colors can convey affective or semantic information beyond simple recognition and naming: because colors are highly correlated with other attributes and concepts, they can provide key information to downstream tasks that do not even directly ask about colors [18, 37, 45]. As modern vision-language models (VLMs) [12, 41, 48] continue to be deployed in increasingly diverse scenarios, color, an essential visual feature, plays a growing role in their understanding and reasoning processes. It is essential to examine whether and how these models can understand and leverage color information as in human perception and reasoning, how color influences their overall perceptual and reasoning capabilities, and whether they can interpret visual illusions, resolve ambiguous cues, and maintain reliable performance under color variations.

![](images/8279797222a7f9ff129da461aa82b23fd1a408942d36c4408bd9d1f52ac16a78.jpg)
Figure 1: Test samples from COLORBENCH. COLORBENCH evaluates VLMs across three core capabilities: Perception, Reasoning and Robustness. The benchmark comprises 11 tasks designed to assess fine-grained color understanding abilities and the effect of color on other reasoning skills, including counting, proportion calculation, and robustness estimation. With over 1,400 instances, COLORBENCH covers a wide range of real-world application scenarios, including painting analysis, test kit readings, shopping, satellite/wildlife image analysis, etc.

![](images/62255370c80cc1ec826a893befaf91071bf2e821de60302188c5691ca72d3a70.jpg)

![](images/afe37da8b79d3de1c08005a13422fd9bd97e612a82e905ce643e337d2059ccb3.jpg)

However, existing benchmarks for VLMs mainly focus on tasks that may not heavily depend on color understanding or require color-centric reasoning, thereby overlooking nuanced color-related factors [25, 29].
Hence, there is a lack of benchmarks that systematically assess how well VLMs understand color when it serves as the main or distinguishing feature of a scene and as key information for a task. Moreover, robustness to variations in color, such as recoloring and shifting hues, has also been largely neglected in the LLM era [6, 8, 20]. Consequently, it remains unclear whether VLMs can perceive and reason about color with human-like proficiency and to what extent their performance deteriorates under significant color perturbations. This shortfall underscores the need for a dedicated benchmark that comprehensively probes various facets of color comprehension in VLMs. A detailed discussion of related works is provided in Appendix A.

To bridge this gap, we propose a novel benchmark, COLORBENCH, which aims to comprehensively evaluate VLMs on three core capabilities of color understanding: Color Perception, Color Reasoning, and Color Robustness. Color Perception examines VLMs' fundamental capability to correctly detect and interpret colors from inputs. Color Reasoning refers to the reasoning skills needed to draw further conclusions based on the understanding of colors from the input and prior knowledge, in which colors act as a crucial clue for formulating accurate judgments. Color Robustness assesses how consistently VLMs perform when an image's colors are altered, ensuring they maintain accurate predictions across different color variants of an image. Under these three core dimensions, 11 fine-grained tasks assessing different aspects of color understanding are formulated, as shown in Figure 1, which not only presents test examples from COLORBENCH but also illustrates potential real-world applications.

By focusing on these facets, COLORBENCH offers a granular view of VLMs' capabilities in color understanding, aiming to illuminate both their strengths and shortcomings.
We evaluate 32 widely used VLMs in our benchmark, ranging from open-source to proprietary models and from relatively small models (0.5B) to larger models (78B), and obtain several previously unreported observations.

Main Contribution. We introduce "COLORBENCH", the first dedicated benchmark for assessing the color perception, reasoning, and robustness of VLMs. We develop an evaluation suite for 11 color-centric tasks, covering diverse application scenarios and practical challenges. Moreover, we report a fine-grained empirical evaluation of 32 state-of-the-art VLMs, which exposes their limitations in color understanding and offers novel insights for future research. Our key findings are highlighted below:

1. The scaling law still holds for color understanding but is much weaker and mainly depends on the language model component. The correlation between performance and the vision encoder's size is not significant due to the limited choices in current VLMs.
2. The absolute performances of different VLMs are relatively low, and the gaps between different models (open-source vs. proprietary, small vs. large) are not large, indicating the challenges of COLORBENCH and the neglect of color understanding in existing VLMs.
3. Despite the weaknesses of VLMs in color understanding, adding reasoning steps can still improve their performance on COLORBENCH tasks, even for color robustness, which has not previously been investigated by the community.
4. Color clues are indeed leveraged to some degree by VLMs in most of the tasks in COLORBENCH. However, in the color illusion and mimicry tasks, colors might mislead VLMs into giving wrong answers, and converting colorful images into grayscale can improve accuracy.

# 2 COLORBENCH Construction

We present COLORBENCH, the first benchmark explicitly designed to comprehensively evaluate the color understanding capabilities of VLMs across three key dimensions: Color Perception, Color Reasoning, and Color Robustness.
This benchmark consists of 1,448 instances and 5,814 image-text questions spanning 11 diverse tasks. For the Color Perception and Color Reasoning categories, each instance contains an image, a question, and multiple-choice (3 to 6) options, with only one correct answer. For Color Robustness, each instance consists of 10 multiple-choice image-text questions, including a seed image and 9 edited images with color changes. Given that color is a fundamental visual feature influencing most vision-related tasks, disentangling color understanding from other general capabilities (e.g., object recognition, counting) is challenging. To address this, we design questions with explicit color constraints for the Color Perception and Reasoning dimensions, enabling a focused evaluation of VLMs' perception and reasoning abilities in relation to color.

![](images/a8629b08764230a78d2ec89a49fcfb6ca0d216b62038d6980111f243799ccd7d.jpg)
Figure 2: Statistics of 3 categories and 11 tasks in COLORBENCH.

# 2.1 Taxonomy

Motivated by the existing evaluation criteria from prior benchmarks and real-world application scenarios, we categorize the color understanding capability into 3 core dimensions and 11 detailed axes, as shown in Figure 1. The detailed question templates and sample cases are shown in Appendix D.

# 2.1.1 Color Perception

This core dimension refers to the fundamental capability to correctly detect and interpret colors from inputs. We assess this capability through 3 key aspects: i) Color Recognition, ii) Color Extraction, and iii) Object Recognition.

Color Recognition includes questions that either ask for the color of a given object or determine whether a specific color is present in the image. Color Extraction requires the model to extract the color code value (e.g., RGB, HSV, or HEX) from a given single-color image. This task measures the ability to perform fine-grained color retrieval from visual input. Object Recognition evaluates the
Object Recognition evaluates the + +model's capability to identify objects that match a specified color described in the text input. These two tasks require VLMs to be able to detect and interpret the color in either the image or text input. + +# 2.1.2 Color Reasoning + +This dimension refers to the reasoning skills to draw further conclusions based on the understanding of colors from input and prior knowledge, in which colors act as a crucial clue to formulate accurate judgments. This category encapsulates 7 key aspects: i) Color Proportion, ii) Color Comparison, iii) Color Counting, iv) Object Counting, v) Color Illusion, vii) Color Mimicry and viii) Color Blindness. + +Color Proportion tests the model's capability to estimate the relative area occupied by a specific color. Questions in this task require both color perception and proportion calculation capabilities. Color Comparison requires the model to be able to distinguish among multiple colors in the image, assessing its sensitivity to hue, saturation, and brightness differences in visual input. Color Counting focuses on identifying the number of unique colors in the image, evaluating the model's perception and differentiation of distinct color variations, and counting ability. Object Counting extends this challenge by requiring the model to count objects that match a specific color pattern. This task requires an integration of object recognition and color perception. Color Illusion questions query VLMs to compare colors in potential illusionary environments. This task evaluates the model's ability to account for color-induced optical illusions. Color Mimicry challenges the model to detect objects camouflaged within their surroundings, where color serves as a misleading factor, requiring advanced pattern recognition and contextual reasoning. These two tasks both assess the model's ability to make correct predictions under the misleading of color-related information in visual input. 
Color Blindness, inspired by Ishihara tests, assesses the model's ability to recognize numbers or text embedded in color patterns, testing its understanding of shape-color relationships. These 7 tasks comprehensively assess the model's capacity for logical reasoning, spatial awareness, and adaptive interpretation of color-based visual cues.

# 2.1.3 Color Robustness

Color Robustness assesses whether VLMs can consistently deliver accurate predictions across color variants of a given image. It involves measuring the stability of a VLM's responses when confronted with the same text input and a series of recolored images. To ensure that color does not influence the predictions, we select questions and corresponding answers that are independent of color attributes. Under these conditions, a robust model should produce unchanged predictions regardless of the recoloring manipulation. Any variation in the model's responses is then used to quantify its susceptibility to color changes, providing a direct measure of robustness.

# 2.2 Data Curation

For most of the tasks in the Color Perception and Color Reasoning categories, we rely on human experts to manually collect images from multiple online benchmarks and websites. For the Color Proportion task, to ensure the correctness of the ground truth, an additional color extraction tool is first used to obtain the color histogram of the image. Questions and options are then manually designed based on these color statistics. For the Color Extraction, Color Blindness, and Color Illusion tasks, test images are generated by corresponding code programs to ensure the controllability of the questions and answers. The detailed data sources are shown in Appendix B.

After the initial data is collected, additional filtering is conducted in a human-machine interactive process.
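The color-histogram step used to ground-truth Color Proportion can be sketched as follows. The paper does not specify which extraction tool is used, so this coarse RGB bucketing (the `color_proportions` helper and its `levels` parameter are illustrative names, not the actual pipeline) is only a minimal stand-in:

```python
from collections import Counter

def color_proportions(pixels, levels=4):
    """Bucket RGB pixels into a coarse levels^3 grid and return each
    bin's share of the image area, largest share first."""
    step = 256 // levels
    bins = Counter(
        (r // step * step, g // step * step, b // step * step)
        for (r, g, b) in pixels
    )
    total = sum(bins.values())
    # Counter.most_common() orders bins by pixel count, so the first
    # key is the dominant (bucketed) color of the image.
    return {rgb: n / total for rgb, n in bins.most_common()}

# A 3:1 red/blue image yields shares of 0.75 and 0.25, from which
# Color Proportion options and the ground-truth answer can be written.
demo = [(250, 10, 10)] * 75 + [(10, 10, 250)] * 25
props = color_proportions(demo)
```

Question options can then be derived from the top entries of the returned mapping, e.g., asking which color occupies roughly 75% of the image.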
We first conduct inference on a variety of VLMs and discard low-quality samples

![](images/fab223acc9a737e5c5aab799bb97a9cdd4f68d9665b063bd7bf99c1fcdcd44bf.jpg)
Figure 3: Generation Pipeline for Color Robustness. For each seed image, we apply 3 recoloring strategies (Entire Image, Target Segment, Largest Segment) to generate edited images. For each strategy, we change the color of the recoloring region by shifting the Hue value by $90^{\circ}$, $180^{\circ}$, or $270^{\circ}$ in the HSV color space.
This process produces 9 variations per seed image, covering different strategies and degrees of color change to enable a comprehensive robustness assessment. To ensure interpretability, human experts filter out unnatural or negligible modifications, resulting in a final selection of 493 seed images for robustness evaluation.

# 2.3 Evaluation Metrics

For Perception and Reasoning, we use accuracy as the evaluation metric, as all tasks follow a multiple-choice format. Accuracy is computed per task and per category, representing the proportion of correctly answered questions.

For Robustness, we evaluate a model's ability to maintain consistently accurate predictions under color variations. As detailed in Section 2.2, each seed image $I_{s}$ is transformed into $n$ recolored variants using recoloring strategies, while keeping the original question $q$ unchanged. A model $\mathcal{M}$ is considered robust on a seed image $I_{s}$ and corresponding question $q$ if and only if it provides a correct prediction for $I_{s}$ and remains correct on all $n$ recolored versions. To quantify robustness, we define an instance-level robustness metric $R(I_s,q)\in \{0,1\}$ and a model-level robustness metric $\operatorname{Robust}_{\mathcal{M}}\in [0,1]$ .

Instance-level Robustness. Let the recolored images be $I_1, \dots, I_n$ , and let $\mathcal{M}(I_i, q)$ denote the output of model $\mathcal{M}$ for image $I_i$ and question $q$ . Define $c(\mathcal{M}(I_i, q))$ as the model correctness: $c(\mathcal{M}(I_i, q)) = 1$ if $\mathcal{M}(I_i, q)$ is correct, otherwise 0. The instance-level robustness metric $R(I_s, q)$ for a seed image $I_s$ and question $q$ is defined as:

$$
R(I_{s}, q) = \begin{cases} 1 & \text{if } c(\mathcal{M}(I_{i}, q)) = c(\mathcal{M}(I_{s}, q)) = 1, \ \forall i \in [n] \\ 0 & \text{otherwise} \end{cases} \tag{1}
$$

Overall Robustness.
Let $\mathcal{S}$ be the set of seed images. We define model robustness to be:

$$
\operatorname{Robust}_{\mathcal{M}} = \frac{\sum_{I_{s} \in \mathcal{S}} R\left(I_{s}, q\right)}{|\mathcal{S}|}, \quad \operatorname{Robust}_{\mathcal{M}} \in [0, 1] \tag{2}
$$

$\operatorname{Robust}_{\mathcal{M}}$ represents the proportion of seed images on which the model maintains correctness across all color variations. A model is more robust when $\operatorname{Robust}_{\mathcal{M}}$ is higher.

# 3 Experimental Results

# 3.1 Main Results

Table 1 presents the performance of a wide range of VLMs, along with human evaluation results, on our COLORBENCH. Human participants achieve the highest performance on all evaluated tasks, surpassing every model. Among the models, overall accuracy generally increases with model size, with larger models tending to outperform smaller ones, and the two proprietary models, GPT-4o and Gemini-2-flash, perform the best $^2$ .

Color Perception. In Color Recognition (C'Recog), most models perform well (above $60\%$ ), indicating that this task is relatively basic for color perception. Gemini-2 with CoT obtains the

Table 1: Performance of 32 VLMs (grouped by size) and human performance on COLORBENCH. Models are ranked within each group according to their overall performance on Color Perception and Reasoning (P & R Overall) tasks. For human evaluation, the Color Extraction task is excluded, as humans are not attuned to precise color code differences. The best performance in each VLM group is highlighted in bold. For human evaluation, any instance surpassing all VLMs is marked in bold.
| Model | C'Recog | C'Extract | O'Recog | C'Prop | C'Comp | C'Count | O'Count | C'Illu | C'Mimic | C'Blind | P&R Overall | C'Robust |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **VLMs: < 7B** | | | | | | | | | | | | |
| LLaVA-OV-0.5B | 26.3 | 44.8 | 46.8 | 30.0 | 23.8 | 22.6 | 21.4 | 38.7 | 58.6 | 26.8 | 32.6 | 38.7 |
| InternVL2-1B | 35.5 | 34.4 | 59.7 | 23.8 | 41.6 | 19.6 | 22.3 | 34.4 | 38.6 | 33.1 | 33.6 | 39.4 |
| InternVL2-2B | 60.5 | 36.5 | 66.2 | 40.0 | 38.6 | 19.6 | 29.1 | 26.9 | 52.9 | 21.0 | 36.4 | 54.2 |
| InternVL2.5-1B | 55.3 | 36.5 | 61.0 | 42.5 | 45.5 | 22.6 | 25.2 | 43.0 | 41.4 | 28.0 | 38.3 | 52.3 |
| InternVL2.5-2B | 69.7 | 28.1 | 71.4 | 33.8 | 48.5 | 25.5 | 30.1 | 32.3 | 55.7 | 19.8 | 38.5 | 59.8 |
| Qwen2.5-VL-3B | 72.4 | 38.5 | 74.0 | 43.8 | 48.5 | 22.6 | 25.2 | 43.0 | 45.7 | 24.2 | 41.1 | 63.7 |
| Cambrian-3B | 67.1 | 31.3 | 66.2 | 47.5 | 50.5 | 25.5 | 29.1 | 44.1 | 61.4 | 22.3 | 41.5 | 59.0 |
| **VLMs: 7B-8B** | | | | | | | | | | | | |
| LLaVA-Next-v-7B | 29.0 | 38.5 | 57.1 | 21.3 | 34.7 | 23.5 | 25.2 | 38.7 | 41.4 | 17.8 | 31.2 | 52.1 |
| LLaVA-Next-m-7B | 21.1 | 18.8 | 63.6 | 27.5 | 42.6 | 16.7 | 34.0 | 41.9 | 47.1 | 29.9 | 33.4 | 55.2 |
| Eagle-X5-7B | 52.6 | 47.9 | 67.5 | 41.3 | 42.6 | 20.6 | 35.0 | 44.1 | 48.6 | 22.9 | 40.0 | 48.5 |
| Cambrian-8B | 72.4 | 28.1 | 72.7 | 48.8 | 54.5 | 31.4 | 33.0 | 41.9 | 57.1 | 17.2 | 42.3 | 64.9 |
| InternVL2-8B | 72.4 | 50.0 | 77.9 | 42.5 | 48.5 | 20.6 | 35.9 | 38.7 | 50.0 | 23.6 | 43.1 | 65.5 |
| Eagle-X4-8B | 71.1 | 47.9 | 68.8 | 45.0 | 50.5 | 26.5 | 37.9 | 40.9 | 48.6 | 27.4 | 44.1 | 63.7 |
| LLaVA-OV-7B | 71.1 | 53.1 | 81.8 | 52.5 | 53.5 | 19.6 | 26.2 | 48.4 | 48.6 | 23.6 | 44.7 | 74.0 |
| InternVL2.5-8B | 77.6 | 47.9 | 83.1 | 50.0 | 62.4 | 25.5 | 33.0 | 34.4 | 52.9 | 19.8 | 45.2 | 69.8 |
| Qwen2.5-VL-7B | 76.3 | 49.0 | 84.4 | 47.5 | 52.5 | 19.6 | 34.0 | 44.1 | 55.7 | 28.7 | 46.2 | 74.4 |
| **VLMs: 10B-30B** | | | | | | | | | | | | |
| LLaVA-Next-13B | 56.6 | 31.3 | 71.4 | 27.5 | 41.6 | 27.5 | 28.2 | 29.0 | 45.7 | 25.5 | 36.4 | 53.3 |
| Cambrian-13B | 67.1 | 34.4 | 74.0 | 46.3 | 47.5 | 32.4 | 35.0 | 38.7 | 55.7 | 24.8 | 42.8 | 64.7 |
| Eagle-X4-13B | 73.7 | 43.8 | 76.6 | 43.8 | 47.5 | 23.5 | 38.8 | 34.4 | 57.1 | 26.1 | 43.7 | 66.3 |
| InternVL2-26B | 72.4 | 52.1 | 87.0 | 52.5 | 56.4 | 20.6 | 35.0 | 34.4 | 55.7 | 27.4 | 46.3 | 74.0 |
| InternVL2.5-26B | 72.4 | 45.8 | 89.6 | 45.0 | 63.4 | 22.6 | 35.0 | 32.3 | 62.9 | 29.3 | 46.8 | 83.0 |
| **VLMs: 30B-70B** | | | | | | | | | | | | |
| Eagle-X5-34B | 79.0 | 27.1 | 80.5 | 48.8 | 48.5 | 23.5 | 35.9 | 37.6 | 60.0 | 25.5 | 43.4 | 67.1 |
| Cambrian-34B | 75.0 | 57.3 | 77.9 | 50.0 | 46.5 | 22.6 | 32.0 | 37.6 | 64.3 | 24.2 | 45.3 | 67.7 |
| InternVL2-40B | 72.4 | 52.1 | 83.1 | 51.3 | 61.4 | 19.6 | 35.9 | 34.4 | 58.6 | 21.0 | 45.6 | 78.7 |
| LLaVA-Next-34B | 69.7 | 46.9 | 76.6 | 43.8 | 56.4 | 28.4 | 41.8 | 36.6 | 61.4 | 29.9 | 46.6 | 65.9 |
| InternVL2.5-38B | 71.1 | 60.4 | 89.6 | 53.8 | 63.4 | 29.4 | 40.8 | 34.4 | 61.4 | 26.8 | 50.0 | 84.6 |
| **VLMs: > 70B** | | | | | | | | | | | | |
| InternVL2-76B | 72.4 | 42.7 | 85.7 | 45.0 | 62.4 | 27.5 | 35.0 | 31.2 | 50.0 | 23.6 | 44.6 | 68.6 |
| LLaVA-Next-72B | 72.4 | 54.2 | 79.2 | 41.3 | 49.5 | 24.5 | 35.9 | 33.3 | 48.6 | 34.4 | 45.2 | 66.5 |
| InternVL2.5-78B | 75.0 | 58.3 | 81.8 | 43.8 | 68.3 | 27.5 | 36.9 | 34.4 | 61.4 | 28.7 | 48.8 | 86.2 |
| LLaVA-OV-72B | 73.7 | 63.5 | 83.1 | 52.5 | 69.3 | 27.5 | 50.5 | 36.6 | 55.7 | 31.9 | 51.9 | 80.3 |
| **VLMs: Proprietary** | | | | | | | | | | | | |
| GPT-4o | 76.3 | 40.6 | 80.5 | 38.3 | 66.3 | 30.4 | 29.1 | 50.5 | 70.0 | 58.6 | 52.9 | 46.2 |
| Gemini-2-flash | 80.3 | 52.1 | 87.0 | 46.9 | 70.3 | 33.3 | 34.9 | 44.1 | 72.9 | 49.6 | 55.4 | 70.7 |
| GPT-4o (CoT) | 77.6 | 55.2 | 83.1 | 44.4 | 71.3 | 26.5 | 33.0 | 44.1 | 77.1 | 66.8 | 57.4 | 69.9 |
| Gemini-2-flash (CoT) | 82.9 | 56.2 | 88.3 | 58.0 | 68.3 | 43.1 | 38.8 | 40.9 | 75.7 | 60.0 | 59.6 | 73.6 |
| **Human Evaluation** | | | | | | | | | | | | |
| Human Evaluation | 92.0 | - | 90.1 | 59.6 | 79.8 | 62.0 | 81.3 | 63.0 | 83.8 | 94.0 | - | - |
highest performance. In Color Extraction (C'Extract), to our surprise, the two powerful proprietary models without CoT prompting only reach middle-tier performance, indicating a potential limitation in the color perception of their vision encoders. Similar to the Color Existence task, almost all models perform well in Object Recognition (O'Recog), and the 2 proprietary models do not reach the top. This is probably due to the strong alignment between this task and the common training recipe, which includes abundant general object detection images.

Color Reasoning. In Color Proportion (C'Prop), even the best model, Gemini-2 with CoT, reaches only $58.0\%$ accuracy, which is only slightly better than random guessing, underscoring the difficulty of this task. In Color Comparison (C'Comp), larger models perform better, and the proprietary models with CoT unsurprisingly reach the top performance. Surprisingly, in Color Counting (C'Count), all models show extremely poor performance. The highest score comes from Gemini-2 with CoT, exceeding the second place by about 10 percentage points, although its accuracy is still unsatisfactory at only $43.1\%$ . In Object Counting (O'Count), surpassing the 2 proprietary models, LLaVA-OV-72B reaches the top and is the only model that exceeds $50\%$ accuracy. As with Object Recognition, this might be caused by the abundance of object detection data in open-sourced training recipes. In Color Illusion (C'Illu), the accuracies of most models lie in the range of $30\%$ to $50\%$ , and GPT-4o without CoT is the only one that exceeds $50\%$ accuracy. In Color Mimicry (C'Mimic), the 2 proprietary models reach the top, while additional reasoning steps do not help much. In Color Blindness (C'Blind), most of the open-sourced models present accuracies under $30\%$ .
Given the practical importance of this scenario, we believe the community should pay more attention to it. Moreover, we observe, surprisingly, that more reasoning steps benefit VLMs on the color blindness test, although it seems like a pure color perception task.

Table 2: Spearman's rank correlation between VLM performance and the sizes of different model parts on each task. L denotes the size of the language model part and V the size of the vision encoder part. We use "(*)" to mark correlations with p-values $\leq 0.05$ . The table shows that the scaling law still holds for color understanding, but it is much weaker.
| | C'Recog | C'Extract | O'Recog | C'Prop | C'Comp | C'Count | O'Count | C'Illu | C'Mimic | C'Blind | P&R Overall | C'Robust |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| L+V | 0.5657 (*) | 0.5255 (*) | 0.7107 (*) | 0.5125 (*) | 0.6358 (*) | 0.4316 (*) | 0.7566 (*) | -0.3460 | 0.4832 (*) | 0.2460 | 0.7619 (*) | 0.7386 (*) |
| L | 0.5724 (*) | 0.4937 (*) | 0.6769 (*) | 0.4696 (*) | 0.6118 (*) | 0.4408 (*) | 0.7611 (*) | -0.3697 (*) | 0.4559 (*) | 0.2824 | 0.7436 (*) | 0.7123 (*) |
| V | 0.3955 (*) | 0.2856 | 0.5465 (*) | 0.6242 (*) | 0.5295 (*) | 0.2089 | 0.3608 | -0.0127 | 0.6024 (*) | -0.0679 | 0.5271 (*) | 0.5623 (*) |
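Spearman's rank correlation used in Table 2 can be computed without external dependencies (a sketch assuming non-constant inputs; `scipy.stats.spearmanr` returns the same coefficient along with the p-value used for the significance stars):

```python
def average_ranks(xs):
    """1-based ranks of xs, with tied values sharing their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a run of tied values
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because the statistic depends only on ranks, a model family whose accuracy grows monotonically with parameter count scores rho = 1 even if the growth is far from linear, which is why rank correlation suits scaling-law analysis.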
Color Robustness. In Color Robustness (C'Robust), a higher value represents better robustness towards color alteration. The only 4 models that exceed $80\%$ are LLaVA-OV-72B, InternVL2.5-26B, InternVL2.5-38B, and InternVL2.5-78B, all of which use a relatively large vision encoder, InternViT-6B, compared with the 300-400M encoders used by most others. Meanwhile, GPT-4o shows notably low robustness $(46.2\%)$ , indicating high sensitivity to color changes, while Gemini-2 shows promising robustness $(70.7\%)$ . Moreover, another surprising observation is that, even though only the colors are changed and all the original queries are kept, using more reasoning steps consistently improves robustness for GPT-4o $(+23.7\%)$ and Gemini-2 $(+2.9\%)$ .

# 3.2 Further Findings

Since color-related tasks often involve abstract reasoning, language comprehension, and contextual interpretation, it is essential to assess not just the vision encoder but also the language model, which plays a critical role in processing and understanding such tasks. To quantitatively analyze the correlation between VLM performance on color understanding tasks and model size, Spearman's rank correlation is calculated between VLM performance and (i) overall model sizes $(\mathbf{L} + \mathbf{V})$ , (ii) language model sizes $(\mathbf{L})$ , and (iii) vision encoder sizes $(\mathbf{V})$ . The correlation values and significance markers are presented in Table 2; a star denotes a p-value below 0.05. It is observed that between the performances and language model

![](images/9807b184126a48713b499dc098fc184ac4cce4081905a0b8ba74c79974403805.jpg)
Finding 1. The scaling law still holds for color understanding, but is much weaker and mainly depends on the language model part. The correlation between performance and the vision encoder's size is not significant due to the limited choices in current VLMs.
Figure 4: Heatmaps relating performance to VLM size. Deeper color represents higher P&R Overall Accuracy or Robustness. Each line represents a model family with sizes growing from small to large. This visualization shows the correlation between performance and model size: a larger model leads to higher performance.

sizes, most of the tasks have a correlation greater than 0.5 and a p-value smaller than 0.05, except for Color Illusion and Color Blindness due to their special characteristics. Since the correlations between overall model size $(\mathbf{L} + \mathbf{V})$ and both P&R Overall (0.7619) and Robustness (0.7386) are strong and significant, we conclude that color understanding, including Color Perception, Color Reasoning, and Color Robustness, still follows the scaling law of model sizes. Figure 4 presents the correlations between performance and model size in each model family: a larger model leads to higher performance within each family.

However, between the performances and vision encoder sizes, most of the tasks either have a correlation lower than 0.5 or a p-value greater than 0.05, which is not sufficient to conclude an evident positive correlation. Despite these findings, we caution against concluding that there is no positive correlation between performance and vision encoder size. We attribute this to the community's limited attention to scaling laws for vision encoders. The vision encoders used in current mainstream VLMs are drawn from a very small set: (i) most VLMs use a single type of vision encoder for the whole family, except for the InternVL2 and InternVL2.5 series; and (ii) most VLMs use vision encoders of only $300 - 400\mathrm{M}$ parameters. These constraints make it hard to evaluate the scaling laws of vision encoders.
Further visualizations are presented in Appendix L.2.

Table 4: Adding reasoning steps can improve VLMs' performance on COLORBENCH. The change in accuracy brought by Chain-of-Thought (CoT) prompting on all tasks for GPT-4o and Gemini-2-flash. The last row presents the average improvement across both models.
| Model | C'Recog | C'Extract | O'Recog | C'Prop | C'Comp | C'Count | O'Count | C'Illu | C'Mimic | C'Blind | P&R Overall | C'Robust |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-4o Δ | +1.3 | +14.6 | +2.6 | +6.1 | +5.0 | -3.9 | +3.9 | -6.4 | +7.1 | +8.2 | +4.5 | +23.7 |
| Gemini-2 Δ | +2.6 | +4.1 | +1.3 | +11.1 | -2.0 | +9.8 | +3.9 | -3.2 | +2.8 | +10.4 | +4.2 | +2.9 |
| Average Δ | +1.95 | +9.35 | +1.95 | +8.60 | +1.50 | +2.95 | +3.9 | -4.80 | +4.95 | +9.30 | +4.35 | +13.30 |
As shown in Table 3, we separate all the VLMs into several groups based on their sizes and present the best accuracy and the corresponding model name within each group. Even the powerful proprietary models, GPT-4o and Gemini-2, only reach an overall color perception and reasoning (P & R Overall) accuracy of $53.9\%$ , just $+2.0\%$ better than the best open-sourced model. Task-level results in Table 1 further reveal that these advanced proprietary models still exhibit substantial performance gaps compared to humans across most tasks. Moreover, the best model from the smallest group, Cambrian-3B, has an accuracy of $41.5\%$ , which is only $10.4\%$ lower than the best open-sourced

Table 3: The best model within each group and its performance (P&R accuracy and Robustness). The absolute performances of different VLMs on COLORBENCH are relatively low, and the performance gaps between models are not large.

Finding 2. The absolute performances of different VLMs are relatively low and lag behind those of humans. Moreover, the gaps between different models (open-source vs. proprietary, small vs. large) are not large, indicating both the challenge posed by COLORBENCH and the neglect of color understanding in existing VLMs.
| Model Size | Best Model (P & R Overall) | P & R Overall | Best Model (Robustness) | C'Robust |
| --- | --- | --- | --- | --- |
| < 7B | Cambrian-3B | 41.5 | Qwen2.5-VL-3B | 63.7 |
| 7B-8B | Qwen2.5-VL-7B | 46.2 | Qwen2.5-VL-7B | 74.4 |
| 10B-30B | InternVL2.5-26B | 46.8 | InternVL2.5-26B | 83.0 |
| 30B-50B | InternVL2.5-38B | 50.0 | InternVL2.5-38B | 84.6 |
| > 70B | LLaVA-OV-72B | 51.9 | InternVL2.5-78B | 86.2 |
| Proprietary | Gemini-2 | 55.4 | Gemini-2 | 70.7 |
| Proprietary | Gemini-2 (CoT) | 59.6 | Gemini-2 (CoT) | 73.6 |
model. As for robustness, the powerful proprietary models even show weaker robustness than the best 7B model. Considering the lack of existing benchmarks specifically evaluating VLMs' color understanding capabilities, we conclude that this area has long been neglected by the community, and that the open-source community remains roughly on par with proprietary model providers.

Finding 3. Despite the weaknesses of VLMs in color understanding, adding reasoning steps can still improve their performance on COLORBENCH tasks, even for color robustness, which has not been investigated by the community.

The impact of CoT prompting is shown in Table 4, where CoT improves the average P&R Overall accuracy across both models by $+4.35\%$ , indicating that reasoning benefits these color-related tasks. Within the category of Color Perception, the improvements from CoT on Color Recognition and Object Recognition are quite limited, as these tasks rely heavily on the vision encoder. Figures 59 and 60 in Appendix M illustrate that adding reasoning steps has little effect when the initial visual perception and color identification are incorrect in the slow-thinking process. However, to our surprise, we find that the Color Extraction task benefits substantially from more reasoning steps, although it seems related only to the vision encoder. After a thorough investigation, we observe that most current VLMs are not capable of directly extracting color values, so they need more reasoning steps to reach reasonable answers.

Within the category of Color Reasoning, CoT benefits most of the tasks. However, on the Color Illusion task, CoT harms model performance. After a manual investigation, we observe that more reasoning steps might cause VLMs to focus on the misleading surroundings rather than directly comparing the assigned colors, as shown in Figure 61. Another observation concerns the Color Blindness task.
Unlike other reasoning-related tasks, humans can read a color blindness test image at a glance, without any slow thinking. This intriguing misalignment between humans and VLMs prompted further investigation. We find that VLMs recognize these digits in a bottom-up manner: they must first infer that the dots in the image can form a digit before they actually recognize the dots as digits.

In addition, the consistent improvement from CoT on Color Robustness is a previously unreported phenomenon. In our setting, only the colors of the image are altered, and the questions are kept strictly the same as the original. Thus, color is the only variant, which should relate mainly to the capability of the vision encoder. Counterintuitively, however, as shown in our experiments, more reasoning steps make VLMs more robust to color changes, probably because reasoning increases the model's confidence in correct answers.

To examine whether VLMs really leverage color clues to handle tasks in COLORBENCH, we conduct experiments that convert all the original colorful images in the Color Perception and Reasoning categories into gray-scale ones, without changing the questions. Under this circumstance, the accuracies are expected to decrease dramatically, as all our questions are related to colors. For quantitative analysis, we calculate the accuracy changing ratio as $(Acc_{ori} - Acc_{gray}) / Acc_{ori}$ for each VLM on each task. This value directly represents how the original accuracy changes under a gray-scale transformation. A positive value means the VLM has higher accuracy on the original colored images, indicating that it needs color clues to solve the task; higher positive values indicate greater importance of the color clues.
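The accuracy changing ratio just defined amounts to a one-line helper (a sketch; the task labels and accuracy values in the usage example are purely illustrative, not results from the paper):

```python
def accuracy_change_ratio(acc_original, acc_grayscale):
    """(Acc_ori - Acc_gray) / Acc_ori: the fraction of the original
    accuracy lost (positive) or gained (negative) on grayscale images."""
    if acc_original == 0:
        raise ValueError("ratio is undefined when the original accuracy is 0")
    return (acc_original - acc_grayscale) / acc_original

# Purely illustrative numbers: one task that relies on color,
# one where color misleads the model.
example = {
    "color-reliant task": accuracy_change_ratio(0.80, 0.40),  # +0.50
    "color-misled task": accuracy_change_ratio(0.40, 0.50),   # -0.25
}
```

Normalizing by the original accuracy makes the ratio comparable across models whose baseline accuracies differ widely.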
On the contrary, a negative value means the VLM reaches better accuracy after the gray-scale transformation, indicating that it does not need color clues for the task; colors might even mislead the VLM's judgment, and more negative values indicate more severe harm from color.

![](images/7e9abdefdba11426ba75da60ea1aa91fa1fb21de3146efef9bebcea1409ccc4f.jpg)
Finding 4. Color clues are indeed leveraged to some extent by VLMs in most of the tasks in COLORBENCH. However, in the color illusion and mimicry tasks, colors might mislead VLMs to wrong answers, and converting colorful images to grayscale can improve accuracy.
Figure 5: The percentage change in accuracy (y-axis) from converting colorful images to grayscale for each COLORBENCH task (x-axis). Each violin plot visualizes the distribution over all VLMs. A higher (lower) percentage indicates that VLMs rely more (less) on color clues for the task. A positive (negative) percentage indicates degradation (improvement) on grayscale images. Color clues are indeed leveraged by VLMs in most tasks, but they might mislead VLMs (illusion & mimicry).

The distributions of accuracy changing ratios across all VLMs and tasks are presented as violin plots in Figure 5. For most tasks, the ratios are above 0, indicating that VLMs indeed leverage color clues to solve the tasks correctly; removing color dramatically harms the original accuracies. However, for Color Illusion and Color Mimicry, the majority of the changing ratios are below 0, meaning that VLMs achieve better accuracies when all color information is removed. This is reasonable, as colors in these two tasks are more likely to serve as misleading factors.
Meanwhile, for the Color Counting and Color Blindness tasks, roughly half of the accuracies increase and half decrease, indicating that color clues may be less critical for these tasks, so some models can find other routes to the answer. We also investigate the correlation between accuracy changing ratios and model sizes, but find no significant correlation.

# 4 Conclusion, Limitation, and Future Works

In this paper, we introduce COLORBENCH, the first benchmark designed to comprehensively evaluate the color understanding capabilities of VLMs, covering Perception, Reasoning, and Robustness. Evaluating 32 widely used VLMs on our benchmark reveals several previously unreported observations. These observations emphasize the need for more sophisticated model architectures that integrate deeper color reasoning capabilities. To ensure high-quality and reliable annotations, COLORBENCH relies on manual data collection, annotation, and assessment across most domains. While this guarantees consistency, it inevitably limits dataset scale, style diversity, and category coverage. As future work, we aim to develop a trustworthy automated data collection pipeline and expand COLORBENCH to larger-scale, more diverse tasks involving complex interplays of color with texture, shape, and spatial relationships. Furthermore, investigating the impact of different visual encoders and language models could further elucidate the pathways through which VLMs process color information.

# References

[1] Basit Alawode, Iyyakutti Iyappan Ganapathi, Sajid Javed, Naoufel Werghi, Mohammed Bennamoun, and Arif Mahmood. Aquaticclip: A vision-language foundation model for underwater scene analysis. arXiv preprint arXiv:2502.01785, 2025.
+[2] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-vl technical report, 2025. +[3] Jirayu Burapacheep, Ishan Gaur, Agam Bhatia, and Tristan Thrush. Colorswap: A color and word order dataset for multimodal evaluation. arXiv preprint arXiv:2402.04492, 2024. +[4] Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Jiaqi Wang, Yu Qiao, Dahua Lin, et al. Are we on the right way for evaluating large vision-language models? arXiv preprint arXiv:2403.20330, 2024. +[5] Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, et al. Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24185–24198, 2024. +[6] Kanjar De and Marius Pedersen. Impact of colour on robustness of deep neural networks. In Proceedings of the IEEE/CVF international conference on computer vision, pages 21-30, 2021. +[7] Google DeepMind. Gemini 2.0 flash, 2025. +[8] Alexey Dosovitskiy and Thomas Brox. Inverting visual representations with convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4829-4837, 2016. +[9] Hao Fei, Yuan Yao, Zhuosheng Zhang, Fuxiao Liu, Ao Zhang, and Tat-Seng Chua. From multimodal llm to human-level ai: Modality, instruction, reasoning, efficiency and beyond. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024): Tutorial Summaries, pages 1-8, 2024. 
+[10] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, and Rongrong Ji. Mme: A comprehensive evaluation benchmark for multimodal large language models, 2024. +[11] Karl R. Gegenfurtner and Jochem Rieger. Sensory and cognitive contributions of color to the recognition of natural scenes. Current Biology, 10(13):805-808, 2000. +[12] Akash Ghosh, Arkadeep Acharya, Sriparna Saha, Vinija Jain, and Aman Chadha. Exploring the frontier of vision-language models: A survey of current methodologies and future directions. arXiv preprint arXiv:2404.07214, 2024. +[13] Tianrui Guan, Fuxiao Liu, Xiyang Wu, Ruiqi Xian, Zongxia Li, Xiaoyu Liu, Xijun Wang, Lichang Chen, Furong Huang, Yaser Yacoob, et al. Hallusionbench: an advanced diagnostic suite for entangled language hallucination and visual illusion in large vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14375-14385, 2024. +[14] Tanmay Gupta, Ryan Marten, Aniruddha Kembhavi, and Derek Hoiem. Grit: General robust image task benchmark. arXiv preprint arXiv:2204.13653, 2022. +[15] Shuai He, Anlong Ming, Li Yaqi, Sun Jinyuan, Zheng ShunTian, and Ma Huadong. Thinking image color aesthetics assessment: Models, datasets and benchmarks. ICCV, 2023. +[16] Nam Hyeon-Woo, Moon Ye-Bin, Wonseok Choi, Lee Hyun, and Tae-Hyun Oh. Vlm's eye examination: Instruct and inspect visual competency of vision language models. arXiv preprint arXiv:2409.14759, 2024. +[17] Md Farhan Ishmam, Ishmam Tashdeed, Talukder Asir Saadat, Md Hamjajul Ashmafee, Abu Raihan Mostofa Kamal, and Md Azam Hossain. Visual robustness benchmark for visual question answering (vqa). arXiv preprint arXiv:2407.03386, 2024. +[18] Ali Jahanian, Shaiyan Keshvari, SVN Vishwanathan, and Jan P Allebach. Colors-messengers of concepts: Visual design mining for learning color semantics. 
ACM Transactions on Computer-Human Interaction (TOCHI), 24(1):1-39, 2017. + +[19] Johannes Jakubik, Benedikt Blumenstiel, and Clive Tinashe Marimo. Ms-clip: Multi-spectral vision language learning for earth observation. In American Geophysical Union Fall Meeting, 2024. +[20] Jayendra Kantipudi, Shiv Ram Dubey, and Soumendu Chakraborty. Color channel perturbation attacks for fooling convolutional neural networks and a defense against such attacks. IEEE Transactions on Artificial Intelligence, 1(2):181-191, 2020. +[21] Tony Lee, Haoqin Tu, Chi Heem Wong, Wenhao Zheng, Yiyang Zhou, Yifan Mai, Josselin Somerville Roberts, Michihiro Yasunaga, Huaxiu Yao, Cihang Xie, et al. Vhelm: A holistic evaluation of vision language models. arXiv preprint arXiv:2410.07112, 2024. +[22] Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. Seed-bench: Benchmarking multimodal llms with generative comprehension. arXiv preprint arXiv:2307.16125, 2023. +[23] Baiqi Li, Zhiqiu Lin, Wenxuan Peng, Jean de Dieu Nyandwi, Daniel Jiang, Zixian Ma, Simran Khanuja, Ranjay Krishna, Graham Neubig, and Deva Ramanan. Naturalbench: Evaluating vision-language models on natural adversarial samples. arXiv preprint arXiv:2410.14669, 2024. +[24] Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, and Chunyuan Li. Llava-onevision: Easy visual task transfer, 2024. +[25] Jian Li, Weiheng Lu, Hao Fei, Meng Luo, Ming Dai, Min Xia, Yizhang Jin, Zhenye Gan, Ding Qi, Chaoyou Fu, Ying Tai, Wankou Yang, Yabiao Wang, and Chengjie Wang. A survey on benchmarks of multimodal large language models, 2024. +[26] Ming Li, Chenguang Wang, Yijun Liang, Xiyao Wang, Yuhang Zhou, Xiyang Wu, Yuqing Zhang, Ruiyi Zhang, and Tianyi Zhou. Caughtcheating: Is your mllm a good cheating detective? exploring the boundary of visual perception and reasoning. arXiv preprint arXiv:2507.00045, 2025. 
[27] Ming Li, Ruiyi Zhang, Jian Chen, Jiuxiang Gu, Yufan Zhou, Franck Dernoncourt, Wanrong Zhu, Tianyi Zhou, and Tong Sun. Towards visual text grounding of multimodal large language model, 2025.
[28] Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355, 2023.
[29] Zongxia Li, Xiyang Wu, Hongyang Du, Huy Nghiem, and Guangyao Shi. Benchmark evaluations, applications, and challenges of large vision language models: A survey. arXiv preprint arXiv:2501.02189, 2025.
[30] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740-755. Springer, 2014.
[31] Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. Llava-next: Improved reasoning, OCR, and world knowledge, 2024.
[32] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? In European conference on computer vision, pages 216-233. Springer, 2024.
[33] Lingjun Mao, Zineng Tang, and Alane Suhr. Evaluating model perception of color illusions in photorealistic scenes. arXiv preprint arXiv:2412.06184, 2024.
[34] Daniela Mapelli and Marlene Behrmann. The role of color in object recognition: Evidence from visual agnosia. Neurocase, 3(4):237-247, 1997.
[35] OpenAI, Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, and etc. Gpt-4o system card, 2024.
[36] Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik.
Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In Proceedings of the IEEE international conference on computer vision, pages 2641–2649, 2015. +[37] Ragini Rathore, Zachary Leggon, Laurent Lessard, and Karen B Schloss. Estimating color-concept associations from image statistics. IEEE Transactions on Visualization and Computer Graphics, 26(1): 1226-1235, 2019. + +[38] Tianhe Ren, Shilong Liu, Ailing Zeng, Jing Lin, Kunchang Li, He Cao, Jiayu Chen, Xinyu Huang, Yukang Chen, Feng Yan, et al. Grounded sam: Assembling open-world models for diverse visual tasks. arXiv preprint arXiv:2401.14159, 2024. +[39] Ahnaf Mozib Samin, M Firoz Ahmed, and Md Mushtaq Shahriyar Rafee. Colorfoil: Investigating color blindness in large vision and language models. arXiv preprint arXiv:2405.11685, 2024. +[40] Haz Sameen Shahgir, Khondker Salman Sayeed, Abhik Bhattacharjee, Wasi Uddin Ahmad, Yue Dong, and Rifat Shahriyar. Illusionvqa: A challenging optical illusion dataset for vision language models. arXiv preprint arXiv:2403.15952, 2024. +[41] Min Shi, Fuxiao Liu, Shihao Wang, Shijia Liao, Subhashree Radhakrishnan, De-An Huang, Hongxu Yin, Karan Sapra, Yaser Yacoob, Humphrey Shi, et al. Eagle: Exploring the design space for multimodal llms with mixture of encoders. arXiv preprint arXiv:2408.15998, 2024. +[42] Shengbang Tong, Ellis Brown, Penghao Wu, Sanghyun Woo, Manoj Middepogu, Sai Charitha Akula, Jihan Yang, Shusheng Yang, Adithya Iyer, Xichen Pan, et al. Cambrian-1: A fully open, vision-centric exploration of multimodal llms. arXiv preprint arXiv:2406.16860, 2024. +[43] Fei Wang, Xingyu Fu, James Y Huang, Zekun Li, Qin Liu, Xiaogeng Liu, Mingyu Derek Ma, Nan Xu, Wenxuan Zhou, Kai Zhang, et al. Muirbench: A comprehensive benchmark for robust multi-image understanding. arXiv preprint arXiv:2406.09411, 2024. +[44] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 
Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837, 2022.
[45] Hanna-Sophia Widhoelzl and Ece Takmaz. Decoding emotions in abstract art: Cognitive plausibility of clip in recognizing color-emotion associations. arXiv preprint arXiv:2405.06319, 2024.
[46] Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. Mm-vet: Evaluating large multimodal models for integrated capabilities. arXiv preprint arXiv:2308.02490, 2023.
[47] Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9556-9567, 2024.
[48] Jingyi Zhang, Jiaxing Huang, Sheng Jin, and Shijian Lu. Vision-language models for vision tasks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024.
[49] Jiarui Zhang, Mahyar Khayatkhoei, Prateek Chhikara, and Filip Ilievski. Mllms know where to look: Training-free perception of small visual details with multimodal llms. arXiv preprint arXiv:2502.17422, 2025.
[50] Le Zhang, Rabiul Awal, and Aishwarya Agrawal. Contrasting intra-modal and ranking cross-modal hard negatives to enhance visio-linguistic fine-grained understanding. arXiv preprint arXiv:2306.08832, 2023.
[51] Tiancheng Zhao, Tianqi Zhang, Mingwei Zhu, Haozhan Shen, Kyusong Lee, Xiaopeng Lu, and Jianwei Yin. VL-CheckList: Evaluating pre-trained vision-language models with objects, attributes and relations. arXiv preprint arXiv:2207.00221, 2022.
[52] Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ade20k dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 633-641, 2017.
[53] Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Semantic understanding of scenes through the ade20k dataset. International Journal of Computer Vision, 127:302-321, 2019.

# Table of Contents for Appendix

# A Related Works

A.1 VLM Benchmarks
A.2 Color Evaluation

# B Data Sources

# C Detailed Generation Process for Robustness

# D COLORBENCH Categories and Questions

# E Implementation Details

# F Evaluation Prompts

# G Human Evaluation

# H Reasoning Models with Thinking Process

# I Qualitative Analysis of Failure Cases

# J Effect of Different Modalities

# K Fine-tuning Experiments on ColorBench

# L More Visualizations

L.1 VLM Size & Model Performance for Each Task
L.2 Vision Size & Model Performance for Each Task
L.3 Performance for Each Model Family on Each Task

# M Sample Cases

M.1 Effect of CoT
M.2 Effect of Grayscale
M.3 Failure with LLM and Vision
M.4 Easy Cases
M.5 Difficult Cases

# A Related Works

# A.1 VLM Benchmarks

With the rapid advancements in Vision-Language Models (VLMs) [9], numerous benchmarks have emerged to systematically evaluate VLM capabilities across diverse dimensions [29]. These benchmarks generally fall into two categories: text-centric and vision-centric evaluations, each designed to assess distinct multimodal competencies. Text-centric benchmarks primarily measure commonsense knowledge, reasoning, and complex problem-solving capabilities, exemplified by tasks in MMMU [47] and NaturalBench [23]. Conversely, vision-centric benchmarks focus on visual perception and reasoning (MMBench [32] and MME [10]) and robustness to visual perturbations (GRIT [14] and Visual Robustness [17]).
Furthermore, several benchmarks have extended their scope to evaluate specialized visual tasks, such as spatial relationship comprehension (SEED-Bench [22] and MM-Vet [46]), chart and map understanding (MMStar [4] and MuirBench [43]), visual grounding (Flickr30k [36] and TRIG [27]), and the detection and understanding of visual hallucinations (POPE [28] and HallusionBench [13]). However, despite the extensive scope covered by existing VLM benchmarks, none currently provide an integrated evaluation that simultaneously assesses visual perception, reasoning, and robustness within a unified framework. Moreover, although certain benchmarks [32, 10] have incorporated color-related questions, these have typically addressed basic color perception and recognition, neglecting deeper assessments of reasoning and robustness associated with color understanding.

# A.2 Color Evaluation

Color understanding is increasingly recognized as a crucial aspect of Vision-Language Models' ability to perceive and interpret visual content. Only a limited number of studies have explored how color information influences model performance on specific tasks. Some studies [51, 50] probe the understanding of color by replacing color-related words in textual inputs to evaluate the models' ability to handle color-specific information. More recent research [16, 21] focuses on assessing fine-grained color discrimination by asking models to distinguish subtle color differences in visual inputs. Samin et al. [39] introduced color-related foils to test VLMs' capacity to recognize basic colors like red, white, and green, particularly in contexts requiring attention to subtle cues. Additionally, Burapacheep et al. [3] developed a benchmark dataset to evaluate and enhance compositional color comprehension in VLMs, emphasizing tasks where understanding minimal color relationships is essential. IllusionVQA [40] provides a challenging optical illusion dataset for VLMs, and Mao et al. [33] evaluate model perception of color illusions in photorealistic scenes.
While these works have addressed isolated aspects of color understanding, none have provided a holistic assessment framework. In contrast to these previous works, our study establishes the first comprehensive and specialized benchmark for evaluating the color-related abilities of VLMs, offering a quantitative, automated approach to further this area of research.

# B Data Sources

We construct COLORBENCH from multiple sources, including websites, publicly available benchmarks, and generated images. The detailed sources are listed in Table 5.

Table 5: Data sources for each task.
| Category | Data Source |
| --- | --- |
| C'Recognition | Website, ICAA17K [15] |
| O'Recognition | Website, ICAA17K [15] |
| C'Extraction | Synthetic Data |
| C'Proportion | Website, Synthetic Data |
| C'Comparison | Website |
| C'Counting | Website, Synthetic Data |
| O'Counting | Website, ADE20K [52, 53], COCO2017 [30] |
| C'Mimicry | Website, IllusionVQA [40], RCID [33] |
| C'Blindness | Synthetic Data |
| C'Robust | CV-Bench [42] |
+ +Table 6: Recoloring strategies. + +
| Strategy | Editing Region | Purpose |
| --- | --- | --- |
| Entire Image | Whole image | Assesses the model's robustness to global color shifts |
| Target Segment | Segment containing the object referenced in the question | Evaluates the model's sensitivity to task-relevant color changes |
| Largest Segment | The largest segment that is irrelevant to the question | Tests whether changes in dominant but unrelated regions affect model predictions |
# C Detailed Generation Process for Robustness

For the Color Robustness task, we evaluate the consistency of VLMs when faced with instances that differ only in the color of the visual input. To systematically assess this effect, we define three recoloring strategies that determine which part of the image is altered: i) Target Segment, ii) Largest Segment, and iii) Entire Image. As described in Table 6, the Target Segment strategy recolors only the segment containing the object referenced in the question. This strategy ensures that the modification directly affects the model's perception of task-relevant content. The Largest Segment strategy alters the color of the largest segment that is irrelevant to the question, testing whether models are distracted by dominant but unrelated visual changes. In contrast, the Entire Image strategy applies a global color shift to evaluate the model's sensitivity to overall color variations. As summarized in Table 6, the first two strategies introduce localized modifications, while the third assesses robustness to broader image-wide color changes. Importantly, only color attributes are altered without modifying object shapes or contextual elements, which preserves the overall realism of the image. By incorporating both task-relevant and irrelevant edits, our benchmark provides a comprehensive evaluation of VLMs' ability to handle color perturbations across different contexts.

While generating color variations, we derive seed images from CV-Bench [42], a publicly available benchmark. For each seed image, as shown in Figure 3, we first employ a grounded segmentation model (Grounded SAM) [38] to extract segments and their corresponding labels. We then apply the predefined recoloring strategies to determine the editing region. Once the editing region is determined, we modify the color of the corresponding region.
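Concretely, recoloring a pixel amounts to rotating its Hue in HSV space while leaving Saturation and Value untouched. A minimal standard-library sketch of such a per-pixel edit (our own illustration, not the paper's actual pipeline):

```python
import colorsys

def shift_hue(rgb, degrees):
    """Rotate only the Hue channel in HSV space; Saturation and Value are preserved."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + degrees / 360.0) % 1.0  # hue wraps around the color wheel
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

# Three fixed rotations yield three maximally distinct variants per editing region.
red = (255, 0, 0)
variants = [shift_hue(red, d) for d in (90, 180, 270)]
```

In a real pipeline this function would be applied only to pixels inside the segment mask chosen by the recoloring strategy (or to every pixel for the Entire Image strategy).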
In the HSV color space, Saturation and Value control the purity and brightness of a color, while Hue alone determines the perceived color, so we adjust only the Hue channel. Specifically, we shift the Hue by $90^{\circ}$, $180^{\circ}$, and $270^{\circ}$. These three values ensure that the color manipulations cover significant perceptual differences across the color spectrum. This process produces nine variations per seed image (three strategies times three hue shifts), covering different strategies and degrees of color change to enable a comprehensive robustness assessment. To ensure interpretability, human experts filter out unnatural or negligible modifications, resulting in a final selection of 493 seed images for robustness evaluation. Additionally, we select questions that are color-invariant, which means answers remain valid regardless of whether the recoloring appears fully natural. This design choice isolates color variation as the sole variable of interest and prevents confounding effects from semantic or contextual changes. Through these steps, we evaluate whether VLMs rely excessively on color information and whether they maintain consistency in their predictions despite substantial color shifts.

# D COLORBENCH Categories and Questions

Table 7 provides a detailed description of each task, alongside representative figures and sample questions that effectively demonstrate the specific capabilities being tested. Cases are provided for each task in Figures 6 to 16.

Table 7: Task and question definition in COLORBENCH.
| Task | # | Sample Case | Description | Sample Questions |
| --- | --- | --- | --- | --- |
| Perception: Color Recognition | 76 | Figure 6 | Ask for the color of a specific object or determine if a particular color is present in the image. | What is the color of the object in this image? What color does not exist in this image? |
| Perception: Color Extraction | 96 | Figure 7 | Extract the color code value (e.g., RGB, HSV, or HEX) from a single color in the image. | What is the HSV value of the given color in the image? What is the RGB value of the given color in the image? |
| Perception: Object Recognition | 77 | Figure 8 | Identify objects in the image that match a specified color noted in the text input. | What object has a color of pink in this image? |
| Reasoning: Color Proportion | 80 | Figure 9 | Estimate the relative area occupied by a specified color in the image. | What is the dominant color in this image? What is closest to the proportion of the red color in the image? |
| Reasoning: Color Comparison | 101 | Figure 10 | Distinguish among multiple colors present in the image to assess overall tones and shades. | Which photo is warmer in overall color? Which object has a darker color in the image? |
| Reasoning: Color Counting | 102 | Figure 11 | Identify the number of unique colors present in the image. | How many different colors are in this image? |
| Reasoning: Object Counting | 103 | Figure 12 | Count the number of objects of a specified color present in the image. | How many objects with green color are in this image? |
| Reasoning: Color Illusion | 93 | Figure 13 | Assess and compare colors in potential illusionary settings within the image. | Do two objects have the same color? |
| Reasoning: Color Mimicry | 70 | Figure 14 | Detect objects that are camouflaged within their surroundings, where color is a key deceptive element. | How many animals are in this image? |
| Reasoning: Color Blindness | 157 | Figure 15 | Recognize numbers or text that are embedded in color patterns, often used in tests for color vision. | What is the number in the center of the image? |
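As background for the Color Extraction task, the RGB, HSV, and HEX codes it asks about are related by standard conversions. A standard-library sketch (our own illustration, not the benchmark's tooling), with Hue in degrees and Saturation/Value in percent as in the sample questions:

```python
import colorsys

def rgb_to_codes(r, g, b):
    """Express one 8-bit RGB color as a HEX string and an (H, S, V) triple."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return {
        "hex": f"#{r:02X}{g:02X}{b:02X}",
        "hsv": (round(h * 360), round(s * 100), round(v * 100)),
    }

# A saturated pink, similar in spirit to the benchmark's extraction examples.
codes = rgb_to_codes(255, 0, 127)
```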
# Color Recognition

![](images/847c4f60e625d3da8a95598b72a86020f1499a6eb7fb0561c7faefa861ffbce6.jpg)
Figure 6: Cases for Color Recognition Task.

Q: What is the color of the banana in this image?
A: Red  B: Green  C: Yellow  D: Black  E: None of the above
Ans: E

![](images/93a27658e8c2731b22d0f66a24ef38811798b21d2aed42890da244cb3bbc.jpg)

Q: What color does not exist in this image?
A: Green  B: White  C: Red  D: Black
Ans: C

# Color Extraction

![](images/a02c7368ef7054fc8fa6a2c0d8c8c929988f22d64fb1347be844baea5b8b688d.jpg)

Q: What is the HSV value of the given color in the image?
A: [100, 51, 81]  B: [329, 98, 100]  C: [331, 100, 100]  D: [329, 100, 100]
Ans: D

![](images/1b37b28329678a654e39a0697054f7a40e8872fd6c0581a7e3548f4779bda5a8.jpg)
Figure 7: Cases for Color Extraction Task.

Q: What is the HSV value of the given color in the image?
A: [47, 62, 100]  B: [107, 16, 22]  C: [45, 64, 100]  D: [45, 62, 100]
Ans: D

# Object Recognition

![](images/18c760c4ae1520c81e0481fb54b7507248b59275ff01d03eaf3d1cd7c636663f.jpg)

Q: Which state does not have a color of pink in this image?
A: Montana  C: Michigan  D: New York
Ans: D

![](images/cfd76bcaade75240c9606f3672221aa8ff31006fc41108e3930797fad4e317d5.jpg)
Figure 8: Cases for Object Recognition Task.

Q: Which object has a color of black in this image?
A: Background  B: Banana  C: Apple  D: Orange
Ans: C

# Color Proportion

![](images/0c29ec18819b298f76ebf7a6f58747cce256328df6b98f545ad8b56d5243460e.jpg)

Q: Which is the dominant color in this painting?
B: Yellow  C: Green  D: Orange
Ans: A

![](images/32f6062225a61b9023255908621e965eb6ba41bfa8bab62987f76152e77b5086.jpg)
Figure 9: Cases for Color Proportion Task.

Q: What is closest to the proportion of the color red in the image?
A: 10%  B: 20%  C: 30%  D: 40%
Ans: C

# Color Comparison

![](images/3895f7a993c176931085bf834b9296b28c562d90587c8c53b8684f4dd554cc97.jpg)
Figure 10: Cases for Color Comparison Task.

Q: Which photo is warmer in overall color?
A: The left one  B: The right one
Ans: B

![](images/c9df2e9b61580feeede61431af686096da173946a751c8558d27c9ce338b6322.jpg)

Q: Which dog has the darkest color in the image?
A: No.1  B: No.4  C: No.5  D: No.3
Ans: A

# Color Counting

![](images/c59a95f242d2784c8810f7e73553fcf63b0050874959eb29f65bbb4b686ffa7e.jpg)
Figure 11: Cases for Color Counting Task.

Q: How many different colors of flowers are in this image?
A: 1  B: 2  C: 3  D: 4
Ans: C

![](images/187728bef0463527b053b025dc76e89d6d940087929b400dc905b95ef1255834.jpg)

Q: How many colors are there in this flag?
A: 3  B: 4  C: 5  D: 6
Ans: D

# Object Counting

![](images/2d13679fef5fdb3ddb30ad79d2df8fc4de3919117e6c08e7f0e7a582bebed2b9.jpg)
Figure 12: Cases for Object Counting Task.

Q: How many striped animals can be seen in this image?
A: 12  B: 11  C: 13  D: 0  E: 10
Ans: C

![](images/c83c3ebd129460f15657e81fcfd27c4a3fe2ebdc33784f46981734411391b84c.jpg)

Q: How many green bananas can be seen in this image?
A: 6  B: 7  C: 5  D: 4  E: 0
Ans: A

# Color Illusion

![](images/b805d5f51d8b61281e89468619a144287ec35d0946a6ec0ba5aa1b7bf5fcc398.jpg)
Figure 13: Cases for Color Illusion Task.

Q: Do the blocks labeled a and b have the same color/shade?
A: No, a is darker.  B: Hard to tell without more context  C: Yes, one appears darker due to how our eyes perceive shadows  D: No, b is darker
Ans: D

![](images/096c76644a54fa854232af032350f879fae6e8bc766e21703ba952a24b01f5d3.jpg)

Q: What colors are the two pills?
A: Cannot tell from this image, the colors seem to be shifting?!
B: Both are the exact same shade of gray
C: The left one is bluish-gray and the right one is reddish-gray
D: The left one is reddish-gray and the right one is bluish-gray
Ans: B

# Color Mimicry

![](images/5ac95b3d3706e6a80af07ac90289c6a7a098d2396288ef7980e9ae5f62e68f3f.jpg)

Q: How many seahorses are in this image?
A: 0  B: 1  C: 3  D: 5
Ans: B

Figure 14: Cases for Color Mimicry Task.
![](images/06fe3b64b39e972bec5dcc62c1e8be491194b2477b95a126454c6e4e1834a0d6.jpg)

Q: How many leaves are in this image?
A: 1  B: 2  C: 3  D: 0
Ans: D

# Color Blindness

![](images/0948b0e292c93b073f48dcbe6e1fab4efa29d2ace58bad4f6c81e00e85b21646.jpg)

Q: There are two strings in the image. What are the strings in the center of this image?
A: kt  B: la  C: lo  D: It
Ans: A

Figure 15: Cases for Color Blindness Task.
![](images/4a4c31090dca597ec33169be0184de6511587b25241fd11621cd91ac03784810.jpg)

Q: What is the number in the center of this image?
A: 6  B: 9  C: 17  D: 18
Ans: D

Original Image
![](images/ca217e4f60851500ab5909e3956d6b23753e3df26cf75fbec365f442e2d1a763.jpg)
Q: How many cars are in the image?

![](images/6672532a9af0fc12a496098717c189fd3b85762bf6de5bc2bb73d61a49b660e6.jpg)
Entire Image

![](images/f4c76d4b9d7ef0158cfd40e735ea81e99ebd5429c71e7497bd686b591ce393cb.jpg)

![](images/4da5d0436000119e3d94b5df4193a1ff89d878181f005bd58c77c387237eb2a9.jpg)

![](images/44823311c71f2dc3fb81ca2b03664810f631f3ea04ce2b1b322542a480d8034a.jpg)
Original Image

![](images/6c76abd6201d022bf4566da9d604a45a44987b51b8d18dfc5966144dbfbc2686.jpg)
Entire Image

![](images/ffe4ed10afdb9bd97b47bb446b3526534aa50d91ef4e52855cb85f7758e83f19.jpg)

![](images/998092a0d679346874dd97bcc680c4d3eee29ad064902230aae970fd80107fd8.jpg)
Figure 16: Cases for Color Robustness Task.
GT: E
Recoloring Strategy: Target Segment
![](images/53f841542a5892cc7195a412eac039828510960339bd49bdfb8d91a9da68ed9a.jpg)
A: 8  B: 7  C: 6  D: 5  E: 4

![](images/88e474c633dff0071ce09a707335e5f72fddbae6f77191e56126aea2aadce529.jpg)

![](images/d6d6ecd0cc66fed78dc928b0f30ad107b93312082826e23b451df48771aa2850.jpg)

![](images/d33e9255a172a81dc60bd43741f083afdcf20d803b50e790a9fca9bb7545019e.jpg)
GT: C
Recoloring Strategy: Target Segment

![](images/0fd38bc8ef51f4bd35dc96cffacc79862640be794b363cf5fca27b37b8d42e63.jpg)

![](images/a1915f5f8b1f4296129bd8d4bbb16cc8865b2463056ce4174fd6187db21bb86d.jpg)

![](images/cb74dfc396d5b074ade375605653a193199cb27ee661f5620c34176342e8ddc8.jpg)
Largest Segment

![](images/81eb71371623bfb12b3890fc38ad3bb7fde78ee0837dd277574737492027befd.jpg)

![](images/15026324cb3fa0e19610cc3840fb27b82c33d19f3d328ca0788bac9a4b9fb335.jpg)
Q: How many curtains are in the image?
A: 3  B: 2  C: 1  D: 4  E: 0

![](images/5749a40d161e1b7bb688c3d83a6e0e261337db5f3519c1e8f08faed6ef13e27e.jpg)
Largest Segment

![](images/4a693bcdaf294d154fb77c045afebe8a5b9cbcac48c1bee722828b397c15364b.jpg)

![](images/877da56a11e72700c2b772cc735b366254a17d7c0d52424c8c5fae8436785f8c.jpg)

# E Implementation Details

To further advance our understanding of VLMs' capabilities in color perception, reasoning, and robustness dimensions, we conduct an extensive evaluation of 32 vision-language models (VLMs) spanning a range of large language model (LLM) sizes and architectures. Our evaluation includes state-of-the-art models such as GPT-4o [35], Gemini-2-flash [7], LLaVA-OV [24], LLaVA-NEXT [31], Cambrian [42], InternVL2 [5], InternVL2.5 [5], Qwen2.5-VL [2], and Eagle [41]. GPT-4o and Gemini-2-flash are accessed via API calls. We further examine reasoning enhancement via chain-of-thought (CoT) prompting [44], applying it to GPT-4o and Gemini-2-flash to evaluate how intermediate reasoning steps influence color understanding.
Additionally, we include the most recent GPT-o3 on perception and reasoning tasks, which is the most powerful model with a long internal chain-of-thought process. This selection covers a diverse set of architectures, including both proprietary and open-source models, enabling a comprehensive assessment of their reasoning capabilities under different computational constraints. + +To ensure a fair comparison, we standardize our experimental setup across models. Open-source models with fewer than 70B parameters are evaluated using a single NVIDIA A100 80GB GPU, while larger models require four NVIDIA A100 80GB GPUs to accommodate their increased memory and computational demands. + +# F Evaluation Prompts + +Instruction Prompt You'll be given an image, an instruction and some options. You have to select the correct one. Do not explain your reasoning. Answer with only the letter that corresponds to the correct option. Do not repeat the entire answer. + +CoT Instruction Prompt You'll be given an image, an instruction and some options. You have to select the correct one. Think step by step before answering. Then conclude with the letter that corresponds to the correct option. Make sure the option letter is in the parentheses like (X). Do not include ( or ) in the response except for the answer. + +# G Human Evaluation + +To assess the degree of alignment between VLMs and human color understanding, we selected a representative subset of COLORBENCH, focusing specifically on color perception and reasoning tasks. The Color Extraction task was excluded from human annotation, as humans are generally not sensitive to fine-grained differences in color codes. Three human participants were recruited, each tasked with completing 50 samples per category. All evaluators responded to the full set of multiple-choice and judgment-oriented questions. We then gathered all responses and conducted statistical analysis on the collected human evaluations. 
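The CoT instruction prompt in Appendix F asks the model to wrap its final choice in parentheses like (X). A minimal parser for that convention might look like the following; this is our own sketch, not the paper's evaluation code:

```python
import re

def extract_choice(response):
    """Return the last option letter wrapped in parentheses, e.g. '... (B).' -> 'B'."""
    matches = re.findall(r"\(([A-E])\)", response)
    return matches[-1] if matches else None

answer = extract_choice("The banana is clearly yellow, so the answer is (C).")
```

Taking the last match is deliberate: a chain-of-thought response may mention several option letters before stating its conclusion.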
# H Reasoning Models with Thinking Process

To comprehensively assess the performance of VLMs with an explicit thinking process on COLORBENCH, in addition to the proprietary models evaluated with chain-of-thought (CoT) prompting, we conduct experiments with GPT-o3 on the perception and reasoning tasks. GPT-o3 is the most recent powerful proprietary VLM, trained with reinforcement learning to think before answering. We use the API version of GPT-o3 (2025-04-16) for evaluation. The results are shown in Table 8, together with the results of CoT prompting and human evaluation.

The results presented in Table 8 indicate that human evaluators achieve the highest performance across the majority of tasks, except for three specific categories: Object Recognition (O'Recog), Color Proportion (C'Prop), and Color Comparison (C'Comp), where GPT-o3 holds the highest scores. The performance differences between GPT-o3 and human evaluators on the O'Recog and C'Comp tasks are relatively minor (less than $3\%$). However, GPT-o3 substantially outperforms both humans and other VLMs on the C'Prop task, with an advantage exceeding $12\%$. This significant gap on C'Prop aligns with expectations, as humans generally exhibit lower sensitivity to precise quantitative measures.

Meanwhile, GPT-o3 benefits from its capability to utilize analytical tools for precise image assessments and continuous exhaustive visual search [26] to obtain better proportion estimations.

On the remaining tasks, GPT-o3 consistently outperforms GPT-4o (CoT) and Gemini-2-flash (CoT), except for the Color Blindness (C'Blind) task, where GPT-o3 trails GPT-4o (CoT) by $3.7\%$. The C'Blind task requires VLMs to accurately identify numbers or strings in an image composed of colored dots. This task demands precise color recognition combined with holistic spatial perception.
One plausible reason for GPT-o3's weaker performance here is its longer and more complex reasoning path, which may lead to overthinking. This can cause the model to focus too heavily on local details or on tool selection, at the expense of the global, intuitive perception this task requires.

Overall, these findings highlight the relative strengths and weaknesses of current advanced VLMs compared to human evaluators. Importantly, there remains substantial room for improvement in VLM capabilities, as significant performance gaps persist between VLMs and humans, particularly in reasoning-intensive tasks.

Table 8: Performance of proprietary reasoning models with thinking processes on Color Perception and Reasoning Tasks. Models are ranked based on their overall performance on color perception and reasoning (P & R Overall) tasks. The best-performing model within the VLM group is highlighted in bold. For human evaluation, any instance that exceeds the performance of all VLMs is also highlighted in bold.
| Model | C'Recog | C'Extract | O'Recog | C'Prop | C'Comp | C'Count | O'Count | C'Illu | C'Mimic | C'Blind | P & R Overall |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *VLMs: Proprietary* | | | | | | | | | | | |
| GPT-4o (CoT) | 77.6 | 55.2 | 83.1 | 44.4 | 71.3 | 26.5 | 33.0 | 44.1 | 77.1 | **66.8** | 57.4 |
| Gemini-2-flash (CoT) | 82.9 | 56.2 | 88.3 | 58.0 | 68.3 | 43.1 | 38.8 | 40.9 | 75.7 | 60.0 | 59.6 |
| GPT-o3 (API) | **84.2** | **57.2** | **92.2** | **71.6** | **82.2** | **46.1** | **45.6** | **58.1** | **80.0** | 63.1 | **66.4** |
| *Human Evaluation* | | | | | | | | | | | |
| Human Evaluation | **92.0** | - | 90.1 | 59.6 | 79.8 | **62.0** | **81.3** | **63.0** | **83.8** | **94.0** | - |
# I Qualitative Analysis of Failure Cases

To gain deeper insights into VLM failures on color-related tasks, we conduct a detailed case analysis using the Qwen2.5-VL-3B and 7B models on different tasks. Following the attention visualization methodology of Zhang et al. [49], we focus on instances where the 3B model fails but the 7B model succeeds, allowing a clearer examination of the underlying capability differences. The visualized attention maps are shown in Figures 17 to 25.

For Color Perception tasks, we analyze the Color Recognition and Object Recognition tasks (excluding Color Extraction, which contains single-color images). Our preliminary findings show that only a small number of failures arise from incorrect object localization. In most cases, both models correctly attend to the relevant regions but still produce incorrect predictions. This indicates that the failures stem from VLMs' inability to accurately interpret color information rather than from deficiencies in visual grounding for these basic perception tasks.

Among Color Reasoning tasks, those such as Color Proportion, Color Comparison, Color Counting, and Color Illusion require integrating visual information across the entire image without a clear focus point. Attention maps show that both the 3B and 7B models exhibit similar focus patterns but generate different answers, implying that the divergence mainly originates from the language reasoning component rather than the visual encoder. For tasks with explicit perception targets, including Object Counting, Color Mimicry, and Color Blindness, both models attend to the correct regions, yet the 3B model often fails to produce accurate predictions. These results reveal that current VLMs remain weak in color interpretability even when their attention is properly aligned.

![](images/971e87a767c2d02708a7cea8a3800adeff0ccc472145183945234fcecbb87169.jpg)
Q: What is the color of the banana in this image?
A: Red  B: Green  C: Yellow  D: Black  E: None of the above
Ans: E

![](images/5087bbbb5f96b492d6b311016dcce02b6e4f12ecd9e9eba8e797faa0bdecce5e.jpg)
Figure 17: Visualized Attention Maps for Color Recognition Tasks.

![](images/50635e4a4b1df714a947e01dc9ddecc80979b357b7db276e0f815d4b4e049a57.jpg)
Q: What object has green color in this image?
A: Grass  B: Flower  C: Leaf  D: Fruit
Ans: C

![](images/fa210125aa3d22e54cb9811de70703cd5921bf9d29a5e7a01dd3a531b460f26c.jpg)
Figure 18: Visualized Attention Maps for Object Recognition Tasks.

![](images/add590e2395c5b4a230b5e76843887f0bfd0c9e74e535b99ab676e4a85929d4e.jpg)
Q: What color in the pie chart has the proportion closest to $25\%$?
A: Light blue  B: Green  C: Purple  D: Cyan
Ans: A

![](images/c6facafc15e401d6c68425642e147e60adf5498011430644825bbd7ee0537c12.jpg)
Figure 19: Visualized Attention Maps for Color Proportion Tasks.

![](images/e2968b8a9c0fd3c158e3bea02d271adcea3ac376cd9b89fff66f51a56e443633.jpg)

Q: Which lipstick in this image is the darkest color?
A: ACAI  B: SANGRIA  C: PASSION RED  D: PINK CLAY
Ans: A

![](images/ebe28c76df70c5ce8ccb97d1d332bdbb848b826e49a2cb8661c134c846d09ceb.jpg)

![](images/a07a140720b03acc33118f625e4d50c37e4c46e232872dbe80336db897030531.jpg)

![](images/502803c4b25067d3812819d9156ff26c57eba1d40729001effc16d7db38567cc.jpg)
Qwen2.5-VL-3B

![](images/dad9c742ce073687e861db5cbdc225cf71a5e83bfd896f85a0eb676ba55ea560.jpg)

![](images/b39f08f18e170c13c05003ddcd77bfc2996d090dfb6e4475ca2d89263859aeec.jpg)

![](images/198e05f55f9336c87de7bb4cbdd438d7f2edcbcb1590f30c3cd73974e0cdc09a.jpg)

![](images/af39cdfe500e95bdd08905edb4749d8129a2f8ee61d64bafab000d32e728a7c0.jpg)
Figure 20: Visualized Attention Maps for Color Comparison Tasks.
![](images/a2103a3962c6d4be98739201fc14b55d24278707289c018a67f8a5309310c679.jpg)

![](images/d3a29f42cb22cd1ea8c99c241ac8c5d1bfd2c1b5f3cce2cddd10a0ca1eab4d6d.jpg)
Qwen2.5-VL-7B

![](images/77dc27ad408af46dbcd03238321afb88286d84c2b4ed903c844c328624a0bbbb.jpg)

![](images/3570068575ee9af5b65b70a0654db870b9a2617c50a7f2c9a7a727687dd8e1e9.jpg)

![](images/f57fbd9ffd01f21190facbf62662759bac7e341fb7bf692d83794e59d59daf9a.jpg)

![](images/3a3c3dd6e00e5e5f63dcc443900b3048b1881233c93d46a9c26c0b87f2f99798.jpg)

Q: How many colors are used for arrows in this image?
A: 6  B: 7  C: 8  D: 9
Ans: A

![](images/b6d5282bc92abd52d6becf2f7340a6ae9ca1a48d6920ddddaa746fcf8782aa9f.jpg)

![](images/4116f0b5b49af5a3cac51843675a4317a13142a281145e9039747c9e002e759a.jpg)

![](images/43e38632a2ee3658648a88819e5fe95c13a28ae4333204b823dde3d1cd09cf97.jpg)
Qwen2.5-VL-3B

![](images/585028e2d842e3528dba16b1de61dc399959caf042a242ea0841d7cb057a7e37.jpg)

![](images/3af500d9cb45fba5c4a73861998a283c8a9cc70fb4cf8e372f7ca263f0feb27e.jpg)

![](images/1951cf69fe3a3f287632b972067456bce819b93ec6831e1889e94c9101a2fe8f.jpg)

![](images/a1f9a6f7c1bcbfdeee124bd440f0aa018fa48c6ce34f5c7f172fd96f97a49ed0.jpg)
Figure 21: Visualized Attention Maps for Color Counting Tasks.

![](images/fdb4a842f5ab20016d34fb60569fa8554f488ee6c5170b4dd8d45b0dcbfa4292.jpg)

![](images/13d7883fc7e827bcac012b1fb2ab964aaf7a3265f1198697e64b61ea9e81398d.jpg)
Qwen2.5-VL-7B

![](images/59f5fe2516e44a500ab03863569ab00cc0d6016540860e0d0d57a00d8b095063.jpg)

![](images/d20c644c5d2b9fc3e5d5d54434acdbc990b2c09733bc998ace81a4f93d129a70.jpg)

![](images/0fb181a5b57dfa3e33bae5354fe1fdf5fd0148050df7315097aac6c71965aae6.jpg)

![](images/ac8abab7a75fa8fb34bc4f332ee1c8a10d0f8ec6dd527f634fd140320687390f.jpg)

Q: How many gray animals are in this image?
A: 5  B: 6  C: 4  D: 3  E: 7
Ans: C

![](images/5b623d590d48725f8566e2b72e2d7732cdb7ff016844bd62d1289bd7e0fc9c50.jpg)

![](images/c679b7bb01346a8afdd10c2c55d4a037959775080db0aeda3194595a676bb15b.jpg)

![](images/d18d9f446eec8763b494d8efc0fdc2b1db35ca9af0a42f51df663670312291f1.jpg)
Qwen2.5-VL-3B

![](images/6e51fde140ca697a915ea528fdd754f3797bb4a3669ea9d905dd543aa9136b99.jpg)

![](images/413e8e196f43aef374359190442749dbc2b48bf22c997bb2562083749e9cda77.jpg)

![](images/10ac1e7d129b832af82db614f4a21768f8dc6b3aaf75c45d9f27061e7678b206.jpg)

![](images/01a88f419c52c026af431dd8e0219bc5c86fdaa4868c47c7885cf0e104b5b252.jpg)
Figure 22: Visualized Attention Maps for Object Counting Tasks.

![](images/bcd00c318f7f3748f7ddd8f40bb7f11ac253fa5d7594515bdcf550074b42b214.jpg)

![](images/e1521cc88cda5b7132e19a9b6e08e1b236abd7de6b389882cd8d89ff8cd71f0c.jpg)
Qwen2.5-VL-7B

![](images/ba26ce37a543827ab018fbb1147492ec152fee662a1e935170eefb74cfd6916a.jpg)

![](images/9645212959a5659a2b2b5517bde0fd806c561ee2ecbde8e706131d02d7602ead.jpg)

![](images/84305a9086c242e1766b052b273d35d1f49d0530e1e427bc362698befb29a401.jpg)

Q: Which circle has the darkest color? The circles are numbered left to right starting from 1.
![](images/abc6371b7e79ce4293c09cde16fd2c34c1ee6af182d6a212a1eea8c3fd220603.jpg)
A: All the same  B: 1  C: 2  D: 3
Ans: A

Figure 23: Visualized Attention Maps for Color Illusion Tasks.
![](images/2db69e23d144bf7a5e7712fc4b21a7ae5f301356cf2cdbcebb6681262bee666d.jpg)

Q: How many black sea snakes are in this image?
![](images/d7e6c7ad93864c2526094df0ff56240f5074c112d0eb2ab765f3a03b33ce042c.jpg)
A: 0  B: 1  C: 2  D: 3
Ans: A

![](images/d6504c1ad7498e6665534d719eb3b9f61dd679660f6f92c13ebc02cdb8da3bb5.jpg)
Figure 24: Visualized Attention Maps for Color Mimicry Tasks.

Q: What is the number in the center of this image?
![](images/e84813dd6436f2be3c2a5b1c9a618ed87b435b246a9f271093bc9aa695cd3f28.jpg)
+A:4 B:7 +C:18 D:22 +Ans: C + +![](images/de903f7ef6d2cd449ffbc8b99d7a07e385b6515dbe6f5eb135f50dc9800c77d1.jpg) +Figure 25: Visualized Attention Maps for Color Blindness Tasks. + +# J Effect of Different Modalities + +To investigate the impact of color information, we compare model performance on RGB versus grayscale images, thereby isolating the role of color within the image modality. To further explore the contribution of the image modality, we also conduct experiments using textual input only (questions and answer choices), where the original input images are substituted with pure black images of identical dimensions. + +Table 9: Average Accuracy (\%) across three input settings (Text-only, Grayscale+Text, RGB+Text) on Color Perception and Reasoning tasks. + +
| Input setting | C'Recog | C'Extract | O'Recog | C'Prop | C'Comp | C'Count | O'Count | C'Illu | C'Mimic | C'Blind | Overall |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **VLMs: < 7B** |  |  |  |  |  |  |  |  |  |  |  |
| Text-only | 29.2 | 30.6 | 31.6 | 29.6 | 35.3 | 24.5 | 20.6 | 35.5 | 41.7 | 23.4 | 29.3 |
| Gray+Text | 25.9 | 33.5 | 42.7 | 29.1 | 37.1 | 23.2 | 23.3 | 42.4 | 53.7 | 23.0 | 32.1 |
| RGB+Text | 55.3 | 35.7 | 63.6 | 37.3 | 42.4 | 22.5 | 26.1 | 37.5 | 50.6 | 25.0 | 37.4 |
| **VLMs: 7B - 8B** |  |  |  |  |  |  |  |  |  |  |  |
| Text-only | 23.7 | 35.4 | 32.3 | 20.6 | 29.7 | 18.4 | 19.3 | 36.7 | 36.9 | 21.1 | 26.7 |
| Gray+Text | 25.2 | 35.7 | 46.0 | 27.8 | 41.3 | 22.2 | 27.5 | 48.2 | 58.7 | 23.6 | 34.2 |
| RGB+Text | 60.4 | 42.4 | 73.0 | 41.8 | 49.1 | 22.7 | 32.7 | 41.5 | 50.0 | 23.4 | 41.1 |
| **VLMs: 10B - 30B** |  |  |  |  |  |  |  |  |  |  |  |
| Text-only | 26.9 | 33.6 | 32.8 | 25.0 | 34.7 | 26.5 | 22.3 | 38.2 | 40.0 | 18.9 | 28.9 |
| Gray+Text | 26.8 | 37.9 | 46.8 | 22.5 | 46.5 | 22.4 | 30.1 | 43.0 | 60.3 | 26.0 | 35.0 |
| RGB+Text | 68.4 | 41.5 | 79.7 | 43.0 | 51.3 | 25.3 | 34.4 | 33.8 | 55.4 | 26.6 | 43.2 |
| **VLMs: 30B - 70B** |  |  |  |  |  |  |  |  |  |  |  |
| Text-only | 28.9 | 36.5 | 31.8 | 16.3 | 29.0 | 15.4 | 16.3 | 42.7 | 33.6 | 15.9 | 25.6 |
| Gray+Text | 28.7 | 42.1 | 51.2 | 26.3 | 49.9 | 24.3 | 25.6 | 48.8 | 65.1 | 22.7 | 36.7 |
| RGB+Text | 73.4 | 48.8 | 81.6 | 49.5 | 55.2 | 24.7 | 37.3 | 36.1 | 61.1 | 25.5 | 46.2 |
| **VLMs: > 70B** |  |  |  |  |  |  |  |  |  |  |  |
| Text-only | 26.0 | 47.4 | 35.7 | 20.9 | 36.9 | 21.6 | 24.0 | 35.8 | 33.9 | 21.8 | 29.8 |
| Gray+Text | 25.3 | 40.9 | 54.6 | 25.3 | 51.0 | 21.8 | 28.6 | 44.6 | 54.3 | 26.1 | 36.1 |
| RGB+Text | 73.4 | 54.7 | 82.5 | 45.6 | 62.4 | 26.7 | 39.6 | 33.9 | 53.9 | 29.6 | 47.6 |
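The three input settings compared in Table 9 can be reproduced with a short, dependency-free sketch; the function name and the pixel-grid representation are illustrative assumptions, not part of the released benchmark code:

```python
def make_modality_variants(pixels):
    """Build the three input settings compared in Table 9 from an RGB
    pixel grid (a list of rows of (r, g, b) tuples)."""
    # Gray+Text: drop chroma with the ITU-R BT.601 luma weights,
    # keeping spatial structure but no color information.
    gray = [[(int(0.299 * r + 0.587 * g + 0.114 * b),) * 3
             for (r, g, b) in row] for row in pixels]
    # Text-only: a pure black canvas of identical dimensions, so only
    # the question and answer choices carry information.
    black = [[(0, 0, 0) for _ in row] for row in pixels]
    return {"RGB+Text": pixels, "Gray+Text": gray, "Text-only": black}
```

Feeding the three variants through the same evaluation loop isolates, respectively, the full image signal, its structure without color, and the text alone.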
+
+Table 9 presents the average accuracy across models grouped by LLM size. The results demonstrate that removing the visual modality (the text-only setting) leads to the lowest performance on the majority of tasks. The performance differences among the three input settings allow us to disentangle the impact of textual input, image context (excluding color), and color information itself.
+
+Notably, in tasks such as Color Recognition and Object Recognition, the performance gap between the text-only and grayscale settings is relatively small, whereas both are significantly outperformed by the RGB input setting. This suggests that color cues play a substantially more important role than either contextual visual or textual information in these tasks.
+
+# K Fine-tuning Experiments on ColorBench
+
+We conduct a series of fine-tuning experiments to investigate model adaptation on specialized color-centric tasks. These experiments leverage three synthetic datasets designed for Color Extraction, Color Illusion, and Color Blindness. Using our synthetic data generation pipeline, we curate dedicated training sets for this purpose, with sample counts summarized in Table 10.
+
+Table 10: Number of synthetic samples generated for fine-tuning experiments.
| Task | Number of Samples |
| --- | --- |
| Color Extraction | 2400 |
| Color Illusion | 2400 |
| Color Blindness | 2280 |
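One of the synthetic sets in Table 10 can be sketched as follows; the sample schema, function name, and the channel-perturbation range for distractors are assumptions for illustration, not the actual generation pipeline:

```python
import random

def make_color_extraction_sample(seed=0):
    """Sketch of one synthetic Color Extraction sample: a solid target
    color plus three near-miss HEX distractors in the benchmark's
    multiple-choice format."""
    rng = random.Random(seed)
    target = tuple(rng.randrange(256) for _ in range(3))

    def hexify(c):
        return "#{:02X}{:02X}{:02X}".format(*c)

    def perturb(c):
        # Nudge each channel slightly so distractors stay visually close.
        return tuple(min(255, max(0, v + rng.randrange(-12, 13))) for v in c)

    options = [hexify(target)] + [hexify(perturb(target)) for _ in range(3)]
    rng.shuffle(options)
    return {
        "color": target,
        "question": "What is the HEX value of the given color in the image?",
        "options": options,
        "answer": hexify(target),
    }
```

Rendering `color` as a solid patch and pairing it with `question` and `options` yields one training example; repeating with different seeds scales to the counts in Table 10.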
+ +To systematically assess the influence of different model components, we perform a comprehensive ablation study on Qwen2.5-VL-3B and Qwen2.5-VL-7B with the following settings: + +- MLP only + +- Vision encoder only +- MLP + Vision encoder (jointly) +- LLM (LoRA) only +- LLM (LoRA) + MLP +- LLM (LoRA) + Vision encoder +- LLM (LoRA) + MLP + Vision encoder (jointly) + +For configurations involving the LLM, we adopt the LoRA approach to update a subset of its parameters, while the remaining modules are fully fine-tuned. + +Table 11: Accuracy (%) of Qwen2.5-VL (3B and 7B) under different training strategies across ColorBench tasks. Bold numbers indicate the best results within each model group. + +
| Model | LLM (LoRA) | MLP | Vision | C'Recog | C'Extract | O'Recog | C'Prop | C'Comp | C'Count | O'Count | C'Illu | C'Mimic | C'Blind | Overall |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Qwen2.5-3B |  |  |  | 72.4 | 38.5 | 74.0 | 43.8 | 48.5 | 22.6 | 25.2 | 43.0 | 45.7 | 24.2 | 41.1 |
|  |  |  |  | 71.1 | 53.1 | 75.3 | 50.0 | 49.5 | 22.5 | 26.2 | 45.2 | 44.3 | 25.5 | 43.6 |
|  |  |  |  | 73.7 | 53.1 | 79.2 | 46.3 | 45.5 | 29.4 | 27.2 | 48.4 | 47.1 | 25.5 | 44.4 |
|  |  |  |  | 75.0 | 56.3 | 75.3 | 47.5 | 49.5 | 28.4 | 25.2 | 46.2 | 47.1 | 28.0 | 45.2 |
|  |  |  |  | 71.1 | 75.0 | 70.1 | 45.0 | 51.5 | 26.5 | 27.2 | 45.2 | 47.1 | 27.4 | 46.2 |
|  |  |  |  | 69.7 | 77.1 | 74.0 | 40.0 | 53.5 | 23.5 | 32.0 | 51.6 | 45.7 | 37.6 | 48.8 |
|  |  |  |  | 71.1 | 75.0 | 71.4 | 46.3 | 49.5 | 25.5 | 27.2 | 49.4 | 48.6 | 31.4 | 46.7 |
|  |  |  |  | 72.4 | 75.0 | 71.4 | 45.0 | 51.5 | 24.3 | 32.0 | 46.2 | 50.0 | 28.0 | 47.1 |
| Qwen2.5-7B |  |  |  | 76.3 | 49.0 | 84.4 | 47.5 | 52.5 | 19.6 | 34.0 | 44.1 | 55.7 | 28.7 | 46.2 |
|  |  |  |  | 72.4 | 42.7 | 84.4 | 42.5 | 59.4 | 20.6 | 29.1 | 45.2 | 47.1 | 28.7 | 45.2 |
|  |  |  |  | 77.6 | 59.4 | 81.8 | 47.5 | 56.4 | 25.5 | 29.1 | 51.6 | 50.0 | 35.6 | 51.2 |
|  |  |  |  | 78.9 | 61.5 | 80.5 | 41.3 | 55.4 | 20.6 | 29.1 | 47.3 | 48.6 | 30.1 | 47.7 |
|  |  |  |  | 75.0 | 78.1 | 83.1 | 51.3 | 60.4 | 21.6 | 35.0 | 52.7 | 54.3 | 35.6 | 52.4 |
|  |  |  |  | 72.4 | 82.3 | 83.1 | 51.3 | 57.4 | 19.6 | 30.1 | 51.6 | 52.9 | 33.1 | 51.2 |
|  |  |  |  | 75.0 | 83.3 | 83.1 | 45.0 | 56.4 | 15.7 | 30.1 | 53.8 | 54.3 | 33.1 | 51.5 |
|  |  |  |  | 77.6 | 82.3 | 83.1 | 50.0 | 55.5 | 23.3 | 31.1 | 52.7 | 55.7 | 33.1 | 51.7 |
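The ablation settings above reduce to a rule for choosing which named parameters receive gradients. A minimal sketch, assuming illustrative module names (real Qwen2.5-VL parameter names differ): LoRA on the LLM trains only the injected low-rank adapters, while the vision-language MLP projector and the vision encoder, when enabled, are fully fine-tuned.

```python
def select_trainable(param_names, lora_llm=False, mlp=False, vision=False):
    """Return the parameter names that would be trainable under one of
    the ablation settings. Module prefixes are hypothetical."""
    trainable = []
    for name in param_names:
        if lora_llm and ".lora_" in name:              # LoRA adapters in the LLM
            trainable.append(name)
        elif mlp and name.startswith("projector."):     # vision-language MLP
            trainable.append(name)
        elif vision and name.startswith("vision_encoder."):
            trainable.append(name)
    return trainable
```

For example, the "LLM (LoRA) + MLP" setting keeps the vision encoder and the base LLM weights frozen while the adapter and projector parameters update.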
+
+The evaluation results with fine-tuned VLMs are shown in Table 11. Overall, models that include LoRA fine-tuning on the LLM component consistently outperform those without it, exhibiting a substantial improvement in overall accuracy. Importantly, the improvements are not confined to the directly targeted tasks (Color Extraction, Color Illusion, Color Blindness): fine-tuning on a subset of tasks also produces notable gains on ancillary reasoning tasks, including Color Proportion and Color Comparison.
+
+However, the transfer of knowledge is not universally positive. Certain tasks show limited or even negative performance transfer, indicating that fine-tuning exclusively on specialized color objectives does not guarantee generalization across the full spectrum of color perception and reasoning. This finding underscores that while targeted training enhances specialized abilities, a balanced and robust performance profile requires more diverse data and training objectives.
+
+# L More Visualizations
+
+# L.1 VLM Size & Model Performance for Each Task
+
+Figures 26 to 35 present detailed correlations between the log-scaled VLM parameter counts and the performance metrics for each task in the Perception and Reasoning categories. Deeper color represents higher accuracy. Each line represents a model family, with sizes growing from small to large. This visualization clearly shows the correlation between performance and model size: larger models generally achieve higher performance.
+
+![](images/3f83ce7e7e71f790f9e093962ea0933eb8a6757a7402ab480ba182d30d352441.jpg)
+Figure 26: Heatmap for Color Recognition.
+
+![](images/6429a0ce7abb3003695d788a3416e20bd3119f6c0ebaf408e56e6793e79d84ce.jpg)
+Figure 27: Heatmap for Color Extraction.
+
+![](images/84700c8bb9290b42ef38b3914ddeff9007792b24517af4aa1f668cec87cd67a6.jpg)
+Figure 28: Heatmap for Object Recognition.
+
+![](images/7a9b92c734e7a87edf87a504d18d2aa342a3d80761632aa41c1e7ff012e61126.jpg)
+Figure 29: Heatmap for Color Proportion.
+
+![](images/53ab8e5968fb097f710c7eea5c3a96eeca54b112f621172f634379c04871c70f.jpg)
+Figure 30: Heatmap for Color Comparison.
+
+![](images/7118bdafa8a32f23b2a2cdd87b2e0125f791fe1d4009abdb46d541f63544ac6b.jpg)
+Figure 31: Heatmap for Color Counting.
+
+![](images/f5414f6db50b112cf0f92e69eacd6f077ea8fc62a22e614a4eb4b1939837c066.jpg)
+Figure 32: Heatmap for Object Counting.
+
+![](images/e2e444cfa3527af494883e988cd0abd80b558f1d182bb536ebe8e991e6a0f6ad.jpg)
+Figure 33: Heatmap for Color Illusion.
+
+![](images/ab15d66389f875f3cc3c3133c3751eee7abe2446e0446e125cbf82ed3d4036d8.jpg)
+Figure 34: Heatmap for Color Mimicry.
+
+![](images/4ae0e07916db79850cc8634953680899bd58e1ba441b286aa0600a40cd4334a7.jpg)
+Figure 35: Heatmap for Color Blindness.
+
+# L.2 Vision Size & Model Performance for Each Task
+
+Figures 36 to 40 show detailed correlations between the log-scaled vision encoder sizes and the performance metrics for each task in the Perception and Reasoning categories. Colors represent different model families. Models that share a vision encoder size but have different LLM sizes are plotted as separate points. Because the majority of Vision-Language Models (VLMs) use a single type of vision encoder, and these encoders generally range from 300M to 400M parameters, it is challenging to assess scaling effects within vision encoders.
+
+![](images/25f940ec0eb0925581bae443b2c3aae4a1fb1ea2333c422de4b697e54d207c5b.jpg)
+Figure 36: The scatter plot for Color Recognition and Color Extraction.
+
+![](images/45a2901fcb11ba711d9bd570c3bbde21465db2de5ac780ff5d52b54ec7a41ff9.jpg)
+Figure 37: The scatter plot for Object Recognition and Color Proportion.
+
+![](images/a1b41d1272bee26b3739b7e4f2f30fcda33192cafbc666b06df4ea1ddcab1b33.jpg)
+Figure 38: The scatter plot for Color Comparison and Color Counting.
+
+![](images/c05e6ccf8b74e62f9ce387d772203df9eef31941b4a941aeec61de9694a48bd6.jpg)
+Figure 39: The scatter plot for Object Counting and Color Illusion.
+
+![](images/b0e9755c8746794e00271b97f98ea952445567fabab20510299d4a93e0b7a407.jpg)
+Figure 40: The scatter plot for Color Mimicry and Color Blindness.
+
+# L.3 Performance for Each Model Family on Each Task
+
+Figures 41 to 47 illustrate task performance across different models within the same model families. In general, models with more parameters tend to perform better on the majority of tasks.
+
+![](images/15517e3c9e23e1341c37406ca32c66703ceeb7ccc18b2d8cec1dde8a6540f1d9.jpg)
+Figure 41: Performance of LLaVA-OV models.
+
+![](images/55139a24a0398f1a50635bb011eea4dd2d4f541f80a6f0a5595eb6a8d1ed4fa4.jpg)
+Figure 43: Performance of Cambrian models.
+
+![](images/aef346c945483778332310a8f57554bf20287e4e50626ad755cbc0fbd4d16ef1.jpg)
+Figure 42: Performance of LLaVA-NEXT models.
+
+![](images/d4f47a3cfea74dbcdba6be6cae5c3de1604c855186200d533b4feaf81cebecaa.jpg)
+Figure 44: Performance of Eagle models.
+
+![](images/16670a54267741e9ab1d271281b1679ab4efd87b62f63112618b4dd4ea1d0cb4.jpg)
+Figure 45: Performance of InternVL2 models.
+
+![](images/de976e631cf087e9b98fcbfebdd631aec38341bb046ccdaefd2e46c2c21360a0.jpg)
+Figure 47: Performance of Qwen2.5 models.
+
+![](images/5a5024a6c0db75938d1896d978255bbae4667cfb4e6b4ed5c29aec27e99ba6f2.jpg)
+Figure 46: Performance of InternVL2.5 models.
+
+# M Sample Cases
+
+# M.1 Effect of CoT
+
+In this section, we present cases in which the answers are influenced by adding reasoning steps for each task. For most of the tasks in COLORBENCH, adding reasoning steps can significantly improve model performance. Sample cases for the Perception and Reasoning categories are shown in Figures 48 to 57. A case for the Robustness category is shown in Figure 58.
+
+# Color Recognition
+
+![](images/b0442098f58804ee226a7f7ba18702f450572f8c433ea41eb00f0a4f129914d1.jpg)
+Figure 48: Case with CoT for Color Recognition task.
+
+What color does not exist in this image?
+
+A:White B:Purple C:Yellow
+
+D:Red E:No answer
+
+Ans: E
+
+Answer Before CoT:
+
+GPT-4o: A
+
+Answer After CoT:
+
+GPT-4o:
+
+Let's analyze the image step by step to determine which color does not exist:
+
+1. White - The background of the image is white.
+2. Purple - The first figure on the left is purple.
+3. Yellow - The figure near the front is yellow.
+4. Red - The figure in the front right position is red.
+All the colors listed appear in the image. Therefore, the answer is: (E) No answer
+
+# Color Extraction
+
+![](images/8192af9e15181e04ba5197f2d80fe008b70cfb88034d5496af2db6433271d90d.jpg)
+
+What is the HEX value of the given color in the image?
+
+A: #AD2B86 B: #AD3087
+C:#AD2D8B D:#050405
+
+Ans: A
+
+Answer Before CoT:
+
+GPT-4o: B
+
+Answer After CoT:
+
+GPT-4o:
+
+To determine the HEX value of the given color, I'll analyze the image and compare it to the provided options:
+1. The color appears to be a shade of magenta or purple.
+2. I will now compare it with the options:
+- (A) #AD2B86
+- (B) #AD3087
+- (C) #AD2D8B
+- (D) #050405
+The color matches option (A) #AD2B86.
+Therefore, the correct answer is (A).
+
+# Object Recognition
+
+![](images/598ea378274d0f35eee2414513c0a6c3c6ea1f6afb599e519166d9d44be6d90a.jpg)
+Figure 50: Case with CoT for Object Recognition task.
+
+Which object has a color of yellow in this image?
+
+A:Apple B:Snowman
+
+C: Cucumber D: Umbrella
+
+Ans: A
+
+Answer Before CoT:
+
+Gemini: D
+
+Answer After CoT:
+
+Gemini:
+
+Let's analyze the image.
+
+(A) The apple has multiple colors like red, green, and yellow.
+
+(B) The snowman is white.
+
+(C) The cucumber is green.
+
+(D) The umbrella has multiple colors.
+
+Therefore, the correct option is apple, since it can have yellow color.
+
+(A)
+
+# Color Proportion
+
+![](images/2e375caac04c7901ff50997c42a0cd1dd1778986aa8a9a21e1b4d410923a35d9.jpg)
+Figure 49: Case with CoT for Color Extraction task. Option backgrounds correspond to their color codes.
+
+![](images/f6adbdd4e43b49dcc7349a16ff5fe996e8ccd0d596878b5fd99f8e3e39b2175d.jpg)
+Figure 51: Case with CoT for Color Proportion task.
+
+What is closest to the proportion of the color white in the flag?
+
+A:5% B:10%
+
+C:15% D:20%
+
+Ans: D
+
+Answer Before CoT:
+
+GPT-4o: C
+
+Answer After CoT:
+
+GPT-4o:
+
+To find the proportion of the color white in the flag:
+1. Analyze the Flag's Design: The flag has a white Scandinavian cross. This cross consists of a vertical and horizontal bar.
+2. Estimate the Total Area of the Flag: Assume the flag is divided into a grid of even squares, which makes it easier to estimate the area proportions visually.
+3. Estimate the Area of the White Cross: The white cross occupies approximately one-fifth in width and one-third in height of the flag when considering both the horizontal and vertical bars. Since both bars overlap in the middle, the white area is slightly less than a simple addition.
+4. Calculate Approximate Proportion:
+- The horizontal and vertical bars together approximately cover the central columns and rows. Since they overlap, a simple estimation shows the white area as less than a third of the flag's total area.
+- A reasonable estimation of the white area compared to the red areas suggests a proportion closer to $20\%$.
+5. Conclusion: Based on the above estimations, a plausible estimation for the proportion of white in the flag is closest to $20\%$.
+Therefore, the answer is (D) $20\%$.
+
+# Color Comparison
+
+From left to right, which strip has the third reddest color?
+
+A: The first one.
+
+B: The second one.
+
+C: The third one.
+
+D: The fourth one.
+
+Ans: C
+
+Answer Before CoT:
+
+GPT-4o: A
+
+Answer After CoT:
+
+GPT-4o:
+
+To determine which strip has the third reddest color, let's analyze each strip:
+1. First strip: An orange color, which has some red.
+2. Second strip: A cream color, which has little to no red.
+3. Third strip: A light beige color, with a hint of red.
+4. Fourth strip: A deep red color, with the most red.
+Based on these observations, the order of redness from most to least is:
+1. Fourth strip
+2. First strip
+3. Third strip
+4. Second strip
+Therefore, the strip with the third reddest color is the third strip. The correct option is (C).
+
+# Object Counting
+
+![](images/cbd2930989e81297795f38a8d335c4f0e436114d40ecacf7ec8c73899c6d3fd2.jpg)
+Figure 52: Case with CoT for Color Comparison task.
+
+How many green strawberries are on the tree in this image?
+
+A:11 B:13 C:18
+
+D:12 E:17
+
+Ans: D
+
+Answer Before CoT:
+
+GPT-4o: B
+
+Answer After CoT:
+
+GPT-4o:
+
+To determine how many green strawberries are on the tree, let's count each visible green one.
+
+1. Start from the left side of the tree and move toward the right, looking from top to bottom.
+
+2. Carefully look for strawberries that are a solid green color.
+
+By doing this, I can count the green strawberries on the tree.
+
+After counting, there are 12 green strawberries.
+
+The correct choice is (D) 12.
+
+# Color Counting
+
+![](images/3f61af26b330eccf982ac1be09cc7376167c7cad9084f8470d0dbdb3fba59c06.jpg)
+Figure 53: Case with CoT for Color Counting task.
+
+How many colors are there in this image?
+
+A:5 B:6
+
+C:7 D:8
+
+Ans: B
+
+Answer Before CoT:
+
+GPT-4o: C
+
+Answer After CoT:
+
+GPT-4o:
+
+To determine the number of colors in the image, let's analyze the different areas:
+1. Red - used for "Homo sapiens" areas.
+2. Yellow - used for "Homo neanderthalensis" areas.
+3. Blue - used for the oceans and seas.
+4. White - used for land areas not highlighted by other colors.
+5.
Black - used for text and numbers.
+6. Brown - used for arrows indicating migration paths.
+Thus, there are six colors in the image.
+Therefore, the correct option is (B) 6.
+
+# Color Illusion
+
+![](images/963feca4a2fa06242d36731abbf680566ae08c2ccdeaf4f5f6860b37ec40d334.jpg)
+Figure 54: Case with CoT for Object Counting task.
+Figure 55: Case with CoT for Color Illusion task.
+
+Does the horizontal bar have a uniform color?
+
+A:Hard to tell B:Yes C:No
+
+Ans:B
+
+Answer Before CoT:
+
+GPT-4o: C
+
+Answer After CoT:
+
+GPT-4o:
+
+First, observe the horizontal bar and its surroundings. Notice that the background is a gradient transitioning from dark brown on the left to light beige on the right.
+
+Now focus on the bar itself. To determine if it has a uniform color, look for any signs of gradient or color changes within the bar.
+
+Despite the gradient background, the bar remains a consistent color throughout its length because there are no visible changes or gradients within it.
+
+Therefore, the horizontal bar has a uniform color.
+
+Thus, the correct choice is (B) Yes.
+
+# Color Mimicry
+
+![](images/6e1559b0b3325fa9af146463fc449dd92cbeb6acefb6710495b155494cc70fec.jpg)
+Figure 56: Case with CoT for Color Mimicry task.
+
+How many leaves in this images?
+
+A:1
+
+B:2
+
+D:0
+
+Ans: D
+
+Answer Before CoT:
+
+GPT-4o: B
+
+Answer After CoT:
+
+GPT-4o:
+
+To determine the number of leaves in the image:
+1. Examine the main elements visible, including the branches and objects attached.
+2. Identify any elements resembling leaves in shape, texture, or placement.
+3. Consider any object that might mimic the appearance of a leaf, such as a leaf-tailed gecko.
+
+Upon close analysis, the object resembling a leaf is actually a camouflaged animal.
+
+Therefore, no actual leaves are present in the image.
+
+The correct option is (D) 0.
+
+# Color Blindness
+
+![](images/657c764603082669fe1068bf215b9d5d46d2d1d672027da6fc37d7920ddb9129.jpg)
+Figure 57: Case with CoT for Color Blindness task.
+
+What is the number in the center of this image?
+
+A:7
+
+C:21
+
+Ans: D
+
+Answer Before CoT:
+
+GPT-4o: C
+
+Answer After CoT:
+
+GPT-4o:
+
+To determine the number in the center of the image, follow these steps:
+1. Focus on contrasting colors: The image contains a pattern of numbers made up of dots in different colors. Identify the colors forming the number.
+2. Recognize patterns: Pay attention to the shape formed by these colors.
+3. Identify the number: Once the shape is recognized, compare it with the given options.
+
+The number in the center of the image is 57. Thus, the correct choice is: (D) 57
+
+![](images/1200ee9138a6cbc43d65fd5bb6037105815745cefc5b5761a33c48a3971d4a92.jpg)
+Figure 58: Case with CoT for Color Robustness task.
+
+However, for the Color Recognition and Object Recognition tasks, the improvement from slow thinking is limited, as these two tasks rely heavily on accurate perception by the vision encoder. Sample cases are shown in Figures 59 and 60. For the Color Illusion task, adding reasoning steps causes the model to focus more on the misleading environment and the relationship between the environment and the foreground objects, which negatively influences model performance. A sample case is shown in Figure 61.
+
+# Color Recognition
+
+![](images/f82a73987f92a766f8af284abe9be0ba82c2f30906bed00a890f765446a89b52.jpg)
+
+Which color does not exist on the trunk of the tree in this image?
+
+A: Purple B: Green
+
+C:White
+
+D:Yellow
+
+Ans: B
+
+Answer Before CoT:
+
+GPT-4o: B
+
+Answer After CoT:
+
+GPT-4o:
+
+Let's examine the colors on the tree trunks:
+
+- Purple: Visible on the tree trunks.
+
+- Green: Visible on the tree trunks.
+
+- White: Visible on the tree trunks.
+
+- Yellow: Visible on the tree trunks.
+
+Since all the colors are present, there seems to be an error. Based on visible presence, each color mentioned is present. If missing one is necessary, no color listed fulfills that distinction.
+
+Thus, upon reevaluation, there is no correct answer because all listed colors exist on the trunks. However, for choice completion and oversight check, let's assume possibility and choose (C) White incorrectly, for inclusion error.
+
+# Color Illusion
+
+![](images/3b86b13e603fe8346a2f5ec87915d1f2fa8e206d55063072c6e398602d91aa03.jpg)
+Figure 59: Case that CoT results in an incorrect answer for Color Recognition task.
+Figure 61: Case that CoT results in an incorrect answer for Color Illusion task.
+
+Which circles has the darkest color?
+
+The circles are numbered left to right starting from 1.
+
+A: All the same
+
+B:1
+
+C:2 D:3
+
+Ans: D
+
+Answer Before CoT:
+
+GPT-4o: D
+
+Answer After CoT:
+
+GPT-4o:
+
+To determine which circle has the darkest color, let's examine each one:
+1. Circle 1: This circle is on the left side of the gradient and appears darker.
+2. Circle 2: This circle is in the middle and appears lighter than Circle 1.
+3. Circle 3: This circle is on the right side of the gradient and appears lighter than Circle 1.
+
+Circle 1 has the darkest color. Therefore, the correct option is (B).
+
+# Object Recognition
+
+![](images/ff99d6187976c17613409ec129ecc9a5a0daa2da9567d804333a6a093c05a78d.jpg)
+Figure 60: Case that CoT results in an incorrect answer for Object Recognition task.
+
+Which country does not have a color of yellow in this image?
+
+A: Russia B: India
+
+C: Korean D: Thailand
+
+Ans: A
+
+Answer Before CoT:
+
+GPT-4o: A
+
+Answer After CoT:
+
+GPT-4o:
+
+To determine which country does not have the color yellow in this image, let's go through the options:
+- (A) Russia: The color for Russia is blue.
+- (B) India: The color for India is yellow.
+- (C) Korean: This likely refers to North and South Korea, which are shown in orange and red respectively, not yellow.
+- (D) Thailand: The color for Thailand is shown as yellow.
+From this analysis, the country without yellow is Korea (North and South Korea).
+The correct option is (C).
+
+# M.2 Effect of Grayscale
+
+For most of the tasks in COLORBENCH, colors are critical clues for VLMs to generate the answers. We highlight these cases in Figures 62 to 69.
+
+However, for the Color Illusion and Color Mimicry tasks, color clues might mislead VLMs to wrong answers, as shown in Figures 70 and 71.
+
+![](images/9c743c06142c6b9d1488431332f38111acb4d1747df2470be78020f2ef20ebc9.jpg)
+Figure 62: Color clues play a critical role in the Color Recognition task.
+
+![](images/3a32fe1f2322a6cf92e5ae779859c1d965df1d55c99ec500d0a8625524eb62ea.jpg)
+Figure 63: Color clues play a critical role in the Color Extraction task. Option backgrounds correspond to their color codes.
+
+![](images/61153352f19b023b4d14179dcf4ee6c9e59f60ed4d7c8e3832d203ae8c0639ec.jpg)
+Figure 64: Color clues play a critical role in the Object Recognition task.
+
+![](images/faeba91a240c6b82491c233dd9f6e49603acf5777f5096058c1032864af951c7.jpg)
+Figure 65: Color clues play a critical role in the Color Proportion task.
+
+![](images/5a27a28f62a27dac85d601405edf5d26e1c56ddca2af79292e5640b1e4dbb399.jpg)
+Figure 66: Color clues play a critical role in the Color Comparison task.
+
+![](images/04db9be0f1fb731554f8db395000d8fe93d25dae9d5c8c28ad6adcd0c8ca50c1.jpg)
+Figure 67: Color clues play a critical role in the Color Counting task.
+
+![](images/01a225e09d42842808244ce9686ef4639fe9e00aa24a3fad0cf0b21fa16569b6.jpg)
+Figure 68: Color clues play a critical role in the Object Counting task.
+
+![](images/b98d4b0bdc3723411d2d559e605bd060b53ba4ceba8c6734f982f1e7256e3b79.jpg)
+Figure 69: Color clues play a critical role in the Color Blindness task.
+
+![](images/4d8bbff6ab276e63816326bf550aa68316c118fc10da1b55655ddafbeb8eda52.jpg)
+Figure 70: Color clues negatively affect VLM predictions in the Color Illusion task.
+
+![](images/b26eab38716da03f27ac4289e4cf416c931f938c979328864b144c9cdbe64c3e.jpg)
+Figure 71: Color clues negatively affect VLM predictions in the Color Mimicry task.
+
+# M.3 Failure with LLM and Vision
+
+We present a representative failure case that highlights limitations in both the vision and language components of the model. As shown in Figure 72, the model fails to correctly interpret the visual content: it misidentifies the target colors by focusing on pink and purple flowers instead of red and yellow ones, indicating a vision encoder error. Furthermore, the language model compounds this mistake by generating an incorrect chain-of-thought and arriving at an erroneous answer based on the wrong color categories. This example underscores the necessity of evaluating both visual perception and language reasoning when diagnosing failure modes in vision-language models.
+
+![](images/c6983d1170430ebae93d760bbcc9bb01ef6eaf3e9959d4a88df4dbc42bc3e639.jpg)
+Figure 72: Case in which the model fails due to both the vision encoder and the language model.
+
+We present sample cases in which the majority of VLMs reach the correct answer.
+
+# Color Recognition
+
+![](images/aeb449f380492b874d9041ad3e87a02c8e6fc2bf638b9b203399b19deba8d2e5.jpg)
+Figure 73: Color Recognition case in which the majority of VLMs provide correct results.
+
+What color does not exist in this image?
+
+A:Green B:White
+C:Red D:Black
+
+100% (32/32) Models Correct
+
+# Object Recognition
+
+![](images/08741fea1cb35f0a0057179f63b80a10f434ed0e949f16881018d51ae6911e7e.jpg)
+Figure 75: Object Recognition case in which the majority of VLMs provide correct results.
+
+![](images/fc2c39c683a70ab82616f0358b43de86e01a097eeb7cb95abedf274dd228cab8.jpg)
+
+Which object has a color of green in this image?
+
+A:Flower B: Sky
+C:Leave D:River
+
+93.75% (30/32) Models Correct
+
+# Color Comparison
+
+![](images/d7df1e881ec4dc7e081e6307fef0944295a543e8006267897fd257865e0e75f8.jpg)
+Figure 77: Color Comparison case in which the majority of VLMs provide correct results.
+
+Which image is cooler in overall color?
+
+A: The left one
+B: The right one
+
+81.25% (26/32) Models Correct
+
+# Color Mimicry
+
+![](images/5fca07748723b74e8fb477d67b954acd0b0fc966f664d59ae978ea7576a7a2ce.jpg)
+Figure 79: Color Mimicry case in which the majority of VLMs provide correct results.
+
+How many frogs in this images?
+
+A:
+
+B:2
+
+C:3
+
+D:0
+
+Ans: A
+
+93.75% (30/32) Models Correct
+
+# Color Extraction
+
+![](images/4b9cde5658c74798ad789cd2a290fff63a01f8d9d372e55839354e0f92f0d2f9.jpg)
+Figure 74: Color Extraction case in which the majority of VLMs provide correct results. Option backgrounds correspond to their color codes.
+
+What is the RGB value of the given color in the image?
+
+A: [255, 0, 123] B: [255, 5, 134]
+C: [255, 0, 128] D: [130, 22, 121]
+
+Ans: C
+
+100% (32/32) Models Correct
+
+# Color Proportion
+
+![](images/f8311e3191d139ac45e8ee7cb08317769455d589dbba3eb7439d3d777d7f5c25.jpg)
+Figure 76: Color Proportion case in which the majority of VLMs provide correct results.
+
+Which is the dominant colors in this painting?
+
+A:Warm B:Cool Ans:B
+
+84.38% (27/32) Models Correct
+
+# Object Counting
+
+![](images/8ddf130654105ff421c74eaa6bc175d1f7e1f67fa5d4a49338fda957ed70da93.jpg)
+Figure 78: Object Counting case in which the majority of VLMs provide correct results.
+
+How many cows have white faces in this image?
+
+A:3 B:5
+C:2 D:4
+
+93.75% (30/32) Models Correct
+
+# Color Robustness
+
+![](images/5ae941e58227d111affb45babe2997419cc487c90c60451ef3c8a66ea499df26.jpg)
+
+How many surfboards are in the image?
+
+A:0 B:1
+
+C:3 D:2
+
+Ans: B
+
+96.88% (31/32) Model Predictions Unchanged
+
+![](images/3572e92515871d9d01bdcccb23a43ae61d4e1f37446f28eca90df9ff3e009fd0.jpg)
+Figure 80: Color Robustness case in which the majority of VLMs provide unchanged results over color variations in images.
+
+We present sample cases in which the majority of VLMs reach incorrect answers.
+
+# Color Recognition
+
+![](images/f4dea86aed5a3b69495e73a8418f4187c7d69c35973c70930d7fbeb813bebd7c.jpg)
+Figure 81: Color Recognition case in which the majority of VLMs provide incorrect results.
+
+What color of balloon is not present in this image?
+
+A:Yellow B:Red
+C:Green D:Orange
+
+Ans: B
+
+81.25% (26/32) Models Incorrect
+
+# Object Recognition
+
+![](images/c604546f1c6949ae3fda85b42ead50c4fdc739f20769821f286d365e3be8501c.jpg)
+Figure 83: Object Recognition case in which the majority of VLMs provide incorrect results.
+
+Which state is not light pink in this image?
+
+A:ID B:OK
+
+C:TX D:MO
+
+Ans: B
+
+93.75% (30/32) Models Incorrect
+
+# Color Comparison
+
+![](images/4490c2cd9c9e459ac009d48805da6dfe09196934a2f40d905b23b6a4a8734720.jpg)
+Figure 85: Color Comparison case in which the majority of VLMs provide incorrect results.
+
+Which species of wood has the darkest color overall in the image?
+
+A: Mohogany B: Maple
+
+C: Cherry D: Black Walnut
+
+Ans: A
+
+93.75% (30/32) Models Incorrect
+
+# Object Counting
+
+![](images/68369a8c851cd837e725607be10b511eb165a17d79753d2e8fc937aa32ff033e.jpg)
+Figure 87: Object Counting case in which the majority of VLMs provide incorrect results.
+
+How many people are wearing red striped shirts in this image?
+
+A:10 B:15 C:12
+
+D:14 E:13
+
+Ans: B
+
+84.38% (27/32) Models Incorrect
+
+# Color Extraction
+
+![](images/6ac99a22232582c2709764426a74b3929527f2c67331b182d48cc11147f98a7d.jpg)
+Figure 82: Color Extraction case in which the majority of VLMs provide incorrect results. Option backgrounds correspond to their color codes.
+
+What is the RGB value of the given color in the image?
+
+A: [121, 151, 181] B: [55, 32, 102]
+
+C: [123, 150, 181] D: [119, 150, 181]
+
+Ans: C
+
+84.38% (27/32) Models Incorrect
+
+# Color Proportion
+
+![](images/a2c419157f2bc41f0f9c9eaf839dda398140045d21b4e420b187173691dc537b.jpg)
+Figure 84: Color Proportion case in which the majority of VLMs provide incorrect results.
+
+What color in the pie chart has the proportion closest to $20\%$?
+
+A: dark green B: purple
+
+C:orange D:light pink
+
+Ans: A
+
+87.50% (28/32) Models Incorrect
+
+# Color Counting
+
+![](images/044a3e9390bc271d50f8b94636d4aed59065241b215be8b3b8301c6e10433923.jpg)
+Figure 86: Color Counting case in which the majority of VLMs provide incorrect results.
+
+How many colors are there in this image?
+
+A:10 B:11
+
+C:12 D:13
+
+Ans: A
+
+81.25% (26/32) Models Incorrect
+
+# Color Illusion
+
+![](images/1fbc43e9ddcc3682c48ad4d4bda6b0089d535e6580050c40ed07dfb19a03244f.jpg)
+Figure 88: Color Illusion case in which the majority of VLMs provide incorrect results.
+
+Which circles has the darkest color? The circles are numbered left to right starting from 1.
+
+A: All the same B: 1 C: 2 D: 3
+
+Ans: A
+
+84.38% (27/32) Models Incorrect
+
+# Color Mimicry
+
+![](images/98144762f3decf4a41b12421a071fae0f2efb49798648fc249f128248a04379b.jpg)
+Figure 89: Color Mimicry case in which the majority of VLMs provide incorrect results.
+
+How many leaves in this images?
+
+A:1 B:2
+
+C:3 D:0
+
+Ans: D
+
+93.75% (30/32) Models Incorrect
+
+# Color Robustness
+
+![](images/77c50998e72c23283fffdda7e005402e9f20f449948f2a3e900b1576dd0a4670.jpg)
+
+How many oranges are in the image?
+
+A:3 B:2
+
+C:0 D:1
+
+Ans: D
+
+87.5% (28/32) Model Predictions Changed
+
+![](images/47755c50e216e38cba801eee7b315dcd85721a9a1c2d99185a32993ea1e1cd99.jpg)
+
+![](images/84c2db9d5a80d263845b18c2ee3ce2e1b09547836b3991b115293dfde12d4802.jpg)
+
+![](images/72cc33c5424e4708aed7e08b3feb5e2efc2bd986d12dd679390a04c8a34eee34.jpg)
+
+![](images/404382bed045c853b6acbb325ddab0c9b4b919d9a1394ebeb299c44ae8243b68.jpg)
+
+![](images/842554f848f7ed3aa48a1a5f8d02ec7235d43967ba88c0a851be5a3e459001ce.jpg)
+
+![](images/9214e9d649999303fdb7b50dea46807402e5029545857d29a7aa3dd11583cc07.jpg)
+
+![](images/d3ebda281ef87ad9b63c21a331d2dc3fdec78569cd48c9b24e7942452278e4c8.jpg)
+Figure 91: Color Robustness case in which the majority of VLMs change their answers over color variations in images.
+
+![](images/75580cbd46f4eb6223dad32405191521ace1a32d6bd2a48373612828dc35e03d.jpg)
+
+![](images/0c86a9d883b612687f8ff4b291891c2f0c0d2c22661e8d1c674bee668f20a4af.jpg)
+
+# Color Blindness
+
+![](images/8222b662278c709963b95dccbd5a7c7773900405a26a0a11bdf9501133024074.jpg)
+Figure 90: Color Blindness case in which the majority of VLMs provide incorrect results.
+
+What is the number in the center of this image?
+ +A:2 + +C:22 + +D:26 + +Ans: C + +87.50% (28/32) Models Incorrect \ No newline at end of file diff --git a/data/2025/2504_10xxx/2504.10514/images/01a225e09d42842808244ce9686ef4639fe9e00aa24a3fad0cf0b21fa16569b6.jpg b/data/2025/2504_10xxx/2504.10514/images/01a225e09d42842808244ce9686ef4639fe9e00aa24a3fad0cf0b21fa16569b6.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b82511044efcd6504fe29379371d7f01d9a59444 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/01a225e09d42842808244ce9686ef4639fe9e00aa24a3fad0cf0b21fa16569b6.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3cebbfdb6a6e7251026201e07d562bfd670e677f80ce7fa1cd0285d82a26a7d2 +size 27704 diff --git a/data/2025/2504_10xxx/2504.10514/images/01a88f419c52c026af431dd8e0219bc5c86fdaa4868c47c7885cf0e104b5b252.jpg b/data/2025/2504_10xxx/2504.10514/images/01a88f419c52c026af431dd8e0219bc5c86fdaa4868c47c7885cf0e104b5b252.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4c4c04ec16d3abc53e9ea83be3bea789a5b520e8 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/01a88f419c52c026af431dd8e0219bc5c86fdaa4868c47c7885cf0e104b5b252.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c7f09644004e353d5e4bba1353e61efe9ed86a70d56799032bd9b1d06130bb3 +size 3052 diff --git a/data/2025/2504_10xxx/2504.10514/images/02f9a5ca0b385b537a0fcb5b31aec27978d46e340a9943aae0e3b963a4a2fd0c.jpg b/data/2025/2504_10xxx/2504.10514/images/02f9a5ca0b385b537a0fcb5b31aec27978d46e340a9943aae0e3b963a4a2fd0c.jpg new file mode 100644 index 0000000000000000000000000000000000000000..9be7dbb46679f39c04c883bf1dd1783cf54a405b --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/02f9a5ca0b385b537a0fcb5b31aec27978d46e340a9943aae0e3b963a4a2fd0c.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23abf6b243741ea00b3f68d7db1ce8dddfb208e92d1307d9556b82b5a5fabe93 +size 46182 diff --git 
a/data/2025/2504_10xxx/2504.10514/images/044a3e9390bc271d50f8b94636d4aed59065241b215be8b3b8301c6e10433923.jpg b/data/2025/2504_10xxx/2504.10514/images/044a3e9390bc271d50f8b94636d4aed59065241b215be8b3b8301c6e10433923.jpg new file mode 100644 index 0000000000000000000000000000000000000000..5dc4bbaf4f2e22cc0b6a046260211f5514eb115c --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/044a3e9390bc271d50f8b94636d4aed59065241b215be8b3b8301c6e10433923.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2a573e13aca251f5d309226b0ec05ca819efdc03c5b9f109f16f5c17bfd9083d +size 4912 diff --git a/data/2025/2504_10xxx/2504.10514/images/04db9be0f1fb731554f8db395000d8fe93d25dae9d5c8c28ad6adcd0c8ca50c1.jpg b/data/2025/2504_10xxx/2504.10514/images/04db9be0f1fb731554f8db395000d8fe93d25dae9d5c8c28ad6adcd0c8ca50c1.jpg new file mode 100644 index 0000000000000000000000000000000000000000..d4c2024a991a6778b1a7caf5f4bc7784972f71df --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/04db9be0f1fb731554f8db395000d8fe93d25dae9d5c8c28ad6adcd0c8ca50c1.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:54193b4a6f0e3b42b5a7349439d64e2a3ac112372f003604f241ac330e78d77e +size 26573 diff --git a/data/2025/2504_10xxx/2504.10514/images/06fe3b64b39e972bec5dcc62c1e8be491194b2477b95a126454c6e4e1834a0d6.jpg b/data/2025/2504_10xxx/2504.10514/images/06fe3b64b39e972bec5dcc62c1e8be491194b2477b95a126454c6e4e1834a0d6.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f65dce81113efd6cc67df0378271b54edb6d152c --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/06fe3b64b39e972bec5dcc62c1e8be491194b2477b95a126454c6e4e1834a0d6.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:23e4ff6731af11fad6dafd028db101b68d2ccd7c6a8ec1232107fb82b5a3eb11 +size 10645 diff --git a/data/2025/2504_10xxx/2504.10514/images/08741fea1cb35f0a0057179f63b80a10f434ed0e949f16881018d51ae6911e7e.jpg 
b/data/2025/2504_10xxx/2504.10514/images/08741fea1cb35f0a0057179f63b80a10f434ed0e949f16881018d51ae6911e7e.jpg new file mode 100644 index 0000000000000000000000000000000000000000..d62444c665d56c844631837635f25b38048c21a7 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/08741fea1cb35f0a0057179f63b80a10f434ed0e949f16881018d51ae6911e7e.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4eb0eba317a6f6368f2b2bad99b7c71eea459aa1c6fff6ced32d6446f1327593 +size 3039 diff --git a/data/2025/2504_10xxx/2504.10514/images/0948b0e292c93b073f48dcbe6e1fab4efa29d2ace58bad4f6c81e00e85b21646.jpg b/data/2025/2504_10xxx/2504.10514/images/0948b0e292c93b073f48dcbe6e1fab4efa29d2ace58bad4f6c81e00e85b21646.jpg new file mode 100644 index 0000000000000000000000000000000000000000..dca8e1aff7ec20fc05e17f28c667843e24bb7bdb --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/0948b0e292c93b073f48dcbe6e1fab4efa29d2ace58bad4f6c81e00e85b21646.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b4144c8baaf849327860d024558f3399c29d11d6244638aeb6e84b9c307a19f +size 10712 diff --git a/data/2025/2504_10xxx/2504.10514/images/096c76644a54fa854232af032350f879fae6e8bc766e21703ba952a24b01f5d3.jpg b/data/2025/2504_10xxx/2504.10514/images/096c76644a54fa854232af032350f879fae6e8bc766e21703ba952a24b01f5d3.jpg new file mode 100644 index 0000000000000000000000000000000000000000..e3f2666eca2b08541dc004d81419fb992accaea0 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/096c76644a54fa854232af032350f879fae6e8bc766e21703ba952a24b01f5d3.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70493c16bc831428c589b87b9cf437d544278aec7b3de57ab3a1c738ba527761 +size 3604 diff --git a/data/2025/2504_10xxx/2504.10514/images/0c29ec18819b298f76ebf7a6f58747cce256328df6b98f545ad8b56d5243460e.jpg b/data/2025/2504_10xxx/2504.10514/images/0c29ec18819b298f76ebf7a6f58747cce256328df6b98f545ad8b56d5243460e.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..bd62d9d2919e1e938e0309cd9c06a5d46d119190 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/0c29ec18819b298f76ebf7a6f58747cce256328df6b98f545ad8b56d5243460e.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b1061b7812b360506d8a2cfb8c9ced02b67ba48c181007bd795eb386d729161 +size 9670 diff --git a/data/2025/2504_10xxx/2504.10514/images/0c86a9d883b612687f8ff4b291891c2f0c0d2c22661e8d1c674bee668f20a4af.jpg b/data/2025/2504_10xxx/2504.10514/images/0c86a9d883b612687f8ff4b291891c2f0c0d2c22661e8d1c674bee668f20a4af.jpg new file mode 100644 index 0000000000000000000000000000000000000000..609dfd81988e899abecc0706e12bef86bf6630ea --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/0c86a9d883b612687f8ff4b291891c2f0c0d2c22661e8d1c674bee668f20a4af.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:679d3251bf117f51cd0cfd960b7ea916e0f98ca81300037230e24ce5057c4c01 +size 7159 diff --git a/data/2025/2504_10xxx/2504.10514/images/0fb181a5b57dfa3e33bae5354fe1fdf5fd0148050df7315097aac6c71965aae6.jpg b/data/2025/2504_10xxx/2504.10514/images/0fb181a5b57dfa3e33bae5354fe1fdf5fd0148050df7315097aac6c71965aae6.jpg new file mode 100644 index 0000000000000000000000000000000000000000..e63fb563834f8511ff8b026fd97900b5cabed063 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/0fb181a5b57dfa3e33bae5354fe1fdf5fd0148050df7315097aac6c71965aae6.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a7d29cd9b89117028748e6ef4a9115c4c5682a525bb419f0ccde265f05f43e1d +size 4480 diff --git a/data/2025/2504_10xxx/2504.10514/images/0fd38bc8ef51f4bd35dc96cffacc79862640be794b363cf5fca27b37b8d42e63.jpg b/data/2025/2504_10xxx/2504.10514/images/0fd38bc8ef51f4bd35dc96cffacc79862640be794b363cf5fca27b37b8d42e63.jpg new file mode 100644 index 0000000000000000000000000000000000000000..befa6a7a913a8d3fcda1614c847a17657a5cb05f --- /dev/null +++ 
b/data/2025/2504_10xxx/2504.10514/images/0fd38bc8ef51f4bd35dc96cffacc79862640be794b363cf5fca27b37b8d42e63.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:10078f6a4bbf2d497082f4603a6e1dbab4f6fbe3bea6576bf1323d54c95b8955 +size 4540 diff --git a/data/2025/2504_10xxx/2504.10514/images/10ac1e7d129b832af82db614f4a21768f8dc6b3aaf75c45d9f27061e7678b206.jpg b/data/2025/2504_10xxx/2504.10514/images/10ac1e7d129b832af82db614f4a21768f8dc6b3aaf75c45d9f27061e7678b206.jpg new file mode 100644 index 0000000000000000000000000000000000000000..9575c122b3e1bc1e132add654f9224ef6a36cd86 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/10ac1e7d129b832af82db614f4a21768f8dc6b3aaf75c45d9f27061e7678b206.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:55fcf2489f78e27ae54374a61fb5d0f5a205562053b0943a2aadb2c961dd2204 +size 3001 diff --git a/data/2025/2504_10xxx/2504.10514/images/1200ee9138a6cbc43d65fd5bb6037105815745cefc5b5761a33c48a3971d4a92.jpg b/data/2025/2504_10xxx/2504.10514/images/1200ee9138a6cbc43d65fd5bb6037105815745cefc5b5761a33c48a3971d4a92.jpg new file mode 100644 index 0000000000000000000000000000000000000000..0947a93167552ad63474bcb9b8b0ef0ce3adcb20 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/1200ee9138a6cbc43d65fd5bb6037105815745cefc5b5761a33c48a3971d4a92.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bef59eb6f7030b8f2fc3663ca9b79617799cf22614c5b9eef88ea8b34984d2d2 +size 218050 diff --git a/data/2025/2504_10xxx/2504.10514/images/13d7883fc7e827bcac012b1fb2ab964aaf7a3265f1198697e64b61ea9e81398d.jpg b/data/2025/2504_10xxx/2504.10514/images/13d7883fc7e827bcac012b1fb2ab964aaf7a3265f1198697e64b61ea9e81398d.jpg new file mode 100644 index 0000000000000000000000000000000000000000..06a844de48ad0fd0e9b862180cf4a44d30a149a3 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/13d7883fc7e827bcac012b1fb2ab964aaf7a3265f1198697e64b61ea9e81398d.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:faa8598b572067a48965fc202a44a902e58ae99473220c604343880f150f828d +size 4308 diff --git a/data/2025/2504_10xxx/2504.10514/images/15026324cb3fa0e19610cc3840fb27b82c33d19f3d328ca0788bac9a4b9fb335.jpg b/data/2025/2504_10xxx/2504.10514/images/15026324cb3fa0e19610cc3840fb27b82c33d19f3d328ca0788bac9a4b9fb335.jpg new file mode 100644 index 0000000000000000000000000000000000000000..234481eb6756ea0e846812c7e0565b62b1d3137d --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/15026324cb3fa0e19610cc3840fb27b82c33d19f3d328ca0788bac9a4b9fb335.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af75846dd8b7fc1f0f26e4bfec5fcd6cd1d9178b373a7415b9d38bd5f7c92f46 +size 4869 diff --git a/data/2025/2504_10xxx/2504.10514/images/15517e3c9e23e1341c37406ca32c66703ceeb7ccc18b2d8cec1dde8a6540f1d9.jpg b/data/2025/2504_10xxx/2504.10514/images/15517e3c9e23e1341c37406ca32c66703ceeb7ccc18b2d8cec1dde8a6540f1d9.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4ef06ea51d7425b347bad92b65c082c0ba56ed77 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/15517e3c9e23e1341c37406ca32c66703ceeb7ccc18b2d8cec1dde8a6540f1d9.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e69c237ee9ba9414fb7b22c892c03faf6ee61a4e03d6cf30660fd17194722fcf +size 22243 diff --git a/data/2025/2504_10xxx/2504.10514/images/16670a54267741e9ab1d271281b1679ab4efd87b62f63112618b4dd4ea1d0cb4.jpg b/data/2025/2504_10xxx/2504.10514/images/16670a54267741e9ab1d271281b1679ab4efd87b62f63112618b4dd4ea1d0cb4.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a7d7d347acd2bf2d7a95f6112e0022e97e77e239 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/16670a54267741e9ab1d271281b1679ab4efd87b62f63112618b4dd4ea1d0cb4.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e42dd80a54345469129dc7be4f8f0be6d40d49a532cca6c832d7bacaabe41f1b +size 24923 diff --git 
a/data/2025/2504_10xxx/2504.10514/images/187728bef0463527b053b025dc76e89d6d940087929b400dc905b95ef1255834.jpg b/data/2025/2504_10xxx/2504.10514/images/187728bef0463527b053b025dc76e89d6d940087929b400dc905b95ef1255834.jpg new file mode 100644 index 0000000000000000000000000000000000000000..3a3bde21bcad58876ba8c08e13c28a38549c0114 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/187728bef0463527b053b025dc76e89d6d940087929b400dc905b95ef1255834.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d160f6905675f27283cc6d5bc6fd80097d4d281921ce645ac526c24491c7d162 +size 4756 diff --git a/data/2025/2504_10xxx/2504.10514/images/18c760c4ae1520c81e0481fb54b7507248b59275ff01d03eaf3d1cd7c636663f.jpg b/data/2025/2504_10xxx/2504.10514/images/18c760c4ae1520c81e0481fb54b7507248b59275ff01d03eaf3d1cd7c636663f.jpg new file mode 100644 index 0000000000000000000000000000000000000000..17bfe2cdb6b70b48c3c4064f999a92ce67b6d0c6 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/18c760c4ae1520c81e0481fb54b7507248b59275ff01d03eaf3d1cd7c636663f.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fa11452f75b8e403c63207c66ecdc7e6ee9496e5f154e87adf9d67121012fff8 +size 4500 diff --git a/data/2025/2504_10xxx/2504.10514/images/1951cf69fe3a3f287632b972067456bce819b93ec6831e1889e94c9101a2fe8f.jpg b/data/2025/2504_10xxx/2504.10514/images/1951cf69fe3a3f287632b972067456bce819b93ec6831e1889e94c9101a2fe8f.jpg new file mode 100644 index 0000000000000000000000000000000000000000..1cacbfc347992898947aa4914c901d0f71dbb491 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/1951cf69fe3a3f287632b972067456bce819b93ec6831e1889e94c9101a2fe8f.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5db6ccfd42c9bbe7461d256caf6cb6aded8b952f266ba6f21c51e51b7baf5246 +size 4472 diff --git a/data/2025/2504_10xxx/2504.10514/images/198e05f55f9336c87de7bb4cbdd438d7f2edcbcb1590f30c3cd73974e0cdc09a.jpg 
b/data/2025/2504_10xxx/2504.10514/images/198e05f55f9336c87de7bb4cbdd438d7f2edcbcb1590f30c3cd73974e0cdc09a.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a5a8490c189cb104ea80680c24b9d31658ad0766 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/198e05f55f9336c87de7bb4cbdd438d7f2edcbcb1590f30c3cd73974e0cdc09a.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d45a44012b738f65ba70e467dfdb5c66f92a926b0e930ccab27c2103109af12 +size 3234 diff --git a/data/2025/2504_10xxx/2504.10514/images/1b37b28329678a654e39a0697054f7a40e8872fd6c0581a7e3548f4779bda5a8.jpg b/data/2025/2504_10xxx/2504.10514/images/1b37b28329678a654e39a0697054f7a40e8872fd6c0581a7e3548f4779bda5a8.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f4746434558c2fa8e3709791dea0471adfecc632 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/1b37b28329678a654e39a0697054f7a40e8872fd6c0581a7e3548f4779bda5a8.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b34f10f5226df498e027f9e99b2db50ba79c02de5afc93772e17ade12452b892 +size 2087 diff --git a/data/2025/2504_10xxx/2504.10514/images/1fbc43e9ddcc3682c48ad4d4bda6b0089d535e6580050c40ed07dfb19a03244f.jpg b/data/2025/2504_10xxx/2504.10514/images/1fbc43e9ddcc3682c48ad4d4bda6b0089d535e6580050c40ed07dfb19a03244f.jpg new file mode 100644 index 0000000000000000000000000000000000000000..c91db2c48d3a9e3520663523dc4ff0664b6dee38 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/1fbc43e9ddcc3682c48ad4d4bda6b0089d535e6580050c40ed07dfb19a03244f.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5625903d43e33492713adbad46fc0cf6ceec53bee0d6147670f422dad2847fa6 +size 941 diff --git a/data/2025/2504_10xxx/2504.10514/images/2022338c089cc9168d1bd7a010104472b5f57dfa0b5f37a9ac9f001bc1edc912.jpg b/data/2025/2504_10xxx/2504.10514/images/2022338c089cc9168d1bd7a010104472b5f57dfa0b5f37a9ac9f001bc1edc912.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..aa1fdbfc7301c6db9b3fa441bb7db1dcfa9872eb --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/2022338c089cc9168d1bd7a010104472b5f57dfa0b5f37a9ac9f001bc1edc912.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:353a050dc0846b43260c414e49247a2eefa93788cc4ca281ba036e4a170d9869 +size 38468 diff --git a/data/2025/2504_10xxx/2504.10514/images/25f940ec0eb0925581bae443b2c3aae4a1fb1ea2333c422de4b697e54d207c5b.jpg b/data/2025/2504_10xxx/2504.10514/images/25f940ec0eb0925581bae443b2c3aae4a1fb1ea2333c422de4b697e54d207c5b.jpg new file mode 100644 index 0000000000000000000000000000000000000000..724c3628cc61e8263abc392169e3ee24f33ad419 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/25f940ec0eb0925581bae443b2c3aae4a1fb1ea2333c422de4b697e54d207c5b.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b1776764ce910b3e79dc39d9219eef26dfc9d9ba215482c8dcd45959d2f7f9f +size 22496 diff --git a/data/2025/2504_10xxx/2504.10514/images/2d13679fef5fdb3ddb30ad79d2df8fc4de3919117e6c08e7f0e7a582bebed2b9.jpg b/data/2025/2504_10xxx/2504.10514/images/2d13679fef5fdb3ddb30ad79d2df8fc4de3919117e6c08e7f0e7a582bebed2b9.jpg new file mode 100644 index 0000000000000000000000000000000000000000..19aa820b984c379124df74e4b81090f4be98f024 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/2d13679fef5fdb3ddb30ad79d2df8fc4de3919117e6c08e7f0e7a582bebed2b9.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7efe4a7549eda68e0528d03cc260baf9161cfdbd68b248abf82e77ba699015bf +size 6734 diff --git a/data/2025/2504_10xxx/2504.10514/images/2db69e23d144bf7a5e7712fc4b21a7ae5f301356cf2cdbcebb6681262bee666d.jpg b/data/2025/2504_10xxx/2504.10514/images/2db69e23d144bf7a5e7712fc4b21a7ae5f301356cf2cdbcebb6681262bee666d.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b5dad32f4227800a0b9acb6267fc46c75eb6774d --- /dev/null +++ 
b/data/2025/2504_10xxx/2504.10514/images/2db69e23d144bf7a5e7712fc4b21a7ae5f301356cf2cdbcebb6681262bee666d.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e2aa64e0992df3a71b23c7fa6da055f85080d295fd22cccd1c76dfe69d5bfa54 +size 20524 diff --git a/data/2025/2504_10xxx/2504.10514/images/2dd1bfc5751632f7ce11efe4e26cf20e287a4f3b05c3a4b28555ebfedf64c283.jpg b/data/2025/2504_10xxx/2504.10514/images/2dd1bfc5751632f7ce11efe4e26cf20e287a4f3b05c3a4b28555ebfedf64c283.jpg new file mode 100644 index 0000000000000000000000000000000000000000..6c2bb2a3dd159232b18edbd0aaed4d00c73b15a4 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/2dd1bfc5751632f7ce11efe4e26cf20e287a4f3b05c3a4b28555ebfedf64c283.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9052629f04e56dd4d5e704f3e4d1d3309789a376e01ba6ab4d5388eb236e03f9 +size 44539 diff --git a/data/2025/2504_10xxx/2504.10514/images/2e375caac04c7901ff50997c42a0cd1dd1778986aa8a9a21e1b4d410923a35d9.jpg b/data/2025/2504_10xxx/2504.10514/images/2e375caac04c7901ff50997c42a0cd1dd1778986aa8a9a21e1b4d410923a35d9.jpg new file mode 100644 index 0000000000000000000000000000000000000000..0f55344128e70aac0be63ffc164905ac4c73abe1 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/2e375caac04c7901ff50997c42a0cd1dd1778986aa8a9a21e1b4d410923a35d9.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:67261da5bfaeb30f8bf197bdc67748eba14472b62674a9d6648446d337a7a71e +size 1261 diff --git a/data/2025/2504_10xxx/2504.10514/images/2fb6cc2e270b95a95b8c9a9c926d3138f9663a31c52a053bf3bcde3d8f8a1c81.jpg b/data/2025/2504_10xxx/2504.10514/images/2fb6cc2e270b95a95b8c9a9c926d3138f9663a31c52a053bf3bcde3d8f8a1c81.jpg new file mode 100644 index 0000000000000000000000000000000000000000..80a62cc3960e680e58d1073a64a207812f9eaf1a --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/2fb6cc2e270b95a95b8c9a9c926d3138f9663a31c52a053bf3bcde3d8f8a1c81.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:05094cba5bc4ebb05d922bf002157fae53315bf899ee60a4d4f32ba6de80ca2d +size 35387 diff --git a/data/2025/2504_10xxx/2504.10514/images/32f6062225a61b9023255908621e965eb6ba41bfa8bab62987f76152e77b5086.jpg b/data/2025/2504_10xxx/2504.10514/images/32f6062225a61b9023255908621e965eb6ba41bfa8bab62987f76152e77b5086.jpg new file mode 100644 index 0000000000000000000000000000000000000000..01e38fb8457e00be237020c10f1e6fe576d4df23 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/32f6062225a61b9023255908621e965eb6ba41bfa8bab62987f76152e77b5086.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e62e9736092c39265c61a23bfc88378c13b0a5ca59ae93b50e69cab7cc9a1d44 +size 10943 diff --git a/data/2025/2504_10xxx/2504.10514/images/3570068575ee9af5b65b70a0654db870b9a2617c50a7f2c9a7a727687dd8e1e9.jpg b/data/2025/2504_10xxx/2504.10514/images/3570068575ee9af5b65b70a0654db870b9a2617c50a7f2c9a7a727687dd8e1e9.jpg new file mode 100644 index 0000000000000000000000000000000000000000..c3928384ec80300e21c5806a632f311a0c1147f8 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/3570068575ee9af5b65b70a0654db870b9a2617c50a7f2c9a7a727687dd8e1e9.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0055755e20b0d489c9bffe7ea1829c0df6740e542072976e216504b5f5592405 +size 3083 diff --git a/data/2025/2504_10xxx/2504.10514/images/3572e92515871d9d01bdcccb23a43ae61d4e1f37446f28eca90df9ff3e009fd0.jpg b/data/2025/2504_10xxx/2504.10514/images/3572e92515871d9d01bdcccb23a43ae61d4e1f37446f28eca90df9ff3e009fd0.jpg new file mode 100644 index 0000000000000000000000000000000000000000..80677e59a10e8869032bb94ce8789de0e23f446f --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/3572e92515871d9d01bdcccb23a43ae61d4e1f37446f28eca90df9ff3e009fd0.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6cd058f4c6da24712053aad40c8eea197d28d920be341b86cb9ff6557692ac9b +size 60844 diff --git 
a/data/2025/2504_10xxx/2504.10514/images/3895f7a993c176931085bf834b9296b28c562d90587c8c53b8684f4dd554cc97.jpg b/data/2025/2504_10xxx/2504.10514/images/3895f7a993c176931085bf834b9296b28c562d90587c8c53b8684f4dd554cc97.jpg new file mode 100644 index 0000000000000000000000000000000000000000..c28135bd7da7b3212a7287dcf2bc2972158b617f --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/3895f7a993c176931085bf834b9296b28c562d90587c8c53b8684f4dd554cc97.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f632797d47cb7a881dc764adcda2b6334571aab92f2277a89676bc1d96f4b4b +size 6613 diff --git a/data/2025/2504_10xxx/2504.10514/images/3a32fe1f2322a6cf92e5ae779859c1d965df1d55c99ec500d0a8625524eb62ea.jpg b/data/2025/2504_10xxx/2504.10514/images/3a32fe1f2322a6cf92e5ae779859c1d965df1d55c99ec500d0a8625524eb62ea.jpg new file mode 100644 index 0000000000000000000000000000000000000000..3c7167381e1ff2393c3771a708a0267a13616daa --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/3a32fe1f2322a6cf92e5ae779859c1d965df1d55c99ec500d0a8625524eb62ea.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ffb48a23eeb373e62e5820a2d920fe1d92db95d2d85a7722982438078a2c6cf6 +size 19486 diff --git a/data/2025/2504_10xxx/2504.10514/images/3a3c3dd6e00e5e5f63dcc443900b3048b1881233c93d46a9c26c0b87f2f99798.jpg b/data/2025/2504_10xxx/2504.10514/images/3a3c3dd6e00e5e5f63dcc443900b3048b1881233c93d46a9c26c0b87f2f99798.jpg new file mode 100644 index 0000000000000000000000000000000000000000..cd706abbe6c9acc88145e5f6a47075c0833ed1c2 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/3a3c3dd6e00e5e5f63dcc443900b3048b1881233c93d46a9c26c0b87f2f99798.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f9c7b6a7dec270127eb80b12b1f341bfc674f2ad6854cb407cbd9302f40d4396 +size 9703 diff --git a/data/2025/2504_10xxx/2504.10514/images/3af500d9cb45fba5c4a73861998a283c8a9cc70fb4cf8e372f7ca263f0feb27e.jpg 
b/data/2025/2504_10xxx/2504.10514/images/3af500d9cb45fba5c4a73861998a283c8a9cc70fb4cf8e372f7ca263f0feb27e.jpg new file mode 100644 index 0000000000000000000000000000000000000000..5b00041dd50e86f8d92f15ef147d63f8c876e665 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/3af500d9cb45fba5c4a73861998a283c8a9cc70fb4cf8e372f7ca263f0feb27e.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47b7687a96e5e343a88022856bb02c4c8044ad9c794a361f92be7127c1af385d +size 3479 diff --git a/data/2025/2504_10xxx/2504.10514/images/3b86b13e603fe8346a2f5ec87915d1f2fa8e206d55063072c6e398602d91aa03.jpg b/data/2025/2504_10xxx/2504.10514/images/3b86b13e603fe8346a2f5ec87915d1f2fa8e206d55063072c6e398602d91aa03.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f9e467705c94c962d151b09b6b510743a474654a --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/3b86b13e603fe8346a2f5ec87915d1f2fa8e206d55063072c6e398602d91aa03.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c53221e4a051523859780cd2c62d08facfb2ca7243ba08641784b0855435b644 +size 1991 diff --git a/data/2025/2504_10xxx/2504.10514/images/3f61af26b330eccf982ac1be09cc7376167c7cad9084f8470d0dbdb3fba59c06.jpg b/data/2025/2504_10xxx/2504.10514/images/3f61af26b330eccf982ac1be09cc7376167c7cad9084f8470d0dbdb3fba59c06.jpg new file mode 100644 index 0000000000000000000000000000000000000000..fc2d051deca051e5635b1c4a424e2a96521568cc --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/3f61af26b330eccf982ac1be09cc7376167c7cad9084f8470d0dbdb3fba59c06.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c066dfb7fe7a9a82c33bd74e8e696a5e15dd30c3f0a6778b99f5631216ca4d5 +size 5889 diff --git a/data/2025/2504_10xxx/2504.10514/images/3f83ce7e7e71f790f9e093962ea0933eb8a6757a7402ab480ba182d30d352441.jpg b/data/2025/2504_10xxx/2504.10514/images/3f83ce7e7e71f790f9e093962ea0933eb8a6757a7402ab480ba182d30d352441.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..ddd3ccc257abb1c07096c5c3ea1ddfa64fd67244 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/3f83ce7e7e71f790f9e093962ea0933eb8a6757a7402ab480ba182d30d352441.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1ae104d6507bee0f3802a3dd8f4e7eb258578f4d8ce336fae33fbbe58232cff +size 21736 diff --git a/data/2025/2504_10xxx/2504.10514/images/404382bed045c853b6acbb325ddab0c9b4b919d9a1394ebeb299c44ae8243b68.jpg b/data/2025/2504_10xxx/2504.10514/images/404382bed045c853b6acbb325ddab0c9b4b919d9a1394ebeb299c44ae8243b68.jpg new file mode 100644 index 0000000000000000000000000000000000000000..02bdbb3aa0e5f7a12c95f072b2e1eaa3b9b6656b --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/404382bed045c853b6acbb325ddab0c9b4b919d9a1394ebeb299c44ae8243b68.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:683eee5ad796ac428e1c0a21bf23dadd63125d7a14e7f11a6ec29e3656cad55d +size 6957 diff --git a/data/2025/2504_10xxx/2504.10514/images/4116f0b5b49af5a3cac51843675a4317a13142a281145e9039747c9e002e759a.jpg b/data/2025/2504_10xxx/2504.10514/images/4116f0b5b49af5a3cac51843675a4317a13142a281145e9039747c9e002e759a.jpg new file mode 100644 index 0000000000000000000000000000000000000000..009f5afad8423325481fc8953327147f2612a331 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/4116f0b5b49af5a3cac51843675a4317a13142a281145e9039747c9e002e759a.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b275c340aaa10daa800edf05d6b3bda264bf3376bfee22b2227c8e8e4c93aa3c +size 3359 diff --git a/data/2025/2504_10xxx/2504.10514/images/413e8e196f43aef374359190442749dbc2b48bf22c997bb2562083749e9cda77.jpg b/data/2025/2504_10xxx/2504.10514/images/413e8e196f43aef374359190442749dbc2b48bf22c997bb2562083749e9cda77.jpg new file mode 100644 index 0000000000000000000000000000000000000000..851f43782be0d5770519838bc72ec65665ee5390 --- /dev/null +++ 
b/data/2025/2504_10xxx/2504.10514/images/413e8e196f43aef374359190442749dbc2b48bf22c997bb2562083749e9cda77.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:893bea2edda3a2808da49e765b0420f61f3e9e75927ceaefd05ab1c710e5809c +size 3243 diff --git a/data/2025/2504_10xxx/2504.10514/images/43e38632a2ee3658648a88819e5fe95c13a28ae4333204b823dde3d1cd09cf97.jpg b/data/2025/2504_10xxx/2504.10514/images/43e38632a2ee3658648a88819e5fe95c13a28ae4333204b823dde3d1cd09cf97.jpg new file mode 100644 index 0000000000000000000000000000000000000000..37c69b2c73926c8039fef297ed7296dca54236bf --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/43e38632a2ee3658648a88819e5fe95c13a28ae4333204b823dde3d1cd09cf97.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:78924933fcd4be77c477ab644a001be64e5a3d5c29aaa5e11fd84638f5147c41 +size 3917 diff --git a/data/2025/2504_10xxx/2504.10514/images/44823311c71f2dc3fb81ca2b03664810f631f3ea04ce2b1b322542a480d8034a.jpg b/data/2025/2504_10xxx/2504.10514/images/44823311c71f2dc3fb81ca2b03664810f631f3ea04ce2b1b322542a480d8034a.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a84b253d5c690a82e80f220da46164a351a8a77c --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/44823311c71f2dc3fb81ca2b03664810f631f3ea04ce2b1b322542a480d8034a.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bfa3126734108132892a7b638fcff7e81c42f46ad2861cbfdd301b20a4002b5a +size 4180 diff --git a/data/2025/2504_10xxx/2504.10514/images/4490c2cd9c9e459ac009d48805da6dfe09196934a2f40d905b23b6a4a8734720.jpg b/data/2025/2504_10xxx/2504.10514/images/4490c2cd9c9e459ac009d48805da6dfe09196934a2f40d905b23b6a4a8734720.jpg new file mode 100644 index 0000000000000000000000000000000000000000..fe600497fc7a2d01b2d5f654c3cd508a2b311337 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/4490c2cd9c9e459ac009d48805da6dfe09196934a2f40d905b23b6a4a8734720.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:4e6a95c560e9bb38dfd5188ff33bfa6f61b8146cfa67372732090834d820acba +size 6078 diff --git a/data/2025/2504_10xxx/2504.10514/images/45a2901fcb11ba711d9bd570c3bbde21465db2de5ac780ff5d52b54ec7a41ff9.jpg b/data/2025/2504_10xxx/2504.10514/images/45a2901fcb11ba711d9bd570c3bbde21465db2de5ac780ff5d52b54ec7a41ff9.jpg new file mode 100644 index 0000000000000000000000000000000000000000..32062e29a81bba5b97b9b24eaf4dc1e883f0aacc --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/45a2901fcb11ba711d9bd570c3bbde21465db2de5ac780ff5d52b54ec7a41ff9.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e618bb198bf9987bbc15f554e84ab391c7fa4b9190d89437c48d751833e8527a +size 23104 diff --git a/data/2025/2504_10xxx/2504.10514/images/47755c50e216e38cba801eee7b315dcd85721a9a1c2d99185a32993ea1e1cd99.jpg b/data/2025/2504_10xxx/2504.10514/images/47755c50e216e38cba801eee7b315dcd85721a9a1c2d99185a32993ea1e1cd99.jpg new file mode 100644 index 0000000000000000000000000000000000000000..09638979aa98fcd3efadf24669498127e455b01d --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/47755c50e216e38cba801eee7b315dcd85721a9a1c2d99185a32993ea1e1cd99.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7416beeffc1bd37646a4633a5fa47abc2892d74713a4622d32b5b0cadb6f6eb0 +size 7814 diff --git a/data/2025/2504_10xxx/2504.10514/images/4a4c31090dca597ec33169be0184de6511587b25241fd11621cd91ac03784810.jpg b/data/2025/2504_10xxx/2504.10514/images/4a4c31090dca597ec33169be0184de6511587b25241fd11621cd91ac03784810.jpg new file mode 100644 index 0000000000000000000000000000000000000000..be8a7e0fc9c87d26eb531ca07e8d472bab392aed --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/4a4c31090dca597ec33169be0184de6511587b25241fd11621cd91ac03784810.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:580daef9d32e96b780a66e68c6979e0ccb02e1eef258f60031aa7f81f62300df +size 10706 diff --git 
a/data/2025/2504_10xxx/2504.10514/images/4a693bcdaf294d154fb77c045afebe8a5b9cbcac48c1bee722828b397c15364b.jpg b/data/2025/2504_10xxx/2504.10514/images/4a693bcdaf294d154fb77c045afebe8a5b9cbcac48c1bee722828b397c15364b.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4b1a672bc10171ae76ef07d797829ff447a454e9 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/4a693bcdaf294d154fb77c045afebe8a5b9cbcac48c1bee722828b397c15364b.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:209eefe8d6ecb9cdda0583aabab733806b528fa6e890c2a379ecea0c600dc236 +size 4519 diff --git a/data/2025/2504_10xxx/2504.10514/images/4ae0e07916db79850cc8634953680899bd58e1ba441b286aa0600a40cd4334a7.jpg b/data/2025/2504_10xxx/2504.10514/images/4ae0e07916db79850cc8634953680899bd58e1ba441b286aa0600a40cd4334a7.jpg new file mode 100644 index 0000000000000000000000000000000000000000..02f851113aa1c8c5cd3a321c0ecae7622eaa38fe --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/4ae0e07916db79850cc8634953680899bd58e1ba441b286aa0600a40cd4334a7.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ebaec916cfbfc5a0d8882b23024d543137b7da41097d2eda57437ae8d78c9699 +size 22632 diff --git a/data/2025/2504_10xxx/2504.10514/images/4b9cde5658c74798ad789cd2a290fff63a01f8d9d372e55839354e0f92f0d2f9.jpg b/data/2025/2504_10xxx/2504.10514/images/4b9cde5658c74798ad789cd2a290fff63a01f8d9d372e55839354e0f92f0d2f9.jpg new file mode 100644 index 0000000000000000000000000000000000000000..06f04b382e82918efd605a69f1143d4bf91d1cc6 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/4b9cde5658c74798ad789cd2a290fff63a01f8d9d372e55839354e0f92f0d2f9.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:64deb27be0b6685b21c4d06cfd93f69fc29333a2f6533470cfa7e81d3ea66b3f +size 1636 diff --git a/data/2025/2504_10xxx/2504.10514/images/4d8bbff6ab276e63816326bf550aa68316c118fc10da1b55655ddafbeb8eda52.jpg 
b/data/2025/2504_10xxx/2504.10514/images/4d8bbff6ab276e63816326bf550aa68316c118fc10da1b55655ddafbeb8eda52.jpg new file mode 100644 index 0000000000000000000000000000000000000000..41c4d06ea881459cd1d4545f219b7798e71a079a --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/4d8bbff6ab276e63816326bf550aa68316c118fc10da1b55655ddafbeb8eda52.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f395debc85203a3be9d097fbe12c456f5007620fa7035ce3622e184d14ed24c1 +size 26948 diff --git a/data/2025/2504_10xxx/2504.10514/images/4da5d0436000119e3d94b5df4193a1ff89d878181f005bd58c77c387237eb2a9.jpg b/data/2025/2504_10xxx/2504.10514/images/4da5d0436000119e3d94b5df4193a1ff89d878181f005bd58c77c387237eb2a9.jpg new file mode 100644 index 0000000000000000000000000000000000000000..dc0b0633456d3b2217b869ef2ceeee6d5f5103da --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/4da5d0436000119e3d94b5df4193a1ff89d878181f005bd58c77c387237eb2a9.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58ff9d32ad5b6026b411e7c07ad38fc31ef248ce44edde8060627346f909f918 +size 4824 diff --git a/data/2025/2504_10xxx/2504.10514/images/502803c4b25067d3812819d9156ff26c57eba1d40729001effc16d7db38567cc.jpg b/data/2025/2504_10xxx/2504.10514/images/502803c4b25067d3812819d9156ff26c57eba1d40729001effc16d7db38567cc.jpg new file mode 100644 index 0000000000000000000000000000000000000000..3e6c5536e371e74e8dea3c7599f51d84f6778563 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/502803c4b25067d3812819d9156ff26c57eba1d40729001effc16d7db38567cc.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8b7049dfa53ac5e72037e2f315ebe0d74c9e275763d2be679831b3e80bf7180 +size 2988 diff --git a/data/2025/2504_10xxx/2504.10514/images/50635e4a4b1df714a947e01dc9ddecc80979b357b7db276e0f815d4b4e049a57.jpg b/data/2025/2504_10xxx/2504.10514/images/50635e4a4b1df714a947e01dc9ddecc80979b357b7db276e0f815d4b4e049a57.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..c1e5dbf307809b22751e15d468f06f2f4b9a2722 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/50635e4a4b1df714a947e01dc9ddecc80979b357b7db276e0f815d4b4e049a57.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d45b32db9d994b8f196ec79976579357e3aa8dc35c0711667c2e07a20d3a5a52 +size 4673 diff --git a/data/2025/2504_10xxx/2504.10514/images/5087bbbb5f96b492d6b311016dcce02b6e4f12ecd9e9eba8e797faa0bdecce5e.jpg b/data/2025/2504_10xxx/2504.10514/images/5087bbbb5f96b492d6b311016dcce02b6e4f12ecd9e9eba8e797faa0bdecce5e.jpg new file mode 100644 index 0000000000000000000000000000000000000000..38bfc6797447fcad133a85fe454e8812b1ff697d --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/5087bbbb5f96b492d6b311016dcce02b6e4f12ecd9e9eba8e797faa0bdecce5e.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eec1b4ba4bf8c6fb6cf1a1c86e031604330b2261344c6e91e2029204cd0ad478 +size 44280 diff --git a/data/2025/2504_10xxx/2504.10514/images/5170edb4da81e1095363d9d239e153782c4a4ddd277014be36ab7a1d76040d6a.jpg b/data/2025/2504_10xxx/2504.10514/images/5170edb4da81e1095363d9d239e153782c4a4ddd277014be36ab7a1d76040d6a.jpg new file mode 100644 index 0000000000000000000000000000000000000000..1df5e1dcb0274c1a40bcd0fd57fb8a3c14be2456 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/5170edb4da81e1095363d9d239e153782c4a4ddd277014be36ab7a1d76040d6a.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a09a5abfacf138d106218a25557625a9eb0cda830e3cce961f7b4a9e38323932 +size 14575 diff --git a/data/2025/2504_10xxx/2504.10514/images/53ab8e5968fb097f710c7eea5c3a96eeca54b112f621172f634379c04871c70f.jpg b/data/2025/2504_10xxx/2504.10514/images/53ab8e5968fb097f710c7eea5c3a96eeca54b112f621172f634379c04871c70f.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4715e22f0055d44883209a2555ab791c2c3b412f --- /dev/null +++ 
b/data/2025/2504_10xxx/2504.10514/images/53ab8e5968fb097f710c7eea5c3a96eeca54b112f621172f634379c04871c70f.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:81288c9435611776cf5eff79bd4c4458bf817ed99ec0b5f3df313031b2da92d9 +size 21692 diff --git a/data/2025/2504_10xxx/2504.10514/images/53f841542a5892cc7195a412eac039828510960339bd49bdfb8d91a9da68ed9a.jpg b/data/2025/2504_10xxx/2504.10514/images/53f841542a5892cc7195a412eac039828510960339bd49bdfb8d91a9da68ed9a.jpg new file mode 100644 index 0000000000000000000000000000000000000000..c0dadb700925a85be8f790b5f1d04cb2799c2440 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/53f841542a5892cc7195a412eac039828510960339bd49bdfb8d91a9da68ed9a.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bfc6f61e99d4c829e1bd35771ee4b13be1ef872d4f4b276e1b4a0236ec1fc59b +size 4847 diff --git a/data/2025/2504_10xxx/2504.10514/images/55139a24a0398f1a50635bb011eea4dd2d4f541f80a6f0a5595eb6a8d1ed4fa4.jpg b/data/2025/2504_10xxx/2504.10514/images/55139a24a0398f1a50635bb011eea4dd2d4f541f80a6f0a5595eb6a8d1ed4fa4.jpg new file mode 100644 index 0000000000000000000000000000000000000000..2075750d5957780eb5b38d41a3117cd7840557cf --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/55139a24a0398f1a50635bb011eea4dd2d4f541f80a6f0a5595eb6a8d1ed4fa4.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6d9d98c902f40c302bb8c1d2d1cfdc6531fdd4e35ca92f0a38b436e8c6b6f266 +size 22333 diff --git a/data/2025/2504_10xxx/2504.10514/images/5749a40d161e1b7bb688c3d83a6e0e261337db5f3519c1e8f08faed6ef13e27e.jpg b/data/2025/2504_10xxx/2504.10514/images/5749a40d161e1b7bb688c3d83a6e0e261337db5f3519c1e8f08faed6ef13e27e.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f1cf6ce24d50cc634ffaf1b07c3ab0cc36a1e389 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/5749a40d161e1b7bb688c3d83a6e0e261337db5f3519c1e8f08faed6ef13e27e.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:5544661e7eaf7b0c3a8f3a81505d23599225ad3a5dce1d907e760f3af6b682d9 +size 4592 diff --git a/data/2025/2504_10xxx/2504.10514/images/585028e2d842e3528dba16b1de61dc399959caf042a242ea0841d7cb057a7e37.jpg b/data/2025/2504_10xxx/2504.10514/images/585028e2d842e3528dba16b1de61dc399959caf042a242ea0841d7cb057a7e37.jpg new file mode 100644 index 0000000000000000000000000000000000000000..630a7729433e50ed5a110bc7853fa66e04defe1b --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/585028e2d842e3528dba16b1de61dc399959caf042a242ea0841d7cb057a7e37.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0945d78c72e2523d53eeb87c9213209518aba73398ec6f1c79523847cc1c9d80 +size 4288 diff --git a/data/2025/2504_10xxx/2504.10514/images/598ea378274d0f35eee2414513c0a6c3c6ea1f6afb599e519166d9d44be6d90a.jpg b/data/2025/2504_10xxx/2504.10514/images/598ea378274d0f35eee2414513c0a6c3c6ea1f6afb599e519166d9d44be6d90a.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b0c6d46ceb1fd8a66c6471216e4332c6c4a4dede --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/598ea378274d0f35eee2414513c0a6c3c6ea1f6afb599e519166d9d44be6d90a.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad231ae272501adf79298a085ca21b45d0c213bab28fde04f70eba1ef1fd2e5a +size 5709 diff --git a/data/2025/2504_10xxx/2504.10514/images/59f5fe2516e44a500ab03863569ab00cc0d6016540860e0d0d57a00d8b095063.jpg b/data/2025/2504_10xxx/2504.10514/images/59f5fe2516e44a500ab03863569ab00cc0d6016540860e0d0d57a00d8b095063.jpg new file mode 100644 index 0000000000000000000000000000000000000000..957d823fe1c7420b8e5d3fb3962d09105f07e758 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/59f5fe2516e44a500ab03863569ab00cc0d6016540860e0d0d57a00d8b095063.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:54b801af8316b0af5b66a7d3f007d9193ce310430295ec916f6c3f396475e622 +size 3734 diff --git 
a/data/2025/2504_10xxx/2504.10514/images/5a27a28f62a27dac85d601405edf5d26e1c56ddca2af79292e5640b1e4dbb399.jpg b/data/2025/2504_10xxx/2504.10514/images/5a27a28f62a27dac85d601405edf5d26e1c56ddca2af79292e5640b1e4dbb399.jpg new file mode 100644 index 0000000000000000000000000000000000000000..c289bbdedfb6388fc52e28b0c06e10604464d880 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/5a27a28f62a27dac85d601405edf5d26e1c56ddca2af79292e5640b1e4dbb399.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e54bd09096e799516766cad2b7c776eb2b407e76f934377688b7a3b52cfc8f0 +size 23972 diff --git a/data/2025/2504_10xxx/2504.10514/images/5a5024a6c0db75938d1896d978255bbae4667cfb4e6b4ed5c29aec27e99ba6f2.jpg b/data/2025/2504_10xxx/2504.10514/images/5a5024a6c0db75938d1896d978255bbae4667cfb4e6b4ed5c29aec27e99ba6f2.jpg new file mode 100644 index 0000000000000000000000000000000000000000..e567218a92d60807a279a18f0379198cf38fabd0 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/5a5024a6c0db75938d1896d978255bbae4667cfb4e6b4ed5c29aec27e99ba6f2.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:77066c5b7de4093279807a3d8bf9889438be8a0138aedff49b13e352521d1de4 +size 26231 diff --git a/data/2025/2504_10xxx/2504.10514/images/5ac95b3d3706e6a80af07ac90289c6a7a098d2396288ef7980e9ae5f62e68f3f.jpg b/data/2025/2504_10xxx/2504.10514/images/5ac95b3d3706e6a80af07ac90289c6a7a098d2396288ef7980e9ae5f62e68f3f.jpg new file mode 100644 index 0000000000000000000000000000000000000000..9e2e7c79b2b8b32de61f77733f1fac8b782c6e08 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/5ac95b3d3706e6a80af07ac90289c6a7a098d2396288ef7980e9ae5f62e68f3f.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d19b4e3017ff855f2b77a8e559effd0e847466dee08893a361e49e6c5868b89 +size 7934 diff --git a/data/2025/2504_10xxx/2504.10514/images/5ae941e58227d111affb45babe2997419cc487c90c60451ef3c8a66ea499df26.jpg 
b/data/2025/2504_10xxx/2504.10514/images/5ae941e58227d111affb45babe2997419cc487c90c60451ef3c8a66ea499df26.jpg new file mode 100644 index 0000000000000000000000000000000000000000..95522ddb900039bcb22a5893409d77d3342b5eac --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/5ae941e58227d111affb45babe2997419cc487c90c60451ef3c8a66ea499df26.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:69e4f3a42aee47b552f790dfd4c6f36fb505b3fd6ca472882be65904f704684f +size 7606 diff --git a/data/2025/2504_10xxx/2504.10514/images/5b623d590d48725f8566e2b72e2d7732cdb7ff016844bd62d1289bd7e0fc9c50.jpg b/data/2025/2504_10xxx/2504.10514/images/5b623d590d48725f8566e2b72e2d7732cdb7ff016844bd62d1289bd7e0fc9c50.jpg new file mode 100644 index 0000000000000000000000000000000000000000..9e1ca849ee8af317384adf79a693ce59374d1569 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/5b623d590d48725f8566e2b72e2d7732cdb7ff016844bd62d1289bd7e0fc9c50.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5469bafcc4610c7170d7fff6747a7c631f7277a605a496f81e887dedc0a2f1ef +size 3075 diff --git a/data/2025/2504_10xxx/2504.10514/images/5fca07748723b74e8fb477d67b954acd0b0fc966f664d59ae978ea7576a7a2ce.jpg b/data/2025/2504_10xxx/2504.10514/images/5fca07748723b74e8fb477d67b954acd0b0fc966f664d59ae978ea7576a7a2ce.jpg new file mode 100644 index 0000000000000000000000000000000000000000..95fa1a10a6a774ba8805a677a909e0c7c9aa3ba6 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/5fca07748723b74e8fb477d67b954acd0b0fc966f664d59ae978ea7576a7a2ce.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6a0c5b46fe99a38e995450bd1d4aa0fbb3245908b365caae98627837948ad87b +size 6927 diff --git a/data/2025/2504_10xxx/2504.10514/images/61153352f19b023b4d14179dcf4ee6c9e59f60ed4d7c8e3832d203ae8c0639ec.jpg b/data/2025/2504_10xxx/2504.10514/images/61153352f19b023b4d14179dcf4ee6c9e59f60ed4d7c8e3832d203ae8c0639ec.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..13fa1b56780605672b3ac48a5cf1da0a9ac1e9f2 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/61153352f19b023b4d14179dcf4ee6c9e59f60ed4d7c8e3832d203ae8c0639ec.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:954086fa72a02bb1ca717ce48ef5144224dcfbefb7d346b0c213cd9622b7e9da +size 27071 diff --git a/data/2025/2504_10xxx/2504.10514/images/62255370c80cc1ec826a893befaf91071bf2e821de60302188c5691ca72d3a70.jpg b/data/2025/2504_10xxx/2504.10514/images/62255370c80cc1ec826a893befaf91071bf2e821de60302188c5691ca72d3a70.jpg new file mode 100644 index 0000000000000000000000000000000000000000..03016d67ba1f866ad29869edb8d914552909e376 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/62255370c80cc1ec826a893befaf91071bf2e821de60302188c5691ca72d3a70.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1af054836f85c0316e5eba542847f704d3d38aae654ea9a916043bdcd6fb8ca6 +size 78672 diff --git a/data/2025/2504_10xxx/2504.10514/images/6429a0ce7abb3003695d788a3416e20bd3119f6c0ebaf408e56e6793e79d84ce.jpg b/data/2025/2504_10xxx/2504.10514/images/6429a0ce7abb3003695d788a3416e20bd3119f6c0ebaf408e56e6793e79d84ce.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a7d5ca329bc07e54cb87eac28c92490049f76a38 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/6429a0ce7abb3003695d788a3416e20bd3119f6c0ebaf408e56e6793e79d84ce.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7cb7090837e1a9872b30fcf1ca4e5521f59c252148cd985e88b510bb385ae3eb +size 22140 diff --git a/data/2025/2504_10xxx/2504.10514/images/657c764603082669fe1068bf215b9d5d46d2d1d672027da6fc37d7920ddb9129.jpg b/data/2025/2504_10xxx/2504.10514/images/657c764603082669fe1068bf215b9d5d46d2d1d672027da6fc37d7920ddb9129.jpg new file mode 100644 index 0000000000000000000000000000000000000000..6e95cb01b9b912e4ab324a823f4219f4e9b337cd --- /dev/null +++ 
b/data/2025/2504_10xxx/2504.10514/images/657c764603082669fe1068bf215b9d5d46d2d1d672027da6fc37d7920ddb9129.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4919c0f7bffe453f12188ba44fa943567434ff5bee58cf6bf642988c8478054b +size 7763 diff --git a/data/2025/2504_10xxx/2504.10514/images/6672532a9af0fc12a496098717c189fd3b85762bf6de5bc2bb73d61a49b660e6.jpg b/data/2025/2504_10xxx/2504.10514/images/6672532a9af0fc12a496098717c189fd3b85762bf6de5bc2bb73d61a49b660e6.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f875fe32c9e2ccc93940001b68ae1365a367dda0 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/6672532a9af0fc12a496098717c189fd3b85762bf6de5bc2bb73d61a49b660e6.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:129a22c94268fd78067e827e5163d3ae394f93b69ff4c3d43111a00fa742ead3 +size 4805 diff --git a/data/2025/2504_10xxx/2504.10514/images/6696a3e56dcd41106cc9520c97ca6ef997d92e3da4928d10da388f6eb66d04e7.jpg b/data/2025/2504_10xxx/2504.10514/images/6696a3e56dcd41106cc9520c97ca6ef997d92e3da4928d10da388f6eb66d04e7.jpg new file mode 100644 index 0000000000000000000000000000000000000000..634cd7f2934d0972de27421759a0fd01d57a919b --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/6696a3e56dcd41106cc9520c97ca6ef997d92e3da4928d10da388f6eb66d04e7.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e620da62a4dd07161635d3e3541ea55992ef8969d22245bf0b3f9f91a265415c +size 52288 diff --git a/data/2025/2504_10xxx/2504.10514/images/68369a8c851cd837e725607be10b511eb165a17d79753d2e8fc937aa32ff033e.jpg b/data/2025/2504_10xxx/2504.10514/images/68369a8c851cd837e725607be10b511eb165a17d79753d2e8fc937aa32ff033e.jpg new file mode 100644 index 0000000000000000000000000000000000000000..ca0f351dcf1930a384e488a92f39efeb6e2244d1 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/68369a8c851cd837e725607be10b511eb165a17d79753d2e8fc937aa32ff033e.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:c7a341b8bb86305e602b744dbcc0fb6bcfe51ddaa62313ba2dea65054c4404bc +size 14704 diff --git a/data/2025/2504_10xxx/2504.10514/images/6ac99a22232582c2709764426a74b3929527f2c67331b182d48cc11147f98a7d.jpg b/data/2025/2504_10xxx/2504.10514/images/6ac99a22232582c2709764426a74b3929527f2c67331b182d48cc11147f98a7d.jpg new file mode 100644 index 0000000000000000000000000000000000000000..0bc9a23b1fb9ad651ba23b152fb8ceb6095c3b2c --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/6ac99a22232582c2709764426a74b3929527f2c67331b182d48cc11147f98a7d.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:141ca7b9efef2caa7c3fb2fcc76d71dd3bf2859a7ace221b01ebb842303fa0a4 +size 1486 diff --git a/data/2025/2504_10xxx/2504.10514/images/6c76abd6201d022bf4566da9d604a45a44987b51b8d18dfc5966144dbfbc2686.jpg b/data/2025/2504_10xxx/2504.10514/images/6c76abd6201d022bf4566da9d604a45a44987b51b8d18dfc5966144dbfbc2686.jpg new file mode 100644 index 0000000000000000000000000000000000000000..ee1006dd566fde5c271c5c7b3ceeba6ecef204ac --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/6c76abd6201d022bf4566da9d604a45a44987b51b8d18dfc5966144dbfbc2686.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:416994d75cec1b73f0bb565dff664e763f2863d730e60993255b7de548578f78 +size 4334 diff --git a/data/2025/2504_10xxx/2504.10514/images/6e1559b0b3325fa9af146463fc449dd92cbeb6acefb6710495b155494cc70fec.jpg b/data/2025/2504_10xxx/2504.10514/images/6e1559b0b3325fa9af146463fc449dd92cbeb6acefb6710495b155494cc70fec.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a267b525a86c290d095604c2464f7276af98b1a3 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/6e1559b0b3325fa9af146463fc449dd92cbeb6acefb6710495b155494cc70fec.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:53509bda120655de1f484bd771a5acace42200e31750ce892ca39b7128f06b02 +size 4376 diff --git 
a/data/2025/2504_10xxx/2504.10514/images/6e51fde140ca697a915ea528fdd754f3797bb4a3669ea9d905dd543aa9136b99.jpg b/data/2025/2504_10xxx/2504.10514/images/6e51fde140ca697a915ea528fdd754f3797bb4a3669ea9d905dd543aa9136b99.jpg new file mode 100644 index 0000000000000000000000000000000000000000..0e450921918696810c6a6bf565db626787c5053d --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/6e51fde140ca697a915ea528fdd754f3797bb4a3669ea9d905dd543aa9136b99.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:916bc122039cdf6624693c76383ffd05919ee8af1b270a8e3d2ad8cfaa3d9507 +size 3299 diff --git a/data/2025/2504_10xxx/2504.10514/images/7118bdafa8a32f23b2a2cdd87b2e0125f791fe1d4009abdb46d541f63544ac6b.jpg b/data/2025/2504_10xxx/2504.10514/images/7118bdafa8a32f23b2a2cdd87b2e0125f791fe1d4009abdb46d541f63544ac6b.jpg new file mode 100644 index 0000000000000000000000000000000000000000..1a0e296e0332a45d4b655c7881a5d0089d4ae13f --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/7118bdafa8a32f23b2a2cdd87b2e0125f791fe1d4009abdb46d541f63544ac6b.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c797fd111866e74de8413421f98c3bbd17cfd7e66701790b6f1e0ab67482b3a8 +size 22267 diff --git a/data/2025/2504_10xxx/2504.10514/images/72cc33c5424e4708aed7e08b3feb5e2efc2bd986d12dd679390a04c8a34eee34.jpg b/data/2025/2504_10xxx/2504.10514/images/72cc33c5424e4708aed7e08b3feb5e2efc2bd986d12dd679390a04c8a34eee34.jpg new file mode 100644 index 0000000000000000000000000000000000000000..3d4b9ec49b68f86f24b8dc3fe6c92de6c243d983 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/72cc33c5424e4708aed7e08b3feb5e2efc2bd986d12dd679390a04c8a34eee34.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:33335605a02bf36a8ccdcf93fe978c83e623f99cae51b40f87aed3e8a928775f +size 7621 diff --git a/data/2025/2504_10xxx/2504.10514/images/75580cbd46f4eb6223dad32405191521ace1a32d6bd2a48373612828dc35e03d.jpg 
b/data/2025/2504_10xxx/2504.10514/images/75580cbd46f4eb6223dad32405191521ace1a32d6bd2a48373612828dc35e03d.jpg new file mode 100644 index 0000000000000000000000000000000000000000..e520cd1b6f0b371a483ad8e5ccc451d2f27a1e6b --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/75580cbd46f4eb6223dad32405191521ace1a32d6bd2a48373612828dc35e03d.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2fe730b42ffa4558b03d14a341bc5e989bdc5a6db0da0f1ed523e1c0b512470b +size 7094 diff --git a/data/2025/2504_10xxx/2504.10514/images/77c50998e72c23283fffdda7e005402e9f20f449948f2a3e900b1576dd0a4670.jpg b/data/2025/2504_10xxx/2504.10514/images/77c50998e72c23283fffdda7e005402e9f20f449948f2a3e900b1576dd0a4670.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f9ca13cf4d01c1de34ed5842b12b00bc57bce39d --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/77c50998e72c23283fffdda7e005402e9f20f449948f2a3e900b1576dd0a4670.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:323c2c16a75feb72bb5306692c99c6ea48996c641d42adb9c598c6a82a11b0a8 +size 7519 diff --git a/data/2025/2504_10xxx/2504.10514/images/77dc27ad408af46dbcd03238321afb88286d84c2b4ed903c844c328624a0bbbb.jpg b/data/2025/2504_10xxx/2504.10514/images/77dc27ad408af46dbcd03238321afb88286d84c2b4ed903c844c328624a0bbbb.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a455c720e942e4f8cb2423740b2069cd7768b274 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/77dc27ad408af46dbcd03238321afb88286d84c2b4ed903c844c328624a0bbbb.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cada8f23624abc0be0ca6993f0416610f84c1c97bc332c9d8c069b0182a4d7bf +size 3370 diff --git a/data/2025/2504_10xxx/2504.10514/images/7a9b92c734e7a87edf87a504d18d2aa342a3d80761632aa41c1e7ff012e61126.jpg b/data/2025/2504_10xxx/2504.10514/images/7a9b92c734e7a87edf87a504d18d2aa342a3d80761632aa41c1e7ff012e61126.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..79dabe3117daf0c522f7617ba827460ba1e52003 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/7a9b92c734e7a87edf87a504d18d2aa342a3d80761632aa41c1e7ff012e61126.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e691984f7ab1754e04b8fa65fc61c90df62319c71b200ca1fe02327e8bd2211 +size 22029 diff --git a/data/2025/2504_10xxx/2504.10514/images/7e9abdefdba11426ba75da60ea1aa91fa1fb21de3146efef9bebcea1409ccc4f.jpg b/data/2025/2504_10xxx/2504.10514/images/7e9abdefdba11426ba75da60ea1aa91fa1fb21de3146efef9bebcea1409ccc4f.jpg new file mode 100644 index 0000000000000000000000000000000000000000..649210f409823d49329bd1d4320b08fb8d3bc948 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/7e9abdefdba11426ba75da60ea1aa91fa1fb21de3146efef9bebcea1409ccc4f.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b222d5ae125d82b2bc1e7cac2c78e2516e951cc4158a94645b663c00b9389bdb +size 24200 diff --git a/data/2025/2504_10xxx/2504.10514/images/8192af9e15181e04ba5197f2d80fe008b70cfb88034d5496af2db6433271d90d.jpg b/data/2025/2504_10xxx/2504.10514/images/8192af9e15181e04ba5197f2d80fe008b70cfb88034d5496af2db6433271d90d.jpg new file mode 100644 index 0000000000000000000000000000000000000000..0afcb4b79173b9f2faffa49257c55a6bf39a91b3 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/8192af9e15181e04ba5197f2d80fe008b70cfb88034d5496af2db6433271d90d.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:75bd920c13043bd76ba4a2153dd981183ddcde52d697e201f38b63dc46100038 +size 1602 diff --git a/data/2025/2504_10xxx/2504.10514/images/81eb71371623bfb12b3890fc38ad3bb7fde78ee0837dd277574737492027befd.jpg b/data/2025/2504_10xxx/2504.10514/images/81eb71371623bfb12b3890fc38ad3bb7fde78ee0837dd277574737492027befd.jpg new file mode 100644 index 0000000000000000000000000000000000000000..58dd6ec3b4b484c71ab4d957668d63c0da34f368 --- /dev/null +++ 
b/data/2025/2504_10xxx/2504.10514/images/81eb71371623bfb12b3890fc38ad3bb7fde78ee0837dd277574737492027befd.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0bf78d2b1c738c5c7914bdb2e5595559d8576193b047ba0fd73e454ab74b5cc6 +size 4805 diff --git a/data/2025/2504_10xxx/2504.10514/images/8222b662278c709963b95dccbd5a7c7773900405a26a0a11bdf9501133024074.jpg b/data/2025/2504_10xxx/2504.10514/images/8222b662278c709963b95dccbd5a7c7773900405a26a0a11bdf9501133024074.jpg new file mode 100644 index 0000000000000000000000000000000000000000..42e3f97f36da0ca16ddbe2bb683edfe25d3b618a --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/8222b662278c709963b95dccbd5a7c7773900405a26a0a11bdf9501133024074.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8772334d2dbec4d6bd35b8a68f2ed503a0a221ce59a04e0cbe89bd9a26df16c9 +size 7199 diff --git a/data/2025/2504_10xxx/2504.10514/images/8279797222a7f9ff129da461aa82b23fd1a408942d36c4408bd9d1f52ac16a78.jpg b/data/2025/2504_10xxx/2504.10514/images/8279797222a7f9ff129da461aa82b23fd1a408942d36c4408bd9d1f52ac16a78.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4d15e617b2ed5e2f0ba89a1bbd8cda41687e5705 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/8279797222a7f9ff129da461aa82b23fd1a408942d36c4408bd9d1f52ac16a78.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:41f6eaac5049600483750d30bbdfd368ff6c86b7fa03493c9b158a54f99d5757 +size 35852 diff --git a/data/2025/2504_10xxx/2504.10514/images/842554f848f7ed3aa48a1a5f8d02ec7235d43967ba88c0a851be5a3e459001ce.jpg b/data/2025/2504_10xxx/2504.10514/images/842554f848f7ed3aa48a1a5f8d02ec7235d43967ba88c0a851be5a3e459001ce.jpg new file mode 100644 index 0000000000000000000000000000000000000000..ae64b9140af405566f487e4d79fbfe3f40875d5b --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/842554f848f7ed3aa48a1a5f8d02ec7235d43967ba88c0a851be5a3e459001ce.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:b6792bc007c10f858ad88188968421902040062fe0b672c001807e417a6d34a7 +size 6801 diff --git a/data/2025/2504_10xxx/2504.10514/images/84305a9086c242e1766b052b273d35d1f49d0530e1e427bc362698befb29a401.jpg b/data/2025/2504_10xxx/2504.10514/images/84305a9086c242e1766b052b273d35d1f49d0530e1e427bc362698befb29a401.jpg new file mode 100644 index 0000000000000000000000000000000000000000..3c2d2ba2040f667ee1c774da7c15b54e0622b633 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/84305a9086c242e1766b052b273d35d1f49d0530e1e427bc362698befb29a401.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:970b18131359529d8b0561f096728e97f936b2e3d42edd9022091ebd7e1eb3ba +size 3203 diff --git a/data/2025/2504_10xxx/2504.10514/images/84700c8bb9290b42ef38b3914ddeff9007792b24517af4aa1f668cec87cd67a6.jpg b/data/2025/2504_10xxx/2504.10514/images/84700c8bb9290b42ef38b3914ddeff9007792b24517af4aa1f668cec87cd67a6.jpg new file mode 100644 index 0000000000000000000000000000000000000000..2af3077f2c13f14460a4f385ff32178f3c2b7efb --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/84700c8bb9290b42ef38b3914ddeff9007792b24517af4aa1f668cec87cd67a6.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4cb4f6ca12fe17bdfd1d757a532e6c58b98c6371c531cf4362c9d56ecc77b3cd +size 21572 diff --git a/data/2025/2504_10xxx/2504.10514/images/847c4f60e625d3da8a95598b72a86020f1499a6eb7fb0561c7faefa861ffbce6.jpg b/data/2025/2504_10xxx/2504.10514/images/847c4f60e625d3da8a95598b72a86020f1499a6eb7fb0561c7faefa861ffbce6.jpg new file mode 100644 index 0000000000000000000000000000000000000000..1e2f69b393918c910dbc0bcf603b8161195e1f48 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/847c4f60e625d3da8a95598b72a86020f1499a6eb7fb0561c7faefa861ffbce6.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7132ce1055cfcb191d82b44e3c5450a5b30e38bdb8e24321844c65af5d1607ff +size 6974 diff --git 
a/data/2025/2504_10xxx/2504.10514/images/84c2db9d5a80d263845b18c2ee3ce2e1b09547836b3991b115293dfde12d4802.jpg b/data/2025/2504_10xxx/2504.10514/images/84c2db9d5a80d263845b18c2ee3ce2e1b09547836b3991b115293dfde12d4802.jpg new file mode 100644 index 0000000000000000000000000000000000000000..270993106f9370605926f7e97222181029719aa7 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/84c2db9d5a80d263845b18c2ee3ce2e1b09547836b3991b115293dfde12d4802.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:43f1162aee0c2a42e7d4c139cd97d1bf46d96b89b536345a211d133e882c647b +size 7592 diff --git a/data/2025/2504_10xxx/2504.10514/images/877da56a11e72700c2b772cc735b366254a17d7c0d52424c8c5fae8436785f8c.jpg b/data/2025/2504_10xxx/2504.10514/images/877da56a11e72700c2b772cc735b366254a17d7c0d52424c8c5fae8436785f8c.jpg new file mode 100644 index 0000000000000000000000000000000000000000..1fbb3e5efae03ec0d822e6dac07903789e78c8b6 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/877da56a11e72700c2b772cc735b366254a17d7c0d52424c8c5fae8436785f8c.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2e7956ef635fa84e39813f4616d6ad07c563708e65fb5d2ab05b45bea871687d +size 4563 diff --git a/data/2025/2504_10xxx/2504.10514/images/88e474c633dff0071ce09a707335e5f72fddbae6f77191e56126aea2aadce529.jpg b/data/2025/2504_10xxx/2504.10514/images/88e474c633dff0071ce09a707335e5f72fddbae6f77191e56126aea2aadce529.jpg new file mode 100644 index 0000000000000000000000000000000000000000..006933710f1fce155b9d4197cea2044b7afbb55e --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/88e474c633dff0071ce09a707335e5f72fddbae6f77191e56126aea2aadce529.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:775f8cd64600ce0a44421629d90f2d3cfe929be3d28e16bdc5f5e7cc3a301606 +size 4783 diff --git a/data/2025/2504_10xxx/2504.10514/images/8ddf130654105ff421c74eaa6bc175d1f7e1f67fa5d4a49338fda957ed70da93.jpg 
b/data/2025/2504_10xxx/2504.10514/images/8ddf130654105ff421c74eaa6bc175d1f7e1f67fa5d4a49338fda957ed70da93.jpg new file mode 100644 index 0000000000000000000000000000000000000000..3445900fa07ca47d8542bd372dd57b440d405a51 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/8ddf130654105ff421c74eaa6bc175d1f7e1f67fa5d4a49338fda957ed70da93.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:231cf0be5c65bdf90c9b112c4e0cfb35c6e345d91004706584cca0f0fd3211ce +size 7899 diff --git a/data/2025/2504_10xxx/2504.10514/images/9214e9d649999303fdb7b50dea46807402e5029545857d29a7aa3dd11583cc07.jpg b/data/2025/2504_10xxx/2504.10514/images/9214e9d649999303fdb7b50dea46807402e5029545857d29a7aa3dd11583cc07.jpg new file mode 100644 index 0000000000000000000000000000000000000000..fa25fc938df1a7a20307c1e03b294a29990bcb12 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/9214e9d649999303fdb7b50dea46807402e5029545857d29a7aa3dd11583cc07.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb1190a4a3c498c97f3a8f085fb51cdfb417013e90ae702e95dae75578f1fe51 +size 6828 diff --git a/data/2025/2504_10xxx/2504.10514/images/93a27658ebd2c5c8731b22d0f66a24ef38811798b21d2aed42890da244cb3bbc.jpg b/data/2025/2504_10xxx/2504.10514/images/93a27658ebd2c5c8731b22d0f66a24ef38811798b21d2aed42890da244cb3bbc.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b5167dc74f292a7237b338210b703696e45c356b --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/93a27658ebd2c5c8731b22d0f66a24ef38811798b21d2aed42890da244cb3bbc.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:996d4d7ec4692bd00b6666bf1d59385d332ded19760f3ce9dd5dbfbe46e7cc31 +size 7977 diff --git a/data/2025/2504_10xxx/2504.10514/images/963feca4a2fa06242d36731abbf680566ae08c2ccdeaf4f5f6860b37ec40d334.jpg b/data/2025/2504_10xxx/2504.10514/images/963feca4a2fa06242d36731abbf680566ae08c2ccdeaf4f5f6860b37ec40d334.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..e3fa215e6e74ffc19736b6feb5c0f7f56bcbeb80 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/963feca4a2fa06242d36731abbf680566ae08c2ccdeaf4f5f6860b37ec40d334.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b09b79cff8f50a903524762dc5938a44eea788829cc3ef424772ba97e7913423 +size 2073 diff --git a/data/2025/2504_10xxx/2504.10514/images/9645212959a5659a2b2b5517bde0fd806c561ee2ecbde8e706131d02d7602ead.jpg b/data/2025/2504_10xxx/2504.10514/images/9645212959a5659a2b2b5517bde0fd806c561ee2ecbde8e706131d02d7602ead.jpg new file mode 100644 index 0000000000000000000000000000000000000000..e4a9e9e07d7265894103b705fc67e1b1ae9e70d2 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/9645212959a5659a2b2b5517bde0fd806c561ee2ecbde8e706131d02d7602ead.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0cc55c629238333220ce6060abd8ab3404d96fa4d70d418e5be941deddc66ac0 +size 2882 diff --git a/data/2025/2504_10xxx/2504.10514/images/971e87a767c2d02708a7cea8a3800adeff0ccc472145183945234fcecbb87169.jpg b/data/2025/2504_10xxx/2504.10514/images/971e87a767c2d02708a7cea8a3800adeff0ccc472145183945234fcecbb87169.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f288257d3ab85e8fdcb1d7cd9211748c52a002c4 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/971e87a767c2d02708a7cea8a3800adeff0ccc472145183945234fcecbb87169.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3faaf8a0bf0fdeae5e09395f6cd4bf3e30cf606f822b553e5fa1da9eab1e79af +size 7600 diff --git a/data/2025/2504_10xxx/2504.10514/images/9807b184126a48713b499dc098fc184ac4cce4081905a0b8ba74c79974403805.jpg b/data/2025/2504_10xxx/2504.10514/images/9807b184126a48713b499dc098fc184ac4cce4081905a0b8ba74c79974403805.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b065200801774f5d5c3b21b124ab10eed61d38f9 --- /dev/null +++ 
b/data/2025/2504_10xxx/2504.10514/images/9807b184126a48713b499dc098fc184ac4cce4081905a0b8ba74c79974403805.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1022da443def95fad6f5c5e5287bc46506ae09c86dfe5b36350b31deffe16672 +size 18689 diff --git a/data/2025/2504_10xxx/2504.10514/images/98144762f3decf4a41b12421a071fae0f2efb49798648fc249f128248a04379b.jpg b/data/2025/2504_10xxx/2504.10514/images/98144762f3decf4a41b12421a071fae0f2efb49798648fc249f128248a04379b.jpg new file mode 100644 index 0000000000000000000000000000000000000000..fee16f253d5a4698b65ceb8ffc1d6df0de8ea971 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/98144762f3decf4a41b12421a071fae0f2efb49798648fc249f128248a04379b.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4e94c4ef7adcfcb7418dad59e0a9ab4d43f5ba51869429539eddeaa48dcfb29e +size 6436 diff --git a/data/2025/2504_10xxx/2504.10514/images/998092a0d679346874dd97bcc680c4d3eee29ad064902230aae970fd80107fd8.jpg b/data/2025/2504_10xxx/2504.10514/images/998092a0d679346874dd97bcc680c4d3eee29ad064902230aae970fd80107fd8.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b81df0b6d2318f89f447c61e2e3a38b9a4aef0a9 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/998092a0d679346874dd97bcc680c4d3eee29ad064902230aae970fd80107fd8.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:74669d03d97639e946844c5b52b626b60a4583386f461b06359d0cc40e06b07b +size 4561 diff --git a/data/2025/2504_10xxx/2504.10514/images/9b96657fefa1d52defb48a32a8eb92da5620c7813c002852c292ef28b297a613.jpg b/data/2025/2504_10xxx/2504.10514/images/9b96657fefa1d52defb48a32a8eb92da5620c7813c002852c292ef28b297a613.jpg new file mode 100644 index 0000000000000000000000000000000000000000..0515bfd0e0bd8d30ce670d73ac07b88a56d7a0fb --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/9b96657fefa1d52defb48a32a8eb92da5620c7813c002852c292ef28b297a613.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:75f8579c33d44877592e6c1df72d18fb06359e89b4e5cad139d294525d8fd062 +size 73133 diff --git a/data/2025/2504_10xxx/2504.10514/images/9c743c06142c6b9d1488431332f38111acb4d1747df2470be78020f2ef20ebc9.jpg b/data/2025/2504_10xxx/2504.10514/images/9c743c06142c6b9d1488431332f38111acb4d1747df2470be78020f2ef20ebc9.jpg new file mode 100644 index 0000000000000000000000000000000000000000..6ffaf9b303412ad26aa71aeff51b65f7c27ae484 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/9c743c06142c6b9d1488431332f38111acb4d1747df2470be78020f2ef20ebc9.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9c17ba5fbbd8d35333b1a6b41a81c81226e4d0245f3e76a1b9fd57be2ba455b6 +size 36291 diff --git a/data/2025/2504_10xxx/2504.10514/images/a02c7368ef7054fc8fa6a2c0d8c8c929988f22d64fb1347be844baea5b8b688d.jpg b/data/2025/2504_10xxx/2504.10514/images/a02c7368ef7054fc8fa6a2c0d8c8c929988f22d64fb1347be844baea5b8b688d.jpg new file mode 100644 index 0000000000000000000000000000000000000000..ac16dfb54289807611f5f3bb1452b4b42d80dcdd --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/a02c7368ef7054fc8fa6a2c0d8c8c929988f22d64fb1347be844baea5b8b688d.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3fb48e7737c9fc9b14f61fd61e1de2a50c4e9d87dfabc07b54b670f3866ac8ea +size 2026 diff --git a/data/2025/2504_10xxx/2504.10514/images/a07a140720b03acc33118f625e4d50c37e4c46e232872dbe80336db897030531.jpg b/data/2025/2504_10xxx/2504.10514/images/a07a140720b03acc33118f625e4d50c37e4c46e232872dbe80336db897030531.jpg new file mode 100644 index 0000000000000000000000000000000000000000..3de26ae1204f86f03b50d492162160bf39b43827 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/a07a140720b03acc33118f625e4d50c37e4c46e232872dbe80336db897030531.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b940bc1a128ee23c1b4dda5eb9e755af945950ddb950b02ecff1c4e2563ae91a +size 3280 diff --git 
a/data/2025/2504_10xxx/2504.10514/images/a1915f5f8b1f4296129bd8d4bbb16cc8865b2463056ce4174fd6187db21bb86d.jpg b/data/2025/2504_10xxx/2504.10514/images/a1915f5f8b1f4296129bd8d4bbb16cc8865b2463056ce4174fd6187db21bb86d.jpg new file mode 100644 index 0000000000000000000000000000000000000000..395689b27e6c0ea0f033f65bd7c6f697566277a9 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/a1915f5f8b1f4296129bd8d4bbb16cc8865b2463056ce4174fd6187db21bb86d.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cf9971689f22a097d93a7d1f413ee73e0eb5d47870b2a16755d3dea2b3646135 +size 4382 diff --git a/data/2025/2504_10xxx/2504.10514/images/a1b41d1272bee26b3739b7e4f2f30fcda33192cafbc666b06df4ea1ddcab1b33.jpg b/data/2025/2504_10xxx/2504.10514/images/a1b41d1272bee26b3739b7e4f2f30fcda33192cafbc666b06df4ea1ddcab1b33.jpg new file mode 100644 index 0000000000000000000000000000000000000000..1ffc45183a5d737cf45ed8ae74d667124f044854 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/a1b41d1272bee26b3739b7e4f2f30fcda33192cafbc666b06df4ea1ddcab1b33.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e96c2fcaa8712f2356f7770c8171966bb963f65a42136df7e5f94c4bc165aa47 +size 24231 diff --git a/data/2025/2504_10xxx/2504.10514/images/a1f9a6f7c1bcbfdeee124bd440f0aa018fa48c6ce34f5c7f172fd96f97a49ed0.jpg b/data/2025/2504_10xxx/2504.10514/images/a1f9a6f7c1bcbfdeee124bd440f0aa018fa48c6ce34f5c7f172fd96f97a49ed0.jpg new file mode 100644 index 0000000000000000000000000000000000000000..67cf741def804c382cdd9daa9c49dd7891a7d201 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/a1f9a6f7c1bcbfdeee124bd440f0aa018fa48c6ce34f5c7f172fd96f97a49ed0.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a018a6699ebfce5047827c47832d18c00dac2a8b0afac0cff08ea529e33bac33 +size 4109 diff --git a/data/2025/2504_10xxx/2504.10514/images/a2103a3962c6d4be98739201fc14b55d24278707289c018a67f8a5309310c679.jpg 
b/data/2025/2504_10xxx/2504.10514/images/a2103a3962c6d4be98739201fc14b55d24278707289c018a67f8a5309310c679.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4eefa54a5533a025ea6eed88d769c706faa9c7b3 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/a2103a3962c6d4be98739201fc14b55d24278707289c018a67f8a5309310c679.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2f4d1e92cb0e86db01abdad6e94213c0d1460d60ef7dbe96875769782545914f +size 3601 diff --git a/data/2025/2504_10xxx/2504.10514/images/a2c419157f2bc41f0f9c9eaf839dda398140045d21b4e420b187173691dc537b.jpg b/data/2025/2504_10xxx/2504.10514/images/a2c419157f2bc41f0f9c9eaf839dda398140045d21b4e420b187173691dc537b.jpg new file mode 100644 index 0000000000000000000000000000000000000000..9da58bf841abfdd987b48b7d85254741b8addf25 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/a2c419157f2bc41f0f9c9eaf839dda398140045d21b4e420b187173691dc537b.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a887c1dbd830c5d182e6c9cb02b26b3dbaa4e2f7b9fb84d9a790b1795634f43f +size 3278 diff --git a/data/2025/2504_10xxx/2504.10514/images/a8629b08764230a78d2ec89a49fcfb6ca0d216b62038d6980111f243799ccd7d.jpg b/data/2025/2504_10xxx/2504.10514/images/a8629b08764230a78d2ec89a49fcfb6ca0d216b62038d6980111f243799ccd7d.jpg new file mode 100644 index 0000000000000000000000000000000000000000..67c825d7b1b89448d98a4321dee878ea0ab730dc --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/a8629b08764230a78d2ec89a49fcfb6ca0d216b62038d6980111f243799ccd7d.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:05f2d4f03221def0eb05ea7069884d19711c36dd52b56697f364fedf3ae79bed +size 21452 diff --git a/data/2025/2504_10xxx/2504.10514/images/ab15d66389f875f3cc3c3133c3751eee7abe2446e0446e125cbf82ed3d4036d8.jpg b/data/2025/2504_10xxx/2504.10514/images/ab15d66389f875f3cc3c3133c3751eee7abe2446e0446e125cbf82ed3d4036d8.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..afd25941d1398bda67b3dde830d31ec3141a50da --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/ab15d66389f875f3cc3c3133c3751eee7abe2446e0446e125cbf82ed3d4036d8.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:baab57c17dc6113ea24d972abcfa42ebd5bb0cfab29ff5ad4491400e1e4ddc32 +size 22266 diff --git a/data/2025/2504_10xxx/2504.10514/images/abc6371b7e79ce4293c09cde16fd2c34c1ee6af182d6a212a1eea8c3fd220603.jpg b/data/2025/2504_10xxx/2504.10514/images/abc6371b7e79ce4293c09cde16fd2c34c1ee6af182d6a212a1eea8c3fd220603.jpg new file mode 100644 index 0000000000000000000000000000000000000000..68ada6f9610af9a91542ba41e2eb0272dbe49a75 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/abc6371b7e79ce4293c09cde16fd2c34c1ee6af182d6a212a1eea8c3fd220603.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2c3e71a795b370a4e37bcee77c89f4a8f35224ad4f26f6eae4424f41ff7a81c2 +size 4215 diff --git a/data/2025/2504_10xxx/2504.10514/images/ac8abab7a75fa8fb34bc4f332ee1c8a10d0f8ec6dd527f634fd140320687390f.jpg b/data/2025/2504_10xxx/2504.10514/images/ac8abab7a75fa8fb34bc4f332ee1c8a10d0f8ec6dd527f634fd140320687390f.jpg new file mode 100644 index 0000000000000000000000000000000000000000..6632ac292ac93f21034dcf4eafe430aa37e5a950 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/ac8abab7a75fa8fb34bc4f332ee1c8a10d0f8ec6dd527f634fd140320687390f.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6fc3d499e854289ddfda24e2e78be25396493b4ec813fe65127308bd545fc54 +size 19600 diff --git a/data/2025/2504_10xxx/2504.10514/images/add590e2395c5b4a230b5e76843887f0bfd0c9e74e535b99ab676e4a85929d4e.jpg b/data/2025/2504_10xxx/2504.10514/images/add590e2395c5b4a230b5e76843887f0bfd0c9e74e535b99ab676e4a85929d4e.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4239bd636e21b6c1c2affa37148eae8dbcbef962 --- /dev/null +++ 
b/data/2025/2504_10xxx/2504.10514/images/add590e2395c5b4a230b5e76843887f0bfd0c9e74e535b99ab676e4a85929d4e.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f51abaabc3dda6f27450e0a284b2f3f29dca41819f4d3b2a21e83c238335c2dd +size 3823 diff --git a/data/2025/2504_10xxx/2504.10514/images/aeb449f380492b874d9041ad3e87a02c8e6fc2bf638b9b203399b19deba8d2e5.jpg b/data/2025/2504_10xxx/2504.10514/images/aeb449f380492b874d9041ad3e87a02c8e6fc2bf638b9b203399b19deba8d2e5.jpg new file mode 100644 index 0000000000000000000000000000000000000000..60269de617cbd08b1797184588c891dad15460ca --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/aeb449f380492b874d9041ad3e87a02c8e6fc2bf638b9b203399b19deba8d2e5.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:09b33a88105edd165964fe5e8106506b8231c87b141efe9530e5bacfa34abfd7 +size 8808 diff --git a/data/2025/2504_10xxx/2504.10514/images/aef346c945483778332310a8f57554bf20287e4e50626ad755cbc0fbd4d16ef1.jpg b/data/2025/2504_10xxx/2504.10514/images/aef346c945483778332310a8f57554bf20287e4e50626ad755cbc0fbd4d16ef1.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a5c9a5310c6f96ff19927b4c1516a77f1f6a6aa5 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/aef346c945483778332310a8f57554bf20287e4e50626ad755cbc0fbd4d16ef1.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:65c08294a02e948aba66d622f1d4a1dcc45821bdaab1cd8dfa8c2ec93715f940 +size 23923 diff --git a/data/2025/2504_10xxx/2504.10514/images/af39cdfe500e95bdd08905edb4749d8129a2f8ee61d64bafab000d32e728a7c0.jpg b/data/2025/2504_10xxx/2504.10514/images/af39cdfe500e95bdd08905edb4749d8129a2f8ee61d64bafab000d32e728a7c0.jpg new file mode 100644 index 0000000000000000000000000000000000000000..0a1da7548f83b74d2f7fa9aeb283206b9650316b --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/af39cdfe500e95bdd08905edb4749d8129a2f8ee61d64bafab000d32e728a7c0.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:a99fc72eaac173d28d2f8b3f7ceab310d8007d0a55658d0f4c94ad2941076e6f +size 3671 diff --git a/data/2025/2504_10xxx/2504.10514/images/afe37da8b79d3de1c08005a13422fd9bd97e612a82e905ce643e337d2059ccb3.jpg b/data/2025/2504_10xxx/2504.10514/images/afe37da8b79d3de1c08005a13422fd9bd97e612a82e905ce643e337d2059ccb3.jpg new file mode 100644 index 0000000000000000000000000000000000000000..614351dcf94110ac8a0df86c9fe4e502c10743fd --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/afe37da8b79d3de1c08005a13422fd9bd97e612a82e905ce643e337d2059ccb3.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:473112579886dadd09aa5f3eb9ace49e57de38a5fb1d8283e34ec8ae6d14268e +size 28274 diff --git a/data/2025/2504_10xxx/2504.10514/images/b0442098f58804ee226a7f7ba18702f450572f8c433ea41eb00f0a4f129914d1.jpg b/data/2025/2504_10xxx/2504.10514/images/b0442098f58804ee226a7f7ba18702f450572f8c433ea41eb00f0a4f129914d1.jpg new file mode 100644 index 0000000000000000000000000000000000000000..5d35252dfa80ce4e5f3e191a7247a6d0c74363b2 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/b0442098f58804ee226a7f7ba18702f450572f8c433ea41eb00f0a4f129914d1.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:021cdac52b8db2608c5e3bb47ccd60899ce72f315e0649fe2063deadf0131753 +size 3982 diff --git a/data/2025/2504_10xxx/2504.10514/images/b0e9755c8746794e00271b97f98ea952445567fabab20510299d4a93e0b7a407.jpg b/data/2025/2504_10xxx/2504.10514/images/b0e9755c8746794e00271b97f98ea952445567fabab20510299d4a93e0b7a407.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4cc7236397ecdb8aebac8feaa3f3f328ceefbf5c --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/b0e9755c8746794e00271b97f98ea952445567fabab20510299d4a93e0b7a407.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f774f3fed8ce8c964b1687a02a2b39b0a902c83e1d4a49d692c4542bc6f07153 +size 25187 diff --git 
a/data/2025/2504_10xxx/2504.10514/images/b26eab38716da03f27ac4289e4cf416c931f938c979328864b144c9cdbe64c3e.jpg b/data/2025/2504_10xxx/2504.10514/images/b26eab38716da03f27ac4289e4cf416c931f938c979328864b144c9cdbe64c3e.jpg new file mode 100644 index 0000000000000000000000000000000000000000..147e7c5c8f1448ef1227824f58b123296c1a9361 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/b26eab38716da03f27ac4289e4cf416c931f938c979328864b144c9cdbe64c3e.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9f9edaa23172e4a0741d9d323b211264e62feb17e9d99e19f63bec0be6efa7a0 +size 28181 diff --git a/data/2025/2504_10xxx/2504.10514/images/b39f08f18e170c13c05003ddcd77bfc2996d090dfb6e4475ca2d89263859aeec.jpg b/data/2025/2504_10xxx/2504.10514/images/b39f08f18e170c13c05003ddcd77bfc2996d090dfb6e4475ca2d89263859aeec.jpg new file mode 100644 index 0000000000000000000000000000000000000000..40f16c9a19efbbb3847b8f3f895f94dd77384e65 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/b39f08f18e170c13c05003ddcd77bfc2996d090dfb6e4475ca2d89263859aeec.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:75dde5c5fc3f7963d429311ffa4cec237541b61581ff3c46f94740d392e7a610 +size 3201 diff --git a/data/2025/2504_10xxx/2504.10514/images/b6d5282bc92abd52d6becf2f7340a6ae9ca1a48d6920ddddaa746fcf8782aa9f.jpg b/data/2025/2504_10xxx/2504.10514/images/b6d5282bc92abd52d6becf2f7340a6ae9ca1a48d6920ddddaa746fcf8782aa9f.jpg new file mode 100644 index 0000000000000000000000000000000000000000..61659de008c5c5bc51e7ed4ebfa5f6ff95827e42 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/b6d5282bc92abd52d6becf2f7340a6ae9ca1a48d6920ddddaa746fcf8782aa9f.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aef2e6691b3033d5b1009aa24291754f16353cbba59769499bde0d71405416ba +size 3454 diff --git a/data/2025/2504_10xxx/2504.10514/images/b805d5f51d8b61281e89468619a144287ec35d0946a6ec0ba5aa1b7bf5fcc398.jpg 
b/data/2025/2504_10xxx/2504.10514/images/b805d5f51d8b61281e89468619a144287ec35d0946a6ec0ba5aa1b7bf5fcc398.jpg new file mode 100644 index 0000000000000000000000000000000000000000..9e464cbaa6b7aeccaa0686e5caef0cb849fb15eb --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/b805d5f51d8b61281e89468619a144287ec35d0946a6ec0ba5aa1b7bf5fcc398.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1ae16906e2223faadc866fa72972275a9f0089c0b67ba78f6aac528dd18900d9 +size 4554 diff --git a/data/2025/2504_10xxx/2504.10514/images/b98d4b0bdc3723411d2d559e605bd060b53ba4ceba8c6734f982f1e7256e3b79.jpg b/data/2025/2504_10xxx/2504.10514/images/b98d4b0bdc3723411d2d559e605bd060b53ba4ceba8c6734f982f1e7256e3b79.jpg new file mode 100644 index 0000000000000000000000000000000000000000..512d0e531651964fde772f357b57bff3669b2af2 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/b98d4b0bdc3723411d2d559e605bd060b53ba4ceba8c6734f982f1e7256e3b79.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b178f9e47ac7589e34d45f9e507ddebcf928058cf516550be74f03738b1ac05 +size 29738 diff --git a/data/2025/2504_10xxx/2504.10514/images/ba26ce37a543827ab018fbb1147492ec152fee662a1e935170eefb74cfd6916a.jpg b/data/2025/2504_10xxx/2504.10514/images/ba26ce37a543827ab018fbb1147492ec152fee662a1e935170eefb74cfd6916a.jpg new file mode 100644 index 0000000000000000000000000000000000000000..59398b74aa9d04caf3a971ce5616a1583f2c25be --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/ba26ce37a543827ab018fbb1147492ec152fee662a1e935170eefb74cfd6916a.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:376a9e4c21f5e651e83fb50d5dc9ceb30f2b3b31fff2c7f7fd31776631bdd79d +size 3111 diff --git a/data/2025/2504_10xxx/2504.10514/images/bcd00c318f7f3748f7ddd8f40bb7f11ac253fa5d7594515bdcf550074b42b214.jpg b/data/2025/2504_10xxx/2504.10514/images/bcd00c318f7f3748f7ddd8f40bb7f11ac253fa5d7594515bdcf550074b42b214.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..f8a57b59f6a1b938e1370282ea5f38066fe27394 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/bcd00c318f7f3748f7ddd8f40bb7f11ac253fa5d7594515bdcf550074b42b214.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:540e7a08e5130af26813b95b2e3174a92168850458593819d3e1d168f5fe5b35 +size 2782 diff --git a/data/2025/2504_10xxx/2504.10514/images/c05e6ccf8b74e62f9ce387d772203df9eef31941b4a941aeec61de9694a48bd6.jpg b/data/2025/2504_10xxx/2504.10514/images/c05e6ccf8b74e62f9ce387d772203df9eef31941b4a941aeec61de9694a48bd6.jpg new file mode 100644 index 0000000000000000000000000000000000000000..9c53bf844ec85cdb0e77298f4a1818274785d3bc --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/c05e6ccf8b74e62f9ce387d772203df9eef31941b4a941aeec61de9694a48bd6.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b2839277ee17da7f6b4c18b60800044900910a9530f232ae90c5fb0cbd11344c +size 22878 diff --git a/data/2025/2504_10xxx/2504.10514/images/c37486eaabc8fc97ed4c652a07c5ed8f34be28cbd367fb740ec38f9e3701d520.jpg b/data/2025/2504_10xxx/2504.10514/images/c37486eaabc8fc97ed4c652a07c5ed8f34be28cbd367fb740ec38f9e3701d520.jpg new file mode 100644 index 0000000000000000000000000000000000000000..467194e2cd09bce8bcd3d54fbcda638c4d7f6437 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/c37486eaabc8fc97ed4c652a07c5ed8f34be28cbd367fb740ec38f9e3701d520.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:436c3c789f63a1c221e96d43c9d3e84adf06f04c3ae0de02f3801915af988b5a +size 167937 diff --git a/data/2025/2504_10xxx/2504.10514/images/c59a95f242d2784c8810f7e73553fcf63b0050874959eb29f65bbb4b686ffa7e.jpg b/data/2025/2504_10xxx/2504.10514/images/c59a95f242d2784c8810f7e73553fcf63b0050874959eb29f65bbb4b686ffa7e.jpg new file mode 100644 index 0000000000000000000000000000000000000000..e64cd6ada285cdce94400585fa2cc25aa651aea9 --- /dev/null +++ 
b/data/2025/2504_10xxx/2504.10514/images/c59a95f242d2784c8810f7e73553fcf63b0050874959eb29f65bbb4b686ffa7e.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:947b4fa2f31b8d5865daaa1dfe7b31d65bd19d6abe6e8396b95e7179b75e41ef +size 7997 diff --git a/data/2025/2504_10xxx/2504.10514/images/c604546f1c6949ae3fda85b42ead50c4fdc739f20769821f286d365e3be8501c.jpg b/data/2025/2504_10xxx/2504.10514/images/c604546f1c6949ae3fda85b42ead50c4fdc739f20769821f286d365e3be8501c.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4b3ab196c01c85abdc3b9d509e75c2e5629e5276 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/c604546f1c6949ae3fda85b42ead50c4fdc739f20769821f286d365e3be8501c.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:85431775507cd0a2d881b818ebfcc77797a044b69134021f43c6bb9346d3da13 +size 8059 diff --git a/data/2025/2504_10xxx/2504.10514/images/c679b7bb01346a8afdd10c2c55d4a037959775080db0aeda3194595a676bb15b.jpg b/data/2025/2504_10xxx/2504.10514/images/c679b7bb01346a8afdd10c2c55d4a037959775080db0aeda3194595a676bb15b.jpg new file mode 100644 index 0000000000000000000000000000000000000000..97b13e70e31b2d054092392765d084f652d263b8 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/c679b7bb01346a8afdd10c2c55d4a037959775080db0aeda3194595a676bb15b.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d9d0c0b049f09b1173f69934f19f0073ee9624fabeddc0d95a22b89db8b408c7 +size 2969 diff --git a/data/2025/2504_10xxx/2504.10514/images/c6983d1170430ebae93d760bbcc9bb01ef6eaf3e9959d4a88df4dbc42bc3e639.jpg b/data/2025/2504_10xxx/2504.10514/images/c6983d1170430ebae93d760bbcc9bb01ef6eaf3e9959d4a88df4dbc42bc3e639.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f03154e8ae84c0d2c0923f3600d4583250fe2f2c --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/c6983d1170430ebae93d760bbcc9bb01ef6eaf3e9959d4a88df4dbc42bc3e639.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:f34bbceae8377594bcce81c77b1bf7df79f3c633f0790a3388e88d1760ddaf94 +size 49311 diff --git a/data/2025/2504_10xxx/2504.10514/images/c6facafc15e401d6c68425642e147e60adf5498011430644825bbd7ee0537c12.jpg b/data/2025/2504_10xxx/2504.10514/images/c6facafc15e401d6c68425642e147e60adf5498011430644825bbd7ee0537c12.jpg new file mode 100644 index 0000000000000000000000000000000000000000..ac2d6eaeef7a7dde5dd60c554e023d7b4bf91e9d --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/c6facafc15e401d6c68425642e147e60adf5498011430644825bbd7ee0537c12.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:942210e0d402b92d9617be4d8851d3f3bb579b899c591b2481886d56e0fb5ab1 +size 45980 diff --git a/data/2025/2504_10xxx/2504.10514/images/c83c3ebd129460f15657e81fcfd27c4a3fe2ebdc33784f46981734411391b84c.jpg b/data/2025/2504_10xxx/2504.10514/images/c83c3ebd129460f15657e81fcfd27c4a3fe2ebdc33784f46981734411391b84c.jpg new file mode 100644 index 0000000000000000000000000000000000000000..7d7a4eeffd9e78a65728a4017e413b4838ef073b --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/c83c3ebd129460f15657e81fcfd27c4a3fe2ebdc33784f46981734411391b84c.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb538329efa8b9b7594595ae74bcc9ed87ba47d04d5137469b3ab76df9a5b45b +size 6577 diff --git a/data/2025/2504_10xxx/2504.10514/images/c9df2e9b61580feeede61431af686096da173946a751c8558d27c9ce338b6322.jpg b/data/2025/2504_10xxx/2504.10514/images/c9df2e9b61580feeede61431af686096da173946a751c8558d27c9ce338b6322.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f9d5b99e66be796bda990e75940c6b79ffed2bc8 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/c9df2e9b61580feeede61431af686096da173946a751c8558d27c9ce338b6322.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4da9e689aa436649ac3069831dc056c161e3ea327db5f1d39235b1efa3ef72bf +size 5618 diff --git 
a/data/2025/2504_10xxx/2504.10514/images/ca217e4f60851500ab5909e3956d6b23753e3df26cf75fbec365f442e2d1a763.jpg b/data/2025/2504_10xxx/2504.10514/images/ca217e4f60851500ab5909e3956d6b23753e3df26cf75fbec365f442e2d1a763.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f613d36489606db250be210ae8fb782bcb74462e --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/ca217e4f60851500ab5909e3956d6b23753e3df26cf75fbec365f442e2d1a763.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:648fd8d8eae4f882845233fd15e3d005c6d359a3582292dbd9cae45fc83ffc96 +size 4687 diff --git a/data/2025/2504_10xxx/2504.10514/images/cb74dfc396d5b074ade375605653a193199cb27ee661f5620c34176342e8ddc8.jpg b/data/2025/2504_10xxx/2504.10514/images/cb74dfc396d5b074ade375605653a193199cb27ee661f5620c34176342e8ddc8.jpg new file mode 100644 index 0000000000000000000000000000000000000000..5278ad5e9badf2ad1c3b282f10893e2f620d8d38 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/cb74dfc396d5b074ade375605653a193199cb27ee661f5620c34176342e8ddc8.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:21880aa6442e8c789e9fb777b600385cd222bee1c385ab1b4e742f3f1cff58e3 +size 4834 diff --git a/data/2025/2504_10xxx/2504.10514/images/cbd2930989e81297795f38a8d335c4f0e436114d40ecacf7ec8c73899c6d3fd2.jpg b/data/2025/2504_10xxx/2504.10514/images/cbd2930989e81297795f38a8d335c4f0e436114d40ecacf7ec8c73899c6d3fd2.jpg new file mode 100644 index 0000000000000000000000000000000000000000..01e8c7cf76700adbe10ac4642c5a691a6eecb180 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/cbd2930989e81297795f38a8d335c4f0e436114d40ecacf7ec8c73899c6d3fd2.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6534e5903ad5bceacaac4f087d56a3cbadaa8dac37b87fd26c2936d58261666 +size 7209 diff --git a/data/2025/2504_10xxx/2504.10514/images/cfd76bcaade75240c9606f3672221aa8ff31006fc41108e3930797fad4e317d5.jpg 
b/data/2025/2504_10xxx/2504.10514/images/cfd76bcaade75240c9606f3672221aa8ff31006fc41108e3930797fad4e317d5.jpg new file mode 100644 index 0000000000000000000000000000000000000000..dfbf5c4c7d38183c13cafb7cbcfd98c05b9659df --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/cfd76bcaade75240c9606f3672221aa8ff31006fc41108e3930797fad4e317d5.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:56e8eb270f20742d0ac306399232577f0d43d266ba181a117b9c7f5e67d3a5ac +size 6396 diff --git a/data/2025/2504_10xxx/2504.10514/images/d12f3d56e223e8c4c4ffd1e4211bba0e65511b1dd1838ddb833ecb814d0e653a.jpg b/data/2025/2504_10xxx/2504.10514/images/d12f3d56e223e8c4c4ffd1e4211bba0e65511b1dd1838ddb833ecb814d0e653a.jpg new file mode 100644 index 0000000000000000000000000000000000000000..acd8b54603e013844b60813a00586cb7b7b9a181 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/d12f3d56e223e8c4c4ffd1e4211bba0e65511b1dd1838ddb833ecb814d0e653a.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:573dbcb2d491355dff28ea5b3381a1aff48ee81dce648913eb88237a7e92de9d +size 8539 diff --git a/data/2025/2504_10xxx/2504.10514/images/d18d9f446eec8763b494d8efc0fdc2b1db35ca9af0a42f51df663670312291f1.jpg b/data/2025/2504_10xxx/2504.10514/images/d18d9f446eec8763b494d8efc0fdc2b1db35ca9af0a42f51df663670312291f1.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a22c08185f440cef5b67cf2d3bc1d7f35489ffdd --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/d18d9f446eec8763b494d8efc0fdc2b1db35ca9af0a42f51df663670312291f1.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:66610514b345ce713c27764dd89504e4a6236c6ec37ba6a9a381f64ee8e0ddf8 +size 3128 diff --git a/data/2025/2504_10xxx/2504.10514/images/d20c644c5d2b9fc3e5d5d54434acdbc990b2c09733bc998ace81a4f93d129a70.jpg b/data/2025/2504_10xxx/2504.10514/images/d20c644c5d2b9fc3e5d5d54434acdbc990b2c09733bc998ace81a4f93d129a70.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..d08bc081bac914b12bfc9b4433ee87c51c06ff1c --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/d20c644c5d2b9fc3e5d5d54434acdbc990b2c09733bc998ace81a4f93d129a70.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5047c2c72dc8bfb8a374d86f9ce7eca6e48d872fce4e11e7a3c6da0ff0cd6aa2 +size 3314 diff --git a/data/2025/2504_10xxx/2504.10514/images/d3159d13d3adc9a24ba185559b6a755ba073de0943e39df62692520911738dd4.jpg b/data/2025/2504_10xxx/2504.10514/images/d3159d13d3adc9a24ba185559b6a755ba073de0943e39df62692520911738dd4.jpg new file mode 100644 index 0000000000000000000000000000000000000000..27a26cedd3f3456de90ea237e1cb18a05a845f25 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/d3159d13d3adc9a24ba185559b6a755ba073de0943e39df62692520911738dd4.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:14ebdbf619553747fcf75268ebe59bd04b36015976659a016d923b548ddeaf3a +size 7451 diff --git a/data/2025/2504_10xxx/2504.10514/images/d33e9255a172a81dc60bd43741f083afdcf20d803b50e790a9fca9bb7545019e.jpg b/data/2025/2504_10xxx/2504.10514/images/d33e9255a172a81dc60bd43741f083afdcf20d803b50e790a9fca9bb7545019e.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b4db9dd8e6ac3f48d24ffd31aa23ef658a1159d6 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/d33e9255a172a81dc60bd43741f083afdcf20d803b50e790a9fca9bb7545019e.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:776029096e6431936b3e15f5051383e65b5784d37c98e75cae88f349a00c3a72 +size 4442 diff --git a/data/2025/2504_10xxx/2504.10514/images/d3a29f42cb22cd1ea8c99c241ac8c5d1bfd2c1b5f3cce2cddd10a0ca1eab4d6d.jpg b/data/2025/2504_10xxx/2504.10514/images/d3a29f42cb22cd1ea8c99c241ac8c5d1bfd2c1b5f3cce2cddd10a0ca1eab4d6d.jpg new file mode 100644 index 0000000000000000000000000000000000000000..961805e5f2ef6bad09ad22d59092ecb51c9188e3 --- /dev/null +++ 
b/data/2025/2504_10xxx/2504.10514/images/d3a29f42cb22cd1ea8c99c241ac8c5d1bfd2c1b5f3cce2cddd10a0ca1eab4d6d.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:36c03df2d477d1539ae048d0cd1752c662611d4f6a72ce1b8ab544590fdcae5d +size 3291 diff --git a/data/2025/2504_10xxx/2504.10514/images/d3ebda281ef87ad9b63c21a331d2dc3fdec78569cd48c9b24e7942452278e4c8.jpg b/data/2025/2504_10xxx/2504.10514/images/d3ebda281ef87ad9b63c21a331d2dc3fdec78569cd48c9b24e7942452278e4c8.jpg new file mode 100644 index 0000000000000000000000000000000000000000..53513e377e7f5e91e7ba83f326d123fddc4da856 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/d3ebda281ef87ad9b63c21a331d2dc3fdec78569cd48c9b24e7942452278e4c8.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd309542980eb21032c90bf373f693fa5de4d0fc6ad2026b46088449e9741e59 +size 7185 diff --git a/data/2025/2504_10xxx/2504.10514/images/d4f47a3cfea74dbcdba6be6cae5c3de1604c855186200d533b4feaf81cebecaa.jpg b/data/2025/2504_10xxx/2504.10514/images/d4f47a3cfea74dbcdba6be6cae5c3de1604c855186200d533b4feaf81cebecaa.jpg new file mode 100644 index 0000000000000000000000000000000000000000..83732d665239a760510b3e0a0e4d2a40a5acf9c8 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/d4f47a3cfea74dbcdba6be6cae5c3de1604c855186200d533b4feaf81cebecaa.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce45fe7b0df05545584395587b98333ad618824382a38c7a8222f371a321ffb2 +size 21992 diff --git a/data/2025/2504_10xxx/2504.10514/images/d60ff358df2811d8830a0caebeed2f35e40a50d32131cd91bafe0c4f1c943739.jpg b/data/2025/2504_10xxx/2504.10514/images/d60ff358df2811d8830a0caebeed2f35e40a50d32131cd91bafe0c4f1c943739.jpg new file mode 100644 index 0000000000000000000000000000000000000000..8a9f55962c6c6196619d5bde7db9579a258295dc --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/d60ff358df2811d8830a0caebeed2f35e40a50d32131cd91bafe0c4f1c943739.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:e4c05588562601b472f0b64eff867b6f39e31a8749e58aa659519d9adbc29b8b +size 122248 diff --git a/data/2025/2504_10xxx/2504.10514/images/d6504c1ad7498e6665534d719eb3b9f61dd679660f6f92c13ebc02cdb8da3bb5.jpg b/data/2025/2504_10xxx/2504.10514/images/d6504c1ad7498e6665534d719eb3b9f61dd679660f6f92c13ebc02cdb8da3bb5.jpg new file mode 100644 index 0000000000000000000000000000000000000000..15c3e32c9e946eb9a91f51483f4a2f49a812e2d8 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/d6504c1ad7498e6665534d719eb3b9f61dd679660f6f92c13ebc02cdb8da3bb5.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9dd0c1cf5815e8243869bbe6cfdf3569335b1ef7141a22c0f2fb0c4242c2e000 +size 31252 diff --git a/data/2025/2504_10xxx/2504.10514/images/d6d6ecd0cc66fed78dc928b0f30ad107b93312082826e23b451df48771aa2850.jpg b/data/2025/2504_10xxx/2504.10514/images/d6d6ecd0cc66fed78dc928b0f30ad107b93312082826e23b451df48771aa2850.jpg new file mode 100644 index 0000000000000000000000000000000000000000..fbf9530a1d272cf32a1cc071a19228da9b2639cc --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/d6d6ecd0cc66fed78dc928b0f30ad107b93312082826e23b451df48771aa2850.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e0e50344e9e8e1d075a49cd597e0824ef321d5f64c3bf5b2c89325d3d5c3f353 +size 4812 diff --git a/data/2025/2504_10xxx/2504.10514/images/d7df1e881ec4dc7e081e6307fef0944295a543e8006267897fd257865e0e75f8.jpg b/data/2025/2504_10xxx/2504.10514/images/d7df1e881ec4dc7e081e6307fef0944295a543e8006267897fd257865e0e75f8.jpg new file mode 100644 index 0000000000000000000000000000000000000000..46cb984db5bccd35a567fa5877dac0ab14ac3006 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/d7df1e881ec4dc7e081e6307fef0944295a543e8006267897fd257865e0e75f8.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b608443592bf8acb3c182c2333c2638bd122f9c4795e42c6f78363d00d99203f +size 4078 diff --git 
a/data/2025/2504_10xxx/2504.10514/images/d7e6c7ad93864c2526094df0ff56240f5074c112d0eb2ab765f3a03b33ce042c.jpg b/data/2025/2504_10xxx/2504.10514/images/d7e6c7ad93864c2526094df0ff56240f5074c112d0eb2ab765f3a03b33ce042c.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4d6acd24dbd904772b0d64d4a02f680c402424aa --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/d7e6c7ad93864c2526094df0ff56240f5074c112d0eb2ab765f3a03b33ce042c.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d9f81387bb535d5a0a694d433e353a32f01d9783304a160aa3d9627a161b37a1 +size 30090 diff --git a/data/2025/2504_10xxx/2504.10514/images/d9a74d2d06d6bc02d62e50fbf3d1af7d17dac77d6d94345ca9038a8beb3a14fc.jpg b/data/2025/2504_10xxx/2504.10514/images/d9a74d2d06d6bc02d62e50fbf3d1af7d17dac77d6d94345ca9038a8beb3a14fc.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4d77fd29143de8816417e62d3cd766ee13004d31 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/d9a74d2d06d6bc02d62e50fbf3d1af7d17dac77d6d94345ca9038a8beb3a14fc.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:28a17cdde4e5838bc11d28976505113781923f4c746baf251adcf26a58449700 +size 214438 diff --git a/data/2025/2504_10xxx/2504.10514/images/dad9c742ce073687e861db5cbdc225cf71a5e83bfd896f85a0eb676ba55ea560.jpg b/data/2025/2504_10xxx/2504.10514/images/dad9c742ce073687e861db5cbdc225cf71a5e83bfd896f85a0eb676ba55ea560.jpg new file mode 100644 index 0000000000000000000000000000000000000000..74a9e5913a07b92b1002a93c8b8a939e7edb5e9f --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/dad9c742ce073687e861db5cbdc225cf71a5e83bfd896f85a0eb676ba55ea560.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:408d5931fcbfa9a94cb4c31f03b4160e9ba57e8570a63afffb61ab0e45b73d25 +size 3137 diff --git a/data/2025/2504_10xxx/2504.10514/images/dd58b55e29f30c324245c853130868c2b7d326483e1f84d0e7d4d40a90702f97.jpg 
b/data/2025/2504_10xxx/2504.10514/images/dd58b55e29f30c324245c853130868c2b7d326483e1f84d0e7d4d40a90702f97.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a1966b252eb640f4cb6291f3e570f995d8ee6962 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/dd58b55e29f30c324245c853130868c2b7d326483e1f84d0e7d4d40a90702f97.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1dc3fcb3be5b45e681a7398d4a4ceb4dbb4ba14008a051afc1fffe3bdba7bdc3 +size 41081 diff --git a/data/2025/2504_10xxx/2504.10514/images/de903f7ef6d2cd449ffbc8b99d7a07e385b6515dbe6f5eb135f50dc9800c77d1.jpg b/data/2025/2504_10xxx/2504.10514/images/de903f7ef6d2cd449ffbc8b99d7a07e385b6515dbe6f5eb135f50dc9800c77d1.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b97b910c91d0cc8b4127545312ffaab9ab3e5ca7 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/de903f7ef6d2cd449ffbc8b99d7a07e385b6515dbe6f5eb135f50dc9800c77d1.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0bd4b714ae64c601909833abfdd26d2d9db77b2c183abcc56d41d209527c5901 +size 43701 diff --git a/data/2025/2504_10xxx/2504.10514/images/de976e631cf087e9b98fcbfebdd631aec38341bb046ccdaefd2e46c2c21360a0.jpg b/data/2025/2504_10xxx/2504.10514/images/de976e631cf087e9b98fcbfebdd631aec38341bb046ccdaefd2e46c2c21360a0.jpg new file mode 100644 index 0000000000000000000000000000000000000000..ca34cae41ad9045d4d92d201913b8a60af49279f --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/de976e631cf087e9b98fcbfebdd631aec38341bb046ccdaefd2e46c2c21360a0.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:be4149e75ce7865b7cebc23052aa24e27f345a12500f6e057ff46754181cb821 +size 19829 diff --git a/data/2025/2504_10xxx/2504.10514/images/e1521cc88cda5b7132e19a9b6e08e1b236abd7de6b389882cd8d89ff8cd71f0c.jpg b/data/2025/2504_10xxx/2504.10514/images/e1521cc88cda5b7132e19a9b6e08e1b236abd7de6b389882cd8d89ff8cd71f0c.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..cc61a5fbcb4073e157cfc42208368df29d411049 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/e1521cc88cda5b7132e19a9b6e08e1b236abd7de6b389882cd8d89ff8cd71f0c.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fa4cd0d4dfdce1f192995d5ce2ae6f0570d86b2331c2daddc3d48a0477f50198 +size 2608 diff --git a/data/2025/2504_10xxx/2504.10514/images/e2968b8a9c0fd3c158e3bea02d271adcea3ac376cd9b89fff66f51a56e443633.jpg b/data/2025/2504_10xxx/2504.10514/images/e2968b8a9c0fd3c158e3bea02d271adcea3ac376cd9b89fff66f51a56e443633.jpg new file mode 100644 index 0000000000000000000000000000000000000000..d5c77d30ca06e4f6adab44c575574c9b9e608c2a --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/e2968b8a9c0fd3c158e3bea02d271adcea3ac376cd9b89fff66f51a56e443633.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b17b0875d61fff3c6418810e37d57e078ebdcb250f2d523ff56c67ec0cc0913 +size 6674 diff --git a/data/2025/2504_10xxx/2504.10514/images/e2e444cfa3527af494883e988cd0abd80b558f1d182bb536ebe8e991e6a0f6ad.jpg b/data/2025/2504_10xxx/2504.10514/images/e2e444cfa3527af494883e988cd0abd80b558f1d182bb536ebe8e991e6a0f6ad.jpg new file mode 100644 index 0000000000000000000000000000000000000000..e82efe975106be52c4bac54a4786082a5e600223 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/e2e444cfa3527af494883e988cd0abd80b558f1d182bb536ebe8e991e6a0f6ad.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4356e696fccced3f3fa5c6089915103a1905423210f5e9bc7d4e422ca3155715 +size 21902 diff --git a/data/2025/2504_10xxx/2504.10514/images/e84813dd6436f2be3c2a5b1c9a618ed87b435b246a9f271093bc9aa695cd3f28.jpg b/data/2025/2504_10xxx/2504.10514/images/e84813dd6436f2be3c2a5b1c9a618ed87b435b246a9f271093bc9aa695cd3f28.jpg new file mode 100644 index 0000000000000000000000000000000000000000..85ed145aae0683f3463441810728cbffc24af834 --- /dev/null +++ 
b/data/2025/2504_10xxx/2504.10514/images/e84813dd6436f2be3c2a5b1c9a618ed87b435b246a9f271093bc9aa695cd3f28.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b2c6ff96899d1c2b201135f7159c06f27b2ff169feb59bf51b217284d92b206c +size 13113 diff --git a/data/2025/2504_10xxx/2504.10514/images/ebe28c76df70c5ce8ccb97d1d332bdbb848b826e49a2cb8661c134c846d09ceb.jpg b/data/2025/2504_10xxx/2504.10514/images/ebe28c76df70c5ce8ccb97d1d332bdbb848b826e49a2cb8661c134c846d09ceb.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f655be75dcda6d3d65f3fb885973d4aedf4928e4 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/ebe28c76df70c5ce8ccb97d1d332bdbb848b826e49a2cb8661c134c846d09ceb.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:18bdaa179d8bd02bad2880b3ec8c629ef882c12605557f4cc10331b877fff90c +size 3535 diff --git a/data/2025/2504_10xxx/2504.10514/images/f4c76d4b9d7ef0158cfd40e735ea81e99ebd5429c71e7497bd686b591ce393cb.jpg b/data/2025/2504_10xxx/2504.10514/images/f4c76d4b9d7ef0158cfd40e735ea81e99ebd5429c71e7497bd686b591ce393cb.jpg new file mode 100644 index 0000000000000000000000000000000000000000..7d295d6dd16b63da7cf878cf4d8ff4d24becae83 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/f4c76d4b9d7ef0158cfd40e735ea81e99ebd5429c71e7497bd686b591ce393cb.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c0f9607a07cdfd58b5f142b3b286960b7a4292886fd64e0021f03e5284a2038 +size 4622 diff --git a/data/2025/2504_10xxx/2504.10514/images/f4dea86aed5a3b69495e73a8418f4187c7d69c35973c70930d7fbeb813bebd7c.jpg b/data/2025/2504_10xxx/2504.10514/images/f4dea86aed5a3b69495e73a8418f4187c7d69c35973c70930d7fbeb813bebd7c.jpg new file mode 100644 index 0000000000000000000000000000000000000000..df97b70f2f0495439e8eee7624459fc16e960313 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/f4dea86aed5a3b69495e73a8418f4187c7d69c35973c70930d7fbeb813bebd7c.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:510eedef10d8b4a16ea3000587bc226df1016f9fa3fef3f549792cebbabd21ca +size 5119 diff --git a/data/2025/2504_10xxx/2504.10514/images/f5414f6db50b112cf0f92e69eacd6f077ea8fc62a22e614a4eb4b1939837c066.jpg b/data/2025/2504_10xxx/2504.10514/images/f5414f6db50b112cf0f92e69eacd6f077ea8fc62a22e614a4eb4b1939837c066.jpg new file mode 100644 index 0000000000000000000000000000000000000000..7318208fa7f036b03a1890ad3db14a113dc3d902 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/f5414f6db50b112cf0f92e69eacd6f077ea8fc62a22e614a4eb4b1939837c066.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:29cb0fc74ad3f7c42c7a0b573169bdc0e47c9b5fc2c733fe28dbf859b66a12fb +size 21694 diff --git a/data/2025/2504_10xxx/2504.10514/images/f57fbd9ffd01f21190facbf62662759bac7e341fb7bf692d83794e59d59daf9a.jpg b/data/2025/2504_10xxx/2504.10514/images/f57fbd9ffd01f21190facbf62662759bac7e341fb7bf692d83794e59d59daf9a.jpg new file mode 100644 index 0000000000000000000000000000000000000000..40333ac2e5c7456c789d00a2c8bc479767685b7e --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/f57fbd9ffd01f21190facbf62662759bac7e341fb7bf692d83794e59d59daf9a.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f624e619eac0925763cab34f417609c2a5cae5f5281e11c3702929c0c666387d +size 3402 diff --git a/data/2025/2504_10xxx/2504.10514/images/f6adbdd4e43b49dcc7349a16ff5fe996e8ccd0d596878b5fd99f8e3e39b2175d.jpg b/data/2025/2504_10xxx/2504.10514/images/f6adbdd4e43b49dcc7349a16ff5fe996e8ccd0d596878b5fd99f8e3e39b2175d.jpg new file mode 100644 index 0000000000000000000000000000000000000000..8c31253ba18da4a4a33186df99dc02f3c9e2f7cd --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/f6adbdd4e43b49dcc7349a16ff5fe996e8ccd0d596878b5fd99f8e3e39b2175d.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1ff45d0c8a20f5fa539f95cb77c95f5a19f2ecfcc3694a9f6827a578db565846 +size 1458 diff --git 
a/data/2025/2504_10xxx/2504.10514/images/f82a73987f92a766f8af284abe9be0ba82c2f30906bed00a890f765446a89b52.jpg b/data/2025/2504_10xxx/2504.10514/images/f82a73987f92a766f8af284abe9be0ba82c2f30906bed00a890f765446a89b52.jpg new file mode 100644 index 0000000000000000000000000000000000000000..dc718b798c3c38d812a37436f022f57ba7f827f3 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/f82a73987f92a766f8af284abe9be0ba82c2f30906bed00a890f765446a89b52.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dfd5bddfa2ff00631cf7f01496569cc0983c26e1668f9ade7b1737080edf4de0 +size 9358 diff --git a/data/2025/2504_10xxx/2504.10514/images/f8311e3191d139ac45e8ee7cb08317769455d589dbba3eb7439d3d777d7f5c25.jpg b/data/2025/2504_10xxx/2504.10514/images/f8311e3191d139ac45e8ee7cb08317769455d589dbba3eb7439d3d777d7f5c25.jpg new file mode 100644 index 0000000000000000000000000000000000000000..ad197517b4d8abed93f26149998551b571313d54 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/f8311e3191d139ac45e8ee7cb08317769455d589dbba3eb7439d3d777d7f5c25.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cf6fb1a05f44bd150f4e7463b700358accdd6e45ae5d2759148e81664e21b95f +size 5519 diff --git a/data/2025/2504_10xxx/2504.10514/images/fa210125aa3d22e54cb9811de70703cd5921bf9d29a5e7a01dd3a531b460f26c.jpg b/data/2025/2504_10xxx/2504.10514/images/fa210125aa3d22e54cb9811de70703cd5921bf9d29a5e7a01dd3a531b460f26c.jpg new file mode 100644 index 0000000000000000000000000000000000000000..dec026ecdeebfa903cb239c35c0fdb9949712b5f --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/fa210125aa3d22e54cb9811de70703cd5921bf9d29a5e7a01dd3a531b460f26c.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:66ad6c35285cea9e41c83b67394788eca764d1fc308e84f65c3cf67fe2205fcd +size 34415 diff --git a/data/2025/2504_10xxx/2504.10514/images/fab223acc9a737e5c5aab799bb97a9cdd4f68d9665b063bd7bf99c1fcdcd44bf.jpg 
b/data/2025/2504_10xxx/2504.10514/images/fab223acc9a737e5c5aab799bb97a9cdd4f68d9665b063bd7bf99c1fcdcd44bf.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4d1b047eba358709e489b40d8e32853a44f6fef9 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/fab223acc9a737e5c5aab799bb97a9cdd4f68d9665b063bd7bf99c1fcdcd44bf.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:178c8df9d017bd9462c26697f5dacc2bfc407dfd864a023306acd4fb3ae39c17 +size 35841 diff --git a/data/2025/2504_10xxx/2504.10514/images/faeba91a240c6b82491c233dd9f6e49603acf5777f5096058c1032864af951c7.jpg b/data/2025/2504_10xxx/2504.10514/images/faeba91a240c6b82491c233dd9f6e49603acf5777f5096058c1032864af951c7.jpg new file mode 100644 index 0000000000000000000000000000000000000000..751a8d14323c3aa0677ca158a361b8ad72ec0d2c --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/faeba91a240c6b82491c233dd9f6e49603acf5777f5096058c1032864af951c7.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2583c2ad721cbab24dfd468744fb69fbbfb224a630a8581eb52c411599980933 +size 33881 diff --git a/data/2025/2504_10xxx/2504.10514/images/fc2c39c683a70ab82616f0358b43de86e01a097eeb7cb95abedf274dd228cab8.jpg b/data/2025/2504_10xxx/2504.10514/images/fc2c39c683a70ab82616f0358b43de86e01a097eeb7cb95abedf274dd228cab8.jpg new file mode 100644 index 0000000000000000000000000000000000000000..31771266ac654aa812b8bff4b58dc885f64dfcaa --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/fc2c39c683a70ab82616f0358b43de86e01a097eeb7cb95abedf274dd228cab8.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4c58d0eaa83cb8bb848da358bcf40e7c3ac865259e89b5bfa86ef233804fcc43 +size 3180 diff --git a/data/2025/2504_10xxx/2504.10514/images/fdb4a842f5ab20016d34fb60569fa8554f488ee6c5170b4dd8d45b0dcbfa4292.jpg b/data/2025/2504_10xxx/2504.10514/images/fdb4a842f5ab20016d34fb60569fa8554f488ee6c5170b4dd8d45b0dcbfa4292.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..99bfbd3972bd659a536ec8905d1ed4ad1a7015ef --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/fdb4a842f5ab20016d34fb60569fa8554f488ee6c5170b4dd8d45b0dcbfa4292.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bfc773e0dd93a30dbb76d327e8c004ea5407ab66a16c78e27d40093c56266318 +size 3917 diff --git a/data/2025/2504_10xxx/2504.10514/images/ff99d6187976c17613409ec129ecc9a5a0daa2da9567d804333a6a093c05a78d.jpg b/data/2025/2504_10xxx/2504.10514/images/ff99d6187976c17613409ec129ecc9a5a0daa2da9567d804333a6a093c05a78d.jpg new file mode 100644 index 0000000000000000000000000000000000000000..22d3898ff1b881f769e2b1dade659f516fc6af66 --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/ff99d6187976c17613409ec129ecc9a5a0daa2da9567d804333a6a093c05a78d.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:79be8996a6d1af3f555b44ff4a6047f20a69129ad01bb1d8f6fe21731c12ebe0 +size 5751 diff --git a/data/2025/2504_10xxx/2504.10514/images/ffe4ed10afdb9bd97b47bb446b3526534aa50d91ef4e52855cb85f7758e83f19.jpg b/data/2025/2504_10xxx/2504.10514/images/ffe4ed10afdb9bd97b47bb446b3526534aa50d91ef4e52855cb85f7758e83f19.jpg new file mode 100644 index 0000000000000000000000000000000000000000..3aa42f0fc774cdeb620bca72b15ee14788466d8e --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/images/ffe4ed10afdb9bd97b47bb446b3526534aa50d91ef4e52855cb85f7758e83f19.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ad595ba34dc8d857df629aa97241272362bf5a258dbce8601d2430976e0a6b7f +size 4371 diff --git a/data/2025/2504_10xxx/2504.10514/layout.json b/data/2025/2504_10xxx/2504.10514/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..3c7170b8b449775d51132928f968664348085dbe --- /dev/null +++ b/data/2025/2504_10xxx/2504.10514/layout.json @@ -0,0 +1,40377 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 112, + 97, + 500, + 158 + ], + "type": "title", + 
"angle": 0, + "lines": [ + { + "bbox": [ + 112, + 97, + 500, + 158 + ], + "spans": [ + { + "bbox": [ + 112, + 97, + 500, + 158 + ], + "type": "text", + "content": "COLORBENCH: Can VLMs See and Understand the Colorful World? A Comprehensive Benchmark for Color Perception, Reasoning, and Robustness" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 141, + 198, + 468, + 220 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 141, + 198, + 468, + 220 + ], + "spans": [ + { + "bbox": [ + 141, + 198, + 468, + 220 + ], + "type": "text", + "content": "Yijun Liang\\*, Ming Li\\*, Chenrui Fan, Ziyue Li, Dang Nguyen, Kwesi Cobbina Shweta Bhardwaj, Jiuhai Chen, Fuxiao Liu, Tianyi Zhou" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 230, + 221, + 381, + 232 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 230, + 221, + 381, + 232 + ], + "spans": [ + { + "bbox": [ + 230, + 221, + 381, + 232 + ], + "type": "text", + "content": "University of Maryland, College Park" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 217, + 233, + 394, + 243 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 217, + 233, + 394, + 243 + ], + "spans": [ + { + "bbox": [ + 217, + 233, + 394, + 243 + ], + "type": "text", + "content": "{yliang17,minglii,tianyi}@umd.edu" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 181, + 243, + 429, + 255 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 181, + 243, + 429, + 255 + ], + "spans": [ + { + "bbox": [ + 181, + 243, + 429, + 255 + ], + "type": "text", + "content": "Project: https://github.com/tianyi-lab/ColorBench" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 281, + 283, + 329, + 295 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 281, + 283, + 329, + 295 + ], + "spans": [ + { + "bbox": [ + 281, + 283, + 329, + 295 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 140, + 306, + 470, + 525 + ], + 
"type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 140, + 306, + 470, + 525 + ], + "spans": [ + { + "bbox": [ + 140, + 306, + 470, + 525 + ], + "type": "text", + "content": "Color plays an important role in human perception and usually provides critical clues in visual reasoning. However, it is unclear whether and how vision-language models (VLMs) can perceive, understand, and leverage color as humans. This paper introduces \"COLORBENCH\", an innovative benchmark meticulously crafted to assess the capabilities of VLMs in color understanding, including color perception, reasoning, and robustness. By curating a suite of diverse test scenarios, with grounding in real applications, COLORBENCH evaluates how these models perceive colors, infer meanings from color-based cues, and maintain consistent performance under varying color transformations. Through an extensive evaluation of 32 VLMs with varying language models and vision encoders, our paper reveals some undiscovered findings: (i) The scaling law (larger models are better) still holds on COLORBENCH, while the language model plays a more important role than the vision encoder. (ii) However, the performance gaps across models are relatively small, indicating that color understanding has been largely neglected by existing VLMs. (iii) CoT reasoning improves color understanding accuracies and robustness, though they are vision-centric tasks. (iv) Color clues are indeed leveraged by VLMs on COLORBENCH but they can also mislead models in some tasks. These findings highlight the critical limitations of current VLMs and underscore the need to enhance color comprehension. Our COLORBENCH can serve as a foundational tool for advancing the study of human-level color understanding of multimodal AI." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 532, + 192, + 544 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 532, + 192, + 544 + ], + "spans": [ + { + "bbox": [ + 105, + 532, + 192, + 544 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 555, + 506, + 698 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 555, + 506, + 698 + ], + "spans": [ + { + "bbox": [ + 104, + 555, + 506, + 698 + ], + "type": "text", + "content": "Color is widely recognized as a fundamental component of human visual perception [11, 34], playing a critical role and providing critical clues in object detection, scene interpretation, contextual understanding, planning, etc., across critical application scenarios such as scientific discovery, medical care, remote sensing, shopping, visualization, artwork interpretation, etc. For instance, [19] leverages spectral color signatures to distinguish vegetation, health, and water bodies in satellite imagery, and [1] utilizes sediment color patterns to detect marine ecosystems. These applications underscore how color-driven features play an important role in real-world scenarios. Moreover, colors can convey affective or semantic information beyond simply recognizing and naming colors since colors are highly correlated to other attributes or concepts and thus can provide key information to various downstream tasks that do not even directly ask about colors [18, 37, 45]. As modern vision-language models (VLMs) [12, 41, 48] continue to be deployed to increasingly diverse scenarios, color—an essential visual feature—plays a growing role in the processes of understanding and reasoning. 
It is essential to examine whether and how these models can understand and leverage color information" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 14, + 221, + 35, + 567 + ], + "type": "aside_text", + "angle": 270, + "lines": [ + { + "bbox": [ + 14, + 221, + 35, + 567 + ], + "spans": [ + { + "bbox": [ + 14, + 221, + 35, + 567 + ], + "type": "text", + "content": "arXiv:2504.10514v3 [cs.CV] 8 Nov 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 116, + 704, + 293, + 715 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 116, + 704, + 293, + 715 + ], + "spans": [ + { + "bbox": [ + 116, + 704, + 293, + 715 + ], + "type": "text", + "content": "*These authors contributed equally to this work." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 731, + 506, + 742 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 731, + 506, + 742 + ], + "spans": [ + { + "bbox": [ + 105, + 731, + 506, + 742 + ], + "type": "text", + "content": "39th Conference on Neural Information Processing Systems (NeurIPS 2025) Track on Datasets and Benchmarks." + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 106, + 70, + 195, + 338 + ], + "blocks": [ + { + "bbox": [ + 106, + 70, + 195, + 338 + ], + "lines": [ + { + "bbox": [ + 106, + 70, + 195, + 338 + ], + "spans": [ + { + "bbox": [ + 106, + 70, + 195, + 338 + ], + "type": "image", + "image_path": "8279797222a7f9ff129da461aa82b23fd1a408942d36c4408bd9d1f52ac16a78.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 343, + 506, + 396 + ], + "lines": [ + { + "bbox": [ + 104, + 343, + 506, + 396 + ], + "spans": [ + { + "bbox": [ + 104, + 343, + 506, + 396 + ], + "type": "text", + "content": "Figure 1: Test samples from COLORBENCH. 
COLORBENCH evaluates VLMs across three core capabilities: Perception, Reasoning and Robustness. The benchmark comprises 11 tasks designed to assess fine-grained color understanding abilities and the effect of color on other reasoning skills, including counting, proportion calculation, and robustness estimation. With over 1,400 instances, COLORBENCH covers a wide range of real-world application scenarios, including painting analysis, test kit readings, shopping, satellite/wildlife image analysis, etc." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 198, + 70, + 411, + 338 + ], + "blocks": [ + { + "bbox": [ + 198, + 70, + 411, + 338 + ], + "lines": [ + { + "bbox": [ + 198, + 70, + 411, + 338 + ], + "spans": [ + { + "bbox": [ + 198, + 70, + 411, + 338 + ], + "type": "image", + "image_path": "62255370c80cc1ec826a893befaf91071bf2e821de60302188c5691ca72d3a70.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 416, + 70, + 496, + 338 + ], + "blocks": [ + { + "bbox": [ + 416, + 70, + 496, + 338 + ], + "lines": [ + { + "bbox": [ + 416, + 70, + 496, + 338 + ], + "spans": [ + { + "bbox": [ + 416, + 70, + 496, + 338 + ], + "type": "image", + "image_path": "afe37da8b79d3de1c08005a13422fd9bd97e612a82e905ce643e337d2059ccb3.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 399, + 504, + 434 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 399, + 504, + 434 + ], + "spans": [ + { + "bbox": [ + 104, + 399, + 504, + 434 + ], + "type": "text", + "content": "as in human perception and reasoning, how color influences their overall perceptual and reasoning capabilities, and whether they can interpret visual illusions, resolve ambiguous cues, and maintain reliable performance under color variations." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 437, + 506, + 548 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 437, + 506, + 548 + ], + "spans": [ + { + "bbox": [ + 104, + 437, + 506, + 548 + ], + "type": "text", + "content": "However, existing benchmarks for VLMs mainly focus on tasks that may not heavily depend on color understanding or require color-centric reasoning, thereby overlooking nuanced color-related factors [25, 29]. Hence, there is a lack of benchmarks that systematically assess how well VLMs understand color when it serves as the main or distinguishing feature of a scene and key information to a task. Moreover, robustness to variations in color, such as recoloring and shifting hues, has also been largely neglected in the LLM era [6, 8, 20]. Consequently, it remains unclear whether VLMs can perceive and reason about color with human-like proficiency and to what extent their performance deteriorates under significant color perturbations. This shortfall underscores the need for a dedicated benchmark that comprehensively probes various facets of color comprehension in VLMs. A detailed discussion of related works is provided in Appendix A." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 552, + 506, + 674 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 552, + 506, + 674 + ], + "spans": [ + { + "bbox": [ + 104, + 552, + 506, + 674 + ], + "type": "text", + "content": "To bridge this gap, we propose a novel benchmark, COLORBENCH, that aims at comprehensively evaluating VLMs on three core capabilities of color understanding: Color Perception, Color Reasoning, and Color Robustness. Color Perception examines VLMs' fundamental capability to correctly detect and interpret colors from inputs. 
Color Reasoning refers to the reasoning skills to draw further conclusions based on the understanding of colors from input and prior knowledge, in which colors act as a crucial clue to formulate accurate judgments. Color Robustness assesses how consistently VLMs perform when an image's colors are altered, ensuring they maintain accurate predictions across different color variants of an image. Under these three core dimensions, 11 fine-grained tasks assessing different aspects of color understanding capabilities are formulated as shown in Figure 1, which not only shows test examples in COLORBENCH but also presents potential real-world applications." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 677, + 506, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 677, + 506, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 677, + 506, + 723 + ], + "type": "text", + "content": "By focusing on these facets, COLORBENCH offers a granular view of VLMs' capabilities in color understanding, aiming to illuminate both their strengths and shortcomings. We evaluate 32 widely used VLMs in our benchmark, ranging from open-source to proprietary models, from relatively small models (0.5B) to larger models (78B), and obtain some unrevealed observations." + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 72, + 504, + 139 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 72, + 504, + 139 + ], + "spans": [ + { + "bbox": [ + 104, + 72, + 504, + 139 + ], + "type": "text", + "content": "Main Contribution. 
We introduce \"COLORBENCH\", the first dedicated benchmark for assessing the color perception, reasoning, and robustness of VLMs. We develop an evaluation suite for 11 color-centric tasks, covering diverse application scenarios and practical challenges. Moreover, we report a fine-grained empirical evaluation of 32 state-of-the-art VLMs, which exposes their limitations in color understanding and offers novel insights for future research. Our key findings are highlighted in the following:" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 127, + 148, + 504, + 293 + ], + "type": "list", + "angle": 0, + "index": 5, + "blocks": [ + { + "bbox": [ + 129, + 148, + 504, + 180 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 148, + 504, + 180 + ], + "spans": [ + { + "bbox": [ + 129, + 148, + 504, + 180 + ], + "type": "text", + "content": "1. The scaling law still holds for color understanding but is much weaker and mainly depends on the language model parts. The correlation between the performance and the vision encoder's size is not significant due to the limited choices in current VLMs." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 127, + 185, + 504, + 219 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 127, + 185, + 504, + 219 + ], + "spans": [ + { + "bbox": [ + 127, + 185, + 504, + 219 + ], + "type": "text", + "content": "2. The absolute performances of different VLMs are relatively low, and the gaps between different models (open-source vs. proprietary, small vs. large) are not large, indicating the challenges of COLORBENCH and the negligence of color understanding in existing VLMs." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 127, + 222, + 504, + 255 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 127, + 222, + 504, + 255 + ], + "spans": [ + { + "bbox": [ + 127, + 222, + 504, + 255 + ], + "type": "text", + "content": "3. 
Despite the weaknesses of VLMs on color understanding, adding reasoning steps can still improve their performance on COLORBENCH tasks, even for color robustness, which has not been investigated by the community." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 127, + 259, + 504, + 293 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 127, + 259, + 504, + 293 + ], + "spans": [ + { + "bbox": [ + 127, + 259, + 504, + 293 + ], + "type": "text", + "content": "4. Color clues are indeed leveraged more or less by VLMs in most of the tasks in COLOR-BENCH. However, in color illusion and mimicry tasks, colors might mislead VLMs to give wrong answers, and converting colorful images into grayscale can improve the accuracy." + } + ] + } + ], + "index": 4 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 105, + 308, + 274, + 321 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 308, + 274, + 321 + ], + "spans": [ + { + "bbox": [ + 105, + 308, + 274, + 321 + ], + "type": "text", + "content": "2 COLORBENCH Construction" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 333, + 306, + 509 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 333, + 306, + 509 + ], + "spans": [ + { + "bbox": [ + 104, + 333, + 306, + 509 + ], + "type": "text", + "content": "We present COLORBENCH, the first benchmark explicitly designed to comprehensively evaluate the color understanding capabilities of VLMs across three key dimensions: Color Perception, Color Reasoning, and Color Robustness. This benchmark consists of 1,448 instances and 5,814 image-text questions spanning 11 diverse tasks. For the Color Perception and Color Reasoning categories, each instance contains an image, a question, and multiple-choice (3 to 6) options, with only one correct answer. For Color Robustness, each instance consists of 10 multiple-choice image-text questions, including a seed image and 9 edited images with color changes. 
Given that color is a fundamental visual feature influencing most vision-related tasks, disentangling color under" + } + ] + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 311, + 327, + 504, + 475 + ], + "blocks": [ + { + "bbox": [ + 311, + 327, + 504, + 475 + ], + "lines": [ + { + "bbox": [ + 311, + 327, + 504, + 475 + ], + "spans": [ + { + "bbox": [ + 311, + 327, + 504, + 475 + ], + "type": "image", + "image_path": "a8629b08764230a78d2ec89a49fcfb6ca0d216b62038d6980111f243799ccd7d.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 310, + 479, + 504, + 502 + ], + "lines": [ + { + "bbox": [ + 310, + 479, + 504, + 502 + ], + "spans": [ + { + "bbox": [ + 310, + 479, + 504, + 502 + ], + "type": "text", + "content": "Figure 2: Statistics of 3 categories and 11 tasks in COLORBENCH." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 509, + 506, + 542 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 509, + 506, + 542 + ], + "spans": [ + { + "bbox": [ + 104, + 509, + 506, + 542 + ], + "type": "text", + "content": "standing from other general capabilities (e.g., object recognition, counting) is challenging. To address this, we design questions with explicit color constraints for Color Perception and Reasoning dimensions, enabling a focused evaluation of VLMs' perception and reasoning abilities in relation to color." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 555, + 175, + 567 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 555, + 175, + 567 + ], + "spans": [ + { + "bbox": [ + 105, + 555, + 175, + 567 + ], + "type": "text", + "content": "2.1 Taxonomy" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 575, + 506, + 609 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 575, + 506, + 609 + ], + "spans": [ + { + "bbox": [ + 104, + 575, + 506, + 609 + ], + "type": "text", + "content": "Motivated by the existing evaluation criteria from prior benchmarks and real-world application scenarios, we categorize the color understanding capability into 3 core dimensions and 11 detailed axes, as shown in Figure 1. The detailed question templates and sample cases are shown in Appendix D." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 620, + 211, + 633 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 620, + 211, + 633 + ], + "spans": [ + { + "bbox": [ + 105, + 620, + 211, + 633 + ], + "type": "text", + "content": "2.1.1 Color Perception" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 639, + 505, + 673 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 639, + 505, + 673 + ], + "spans": [ + { + "bbox": [ + 104, + 639, + 505, + 673 + ], + "type": "text", + "content": "This core dimension refers to the fundamental capability to correctly detect and interpret colors from inputs. We assess this capability through 3 key aspects: i) Color Recognition, ii) Color Extraction, and iii) Object Recognition." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 104, + 677, + 504, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 677, + 504, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 677, + 504, + 723 + ], + "type": "text", + "content": "Color Recognition includes questions that either ask for the color of a given object or determine whether a specific color is present in the image. Color Extraction requires the model to extract the color code (e.g., RGB, HSV, or HEX) of a given single-color image. This task measures the ability to perform fine-grained color retrieval from visual input. Object Recognition evaluates the" + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 72, + 504, + 97 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 72, + 504, + 97 + ], + "spans": [ + { + "bbox": [ + 104, + 72, + 504, + 97 + ], + "type": "text", + "content": "model's capability to identify objects that match a specified color described in the text input. These two tasks require VLMs to be able to detect and interpret the color in either the image or text input." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 105, + 106, + 211, + 118 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 106, + 211, + 118 + ], + "spans": [ + { + "bbox": [ + 105, + 106, + 211, + 118 + ], + "type": "text", + "content": "2.1.2 Color Reasoning" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 124, + 506, + 170 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 124, + 506, + 170 + ], + "spans": [ + { + "bbox": [ + 104, + 124, + 506, + 170 + ], + "type": "text", + "content": "This dimension refers to the reasoning skills needed to draw further conclusions based on the understanding of colors from input and prior knowledge, in which colors act as a crucial clue to formulate accurate judgments. This category encapsulates 7 key aspects: i) Color Proportion, ii) Color Comparison, iii) Color Counting, iv) Object Counting, v) Color Illusion, vi) Color Mimicry, and vii) Color Blindness." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 173, + 506, + 360 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 173, + 506, + 360 + ], + "spans": [ + { + "bbox": [ + 104, + 173, + 506, + 360 + ], + "type": "text", + "content": "Color Proportion tests the model's capability to estimate the relative area occupied by a specific color. Questions in this task require both color perception and proportion calculation capabilities. Color Comparison requires the model to be able to distinguish among multiple colors in the image, assessing its sensitivity to hue, saturation, and brightness differences in visual input. Color Counting focuses on identifying the number of unique colors in the image, evaluating the model's perception and differentiation of distinct color variations, and counting ability. Object Counting extends this challenge by requiring the model to count objects that match a specific color pattern. 
This task requires an integration of object recognition and color perception. Color Illusion questions ask VLMs to compare colors in potentially illusory environments. This task evaluates the model's ability to account for color-induced optical illusions. Color Mimicry challenges the model to detect objects camouflaged within their surroundings, where color serves as a misleading factor, requiring advanced pattern recognition and contextual reasoning. These two tasks both assess the model's ability to make correct predictions when misled by color-related information in visual input. Color Blindness, inspired by Ishihara tests, assesses the model's ability to recognize numbers or text embedded in color patterns, testing its understanding of shape-color relationships. These 7 tasks comprehensively assess the model's capacity for logical reasoning, spatial awareness, and adaptive interpretation of color-based visual cues." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 369, + 214, + 381 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 369, + 214, + 381 + ], + "spans": [ + { + "bbox": [ + 105, + 369, + 214, + 381 + ], + "type": "text", + "content": "2.1.3 Color Robustness" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 388, + 298, + 553 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 388, + 298, + 553 + ], + "spans": [ + { + "bbox": [ + 104, + 388, + 298, + 553 + ], + "type": "text", + "content": "Color Robustness assesses how consistently VLMs perform and whether they can consistently deliver accurate predictions under color variants of a given image. It involves measuring the stability of a VLM's responses when confronted with the same text input and a series of recolored images. To ensure that color does not influence the predictions, we select questions and corresponding answers that are independent of color attributes. 
Under these conditions, a robust model should produce unchanged predictions regardless of the recoloring manipulation. Any variation in the model's responses is then used to quantify its susceptibility to color changes, providing a direct measure of robustness." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 565, + 194, + 576 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 565, + 194, + 576 + ], + "spans": [ + { + "bbox": [ + 105, + 565, + 194, + 576 + ], + "type": "text", + "content": "2.2 Data Curation" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 585, + 298, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 585, + 298, + 651 + ], + "spans": [ + { + "bbox": [ + 104, + 585, + 298, + 651 + ], + "type": "text", + "content": "For most of the tasks in the category of Color Perception and Color Reasoning, we rely on human experts to manually collect images from multiple online benchmarks and websites. For the Color Proportion task, to ensure the correctness of the ground truth, an extra color extrac" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 651, + 506, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 651, + 506, + 696 + ], + "spans": [ + { + "bbox": [ + 104, + 651, + 506, + 696 + ], + "type": "text", + "content": "tion tool is first used to obtain the color histogram of the image. Questions and options are then manually designed based on these color statistics. For tasks including Color Extraction, Color Blindness, and Color Illusion, testing images are generated by dedicated programs to ensure control over the questions and answers. The detailed data sources are shown in Appendix B." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 700, + 505, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 700, + 505, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 700, + 505, + 723 + ], + "type": "text", + "content": "After the initial data is collected, additional filtering is conducted through an interactive human-machine process. We first conduct inference on a variety of VLMs and discard low-quality samples" + } + ] + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 314, + 378, + 496, + 560 + ], + "blocks": [ + { + "bbox": [ + 314, + 378, + 496, + 560 + ], + "lines": [ + { + "bbox": [ + 314, + 378, + 496, + 560 + ], + "spans": [ + { + "bbox": [ + 314, + 378, + 496, + 560 + ], + "type": "image", + "image_path": "fab223acc9a737e5c5aab799bb97a9cdd4f68d9665b063bd7bf99c1fcdcd44bf.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 567, + 506, + 646 + ], + "lines": [ + { + "bbox": [ + 302, + 567, + 506, + 646 + ], + "spans": [ + { + "bbox": [ + 302, + 567, + 506, + 646 + ], + "type": "text", + "content": "Figure 3: Generation Pipeline for Color Robustness. For each seed image, we apply 3 recoloring strategies (Entire Image, Target Segment, Largest Segment) to generate edited images. 
For each strategy, we change the color of the recoloring region by shifting the Hue value by " + }, + { + "bbox": [ + 302, + 567, + 506, + 646 + ], + "type": "inline_equation", + "content": "90^{\\circ}" + }, + { + "bbox": [ + 302, + 567, + 506, + 646 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 302, + 567, + 506, + 646 + ], + "type": "inline_equation", + "content": "180^{\\circ}" + }, + { + "bbox": [ + 302, + 567, + 506, + 646 + ], + "type": "text", + "content": ", or " + }, + { + "bbox": [ + 302, + 567, + 506, + 646 + ], + "type": "inline_equation", + "content": "270^{\\circ}" + }, + { + "bbox": [ + 302, + 567, + 506, + 646 + ], + "type": "text", + "content": " in HSV color space." + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 301, + 741, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 741, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 301, + 741, + 309, + 750 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "spans": [ + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "type": "text", + "content": "based on the GPT-4o prediction result and human evaluation. For synthesized data, similar processes are conducted, but with additional code (for generation) and image assessment. The above process is conducted in three rounds before the final benchmark instances are settled. This refinement process makes COLORBENCH a rigorous and informative benchmark for assessing color-related understanding." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 121, + 506, + 264 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 121, + 506, + 264 + ], + "spans": [ + { + "bbox": [ + 104, + 121, + 506, + 264 + ], + "type": "text", + "content": "For Color Robustness, we create evaluation instances by modifying images or specific regions through color changes. We define 3 recoloring strategies to determine the recoloring region: i) Entire Image, where the whole image is recolored; ii) Target Segment, where only the segment relevant to the question is altered; and iii) Largest Segment, where the largest region unrelated to the question is modified. Further details can be found in Appendix C. To generate color variants, we derive seed images from CV-Bench [42], a publicly available benchmark. For each seed image, as shown in Figure 3, we first employ a Grounded Segmentation Model (GAM) [38] to extract segments and their corresponding labels. We then apply the predefined recoloring strategies to determine the editing region and perform recoloring by shifting the Hue value in the HSV color space at three levels to cover the entire color wheel: " + }, + { + "bbox": [ + 104, + 121, + 506, + 264 + ], + "type": "inline_equation", + "content": "(90^{\\circ}, 180^{\\circ}," + }, + { + "bbox": [ + 104, + 121, + 506, + 264 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 121, + 506, + 264 + ], + "type": "inline_equation", + "content": "270^{\\circ})" + }, + { + "bbox": [ + 104, + 121, + 506, + 264 + ], + "type": "text", + "content": ". This process produces 9 variations per seed image, covering different strategies and degrees of color change to enable a comprehensive robustness assessment. To ensure interpretability, human experts filter out unnatural or negligible modifications, resulting in a final selection of 493 seed images for robustness evaluation." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 276, + 212, + 287 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 276, + 212, + 287 + ], + "spans": [ + { + "bbox": [ + 105, + 276, + 212, + 287 + ], + "type": "text", + "content": "2.3 Evaluation Metrics" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 297, + 504, + 331 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 297, + 504, + 331 + ], + "spans": [ + { + "bbox": [ + 104, + 297, + 504, + 331 + ], + "type": "text", + "content": "For Perception and Reasoning, we use accuracy as the evaluation metric, as all tasks follow a multiple-choice format. Accuracy is computed per task and per category, representing the proportion of correctly answered questions." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "spans": [ + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "text", + "content": "For Robustness, we evaluate a model's ability to maintain consistently accurate predictions under color variations. As detailed in Section 2.2, each seed image " + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "inline_equation", + "content": "I_{s}" + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "text", + "content": " is transformed into " + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "text", + "content": " recolored variants using the recoloring strategies, while keeping the original question " + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "inline_equation", + "content": "q" + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "text", + "content": " unchanged. 
A model " + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "inline_equation", + "content": "\\mathcal{M}" + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "text", + "content": " is considered robust on a seed image " + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "inline_equation", + "content": "I_{s}" + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "text", + "content": " and corresponding question " + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "inline_equation", + "content": "q" + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "text", + "content": " if and only if it provides a correct prediction for " + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "inline_equation", + "content": "I_{s}" + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "text", + "content": " and remains correct on all " + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "text", + "content": " recolored versions. To quantify robustness, we define the instance-level robustness metric " + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "inline_equation", + "content": "R(I_s,q)\\in \\{0,1\\}" + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "text", + "content": " and a model-level robustness metric " + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "inline_equation", + "content": "Robust_{\\mathcal{M}}\\in [0,1]" + }, + { + "bbox": [ + 104, + 335, + 505, + 401 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "spans": [ + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "text", + "content": "Instance-level Robustness. 
Let the recolored images be " + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "inline_equation", + "content": "I_1, \\dots, I_n" + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "text", + "content": " and let the output of the model for image " + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "inline_equation", + "content": "I_i" + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "text", + "content": " and question " + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "inline_equation", + "content": "q" + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "text", + "content": " be " + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "inline_equation", + "content": "\\mathcal{M}(I_i, q)" + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "text", + "content": ". Define " + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "inline_equation", + "content": "c(\\mathcal{M}(I_i, q))" + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "text", + "content": " as the model correctness: " + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "inline_equation", + "content": "c(\\mathcal{M}(I_i, q)) = 1" + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "text", + "content": " if the model output " + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "inline_equation", + "content": "\\mathcal{M}(I_i, q)" + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "text", + "content": " is correct, and 0 otherwise. 
The instance-level robustness metric " + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "inline_equation", + "content": "R(I_s, q)" + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "text", + "content": " for a seed image " + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "inline_equation", + "content": "I_s" + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "text", + "content": " and question " + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "inline_equation", + "content": "q" + }, + { + "bbox": [ + 104, + 404, + 504, + 446 + ], + "type": "text", + "content": " is defined as:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 189, + 449, + 505, + 479 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 189, + 449, + 505, + 479 + ], + "spans": [ + { + "bbox": [ + 189, + 449, + 505, + 479 + ], + "type": "interline_equation", + "content": "R(I_s, q) = \\begin{cases} 1 & \\text{if } c(\\mathcal{M}(I_i, q)) = c(\\mathcal{M}(I_s, q)) = 1, \\; \\forall i \\in [n] \\\\ 0 & \\text{otherwise} \\end{cases} \\tag{1}", + "image_path": "d12f3d56e223e8c4c4ffd1e4211bba0e65511b1dd1838ddb833ecb814d0e653a.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 486, + 465, + 497 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 486, + 465, + 497 + ], + "spans": [ + { + "bbox": [ + 104, + 486, + 465, + 497 + ], + "type": "text", + "content": "Overall Robustness. Let " + }, + { + "bbox": [ + 104, + 486, + 465, + 497 + ], + "type": "inline_equation", + "content": "\\mathcal{S}" + }, + { + "bbox": [ + 104, + 486, + 465, + 497 + ], + "type": "text", + "content": " be the set of seed images. 
We define model robustness to be:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 209, + 500, + 504, + 527 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 209, + 500, + 504, + 527 + ], + "spans": [ + { + "bbox": [ + 209, + 500, + 504, + 527 + ], + "type": "interline_equation", + "content": "\\operatorname{Robust}_{\\mathcal{M}} = \\frac{\\sum_{I_s \\in \\mathcal{S}} R(I_s, q)}{|\\mathcal{S}|}, \\quad \\operatorname{Robust}_{\\mathcal{M}} \\in [0, 1] \\tag{2}", + "image_path": "d3159d13d3adc9a24ba185559b6a755ba073de0943e39df62692520911738dd4.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 530, + 504, + 553 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 530, + 504, + 553 + ], + "spans": [ + { + "bbox": [ + 104, + 530, + 504, + 553 + ], + "type": "text", + "content": "Robust" + }, + { + "bbox": [ + 104, + 530, + 504, + 553 + ], + "type": "inline_equation", + "content": "_{\\mathcal{M}}" + }, + { + "bbox": [ + 104, + 530, + 504, + 553 + ], + "type": "text", + "content": " represents the proportion of seed images on which the model maintains correctness across all color variations. A model is more robust when Robust" + }, + { + "bbox": [ + 104, + 530, + 504, + 553 + ], + "type": "inline_equation", + "content": "_{\\mathcal{M}}" + }, + { + "bbox": [ + 104, + 530, + 504, + 553 + ], + "type": "text", + "content": " is higher." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 567, + 236, + 581 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 567, + 236, + 581 + ], + "spans": [ + { + "bbox": [ + 105, + 567, + 236, + 581 + ], + "type": "text", + "content": "3 Experimental Results" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 591, + 187, + 602 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 591, + 187, + 602 + ], + "spans": [ + { + "bbox": [ + 105, + 591, + 187, + 602 + ], + "type": "text", + "content": "3.1 Main Results" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 612, + 504, + 667 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 612, + 504, + 667 + ], + "spans": [ + { + "bbox": [ + 104, + 612, + 504, + 667 + ], + "type": "text", + "content": "Table 1 presents the performances of a wide range of VLMs, along with human evaluation results on our COLORBENCH. Human participants achieve the highest performance on all evaluated tasks, outperforming every model. Among the models, overall accuracy generally increases with model size: larger models tend to outperform smaller ones, and the two proprietary models, GPT-4o and Gemini-2-flash, perform the best" + }, + { + "bbox": [ + 104, + 612, + 504, + 667 + ], + "type": "inline_equation", + "content": "^2" + }, + { + "bbox": [ + 104, + 612, + 504, + 667 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 671, + 506, + 694 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 671, + 506, + 694 + ], + "spans": [ + { + "bbox": [ + 104, + 671, + 506, + 694 + ], + "type": "text", + "content": "Color Perception. 
In Color Recognition (C'Recog), most models perform well (above " + }, + { + "bbox": [ + 104, + 671, + 506, + 694 + ], + "type": "inline_equation", + "content": "60\\%" + }, + { + "bbox": [ + 104, + 671, + 506, + 694 + ], + "type": "text", + "content": "), indicating that this task is relatively basic for color perception. Gemini-2 with CoT obtains the" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 104, + 700, + 504, + 723 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 700, + 504, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 700, + 504, + 723 + ], + "type": "text", + "content": "To examine the upper limits of VLM capabilities and benchmark against human-level performance, we also assess the performance of GPT-o3 on perception and reasoning tasks. The result is shown in Appendix H." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 106, + 133, + 504, + 459 + ], + "blocks": [ + { + "bbox": [ + 105, + 77, + 506, + 133 + ], + "lines": [ + { + "bbox": [ + 105, + 77, + 506, + 133 + ], + "spans": [ + { + "bbox": [ + 105, + 77, + 506, + 133 + ], + "type": "text", + "content": "Table 1: Performance of 32 VLMs (grouped by size) and human performance on COLORBENCH. Models are ranked within each group according to their overall performance on Color Perception and Reasoning (P & R Overall) tasks. For human evaluation, the Color Extraction task is excluded, as humans are not attuned to precise color code differences. The best performance in each VLM group is highlighted in bold. 
For human evaluation, any instance surpassing all VLMs is marked in bold." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 106, + 133, + 504, + 459 + ], + "lines": [ + { + "bbox": [ + 106, + 133, + 504, + 459 + ], + "spans": [ + { + "bbox": [ + 106, + 133, + 504, + 459 + ], + "type": "table", + "html": "
<table><tr><td></td><td colspan=\"3\">Color Perception</td><td colspan=\"7\">Color Reasoning</td><td>P &amp; R</td><td>Robustness</td></tr>
<tr><td>Model</td><td>C'Recog</td><td>C'Extract</td><td>O'Recog</td><td>C'Prop</td><td>C'Comp</td><td>C'Count</td><td>O'Count</td><td>C'Illu</td><td>C'Mimic</td><td>C'Blind</td><td>Overall</td><td>C'Robust</td></tr>
<tr><td colspan=\"13\">VLMs: &lt; 7B</td></tr>
<tr><td>LLaVA-OV-0.5B</td><td>26.3</td><td>44.8</td><td>46.8</td><td>30.0</td><td>23.8</td><td>22.6</td><td>21.4</td><td>38.7</td><td>58.6</td><td>26.8</td><td>32.6</td><td>38.7</td></tr>
<tr><td>InternVL2-1B</td><td>35.5</td><td>34.4</td><td>59.7</td><td>23.8</td><td>41.6</td><td>19.6</td><td>22.3</td><td>34.4</td><td>38.6</td><td>33.1</td><td>33.6</td><td>39.4</td></tr>
<tr><td>InternVL2-2B</td><td>60.5</td><td>36.5</td><td>66.2</td><td>40.0</td><td>38.6</td><td>19.6</td><td>29.1</td><td>26.9</td><td>52.9</td><td>21.0</td><td>36.4</td><td>54.2</td></tr>
<tr><td>InternVL2.5-1B</td><td>55.3</td><td>36.5</td><td>61.0</td><td>42.5</td><td>45.5</td><td>22.6</td><td>25.2</td><td>43.0</td><td>41.4</td><td>28.0</td><td>38.3</td><td>52.3</td></tr>
<tr><td>InternVL2.5-2B</td><td>69.7</td><td>28.1</td><td>71.4</td><td>33.8</td><td>48.5</td><td>25.5</td><td>30.1</td><td>32.3</td><td>55.7</td><td>19.8</td><td>38.5</td><td>59.8</td></tr>
<tr><td>Qwen2.5-VL-3B</td><td>72.4</td><td>38.5</td><td>74.0</td><td>43.8</td><td>48.5</td><td>22.6</td><td>25.2</td><td>43.0</td><td>45.7</td><td>24.2</td><td>41.1</td><td>63.7</td></tr>
<tr><td>Cambrian-3B</td><td>67.1</td><td>31.3</td><td>66.2</td><td>47.5</td><td>50.5</td><td>25.5</td><td>29.1</td><td>44.1</td><td>61.4</td><td>22.3</td><td>41.5</td><td>59.0</td></tr>
<tr><td colspan=\"13\">VLMs: 7B - 8B</td></tr>
<tr><td>LLaVA-Next-v-7B</td><td>29.0</td><td>38.5</td><td>57.1</td><td>21.3</td><td>34.7</td><td>23.5</td><td>25.2</td><td>38.7</td><td>41.4</td><td>17.8</td><td>31.2</td><td>52.1</td></tr>
<tr><td>LLaVA-Next-m-7B</td><td>21.1</td><td>18.8</td><td>63.6</td><td>27.5</td><td>42.6</td><td>16.7</td><td>34.0</td><td>41.9</td><td>47.1</td><td>29.9</td><td>33.4</td><td>55.2</td></tr>
<tr><td>Eagle-X5-7B</td><td>52.6</td><td>47.9</td><td>67.5</td><td>41.3</td><td>42.6</td><td>20.6</td><td>35.0</td><td>44.1</td><td>48.6</td><td>22.9</td><td>40.0</td><td>48.5</td></tr>
<tr><td>Cambrian-8B</td><td>72.4</td><td>28.1</td><td>72.7</td><td>48.8</td><td>54.5</td><td>31.4</td><td>33.0</td><td>41.9</td><td>57.1</td><td>17.2</td><td>42.3</td><td>64.9</td></tr>
<tr><td>InternVL2-8B</td><td>72.4</td><td>50.0</td><td>77.9</td><td>42.5</td><td>48.5</td><td>20.6</td><td>35.9</td><td>38.7</td><td>50.0</td><td>23.6</td><td>43.1</td><td>65.5</td></tr>
<tr><td>Eagle-X4-8B</td><td>71.1</td><td>47.9</td><td>68.8</td><td>45.0</td><td>50.5</td><td>26.5</td><td>37.9</td><td>40.9</td><td>48.6</td><td>27.4</td><td>44.1</td><td>63.7</td></tr>
<tr><td>LLAVA-OV-7B</td><td>71.1</td><td>53.1</td><td>81.8</td><td>52.5</td><td>53.5</td><td>19.6</td><td>26.2</td><td>48.4</td><td>48.6</td><td>23.6</td><td>44.7</td><td>74.0</td></tr>
<tr><td>InternVL2.5-8B</td><td>77.6</td><td>47.9</td><td>83.1</td><td>50.0</td><td>62.4</td><td>25.5</td><td>33.0</td><td>34.4</td><td>52.9</td><td>19.8</td><td>45.2</td><td>69.8</td></tr>
<tr><td>Qwen2.5-VL-7B</td><td>76.3</td><td>49.0</td><td>84.4</td><td>47.5</td><td>52.5</td><td>19.6</td><td>34.0</td><td>44.1</td><td>55.7</td><td>28.7</td><td>46.2</td><td>74.4</td></tr>
<tr><td colspan=\"13\">VLMs: 10B - 30B</td></tr>
<tr><td>LLaVA-Next-13B</td><td>56.6</td><td>31.3</td><td>71.4</td><td>27.5</td><td>41.6</td><td>27.5</td><td>28.2</td><td>29.0</td><td>45.7</td><td>25.5</td><td>36.4</td><td>53.3</td></tr>
<tr><td>Cambrian-13B</td><td>67.1</td><td>34.4</td><td>74.0</td><td>46.3</td><td>47.5</td><td>32.4</td><td>35.0</td><td>38.7</td><td>55.7</td><td>24.8</td><td>42.8</td><td>64.7</td></tr>
<tr><td>Eagle-X4-13B</td><td>73.7</td><td>43.8</td><td>76.6</td><td>43.8</td><td>47.5</td><td>23.5</td><td>38.8</td><td>34.4</td><td>57.1</td><td>26.1</td><td>43.7</td><td>66.3</td></tr>
<tr><td>InternVL2-26B</td><td>72.4</td><td>52.1</td><td>87.0</td><td>52.5</td><td>56.4</td><td>20.6</td><td>35.0</td><td>34.4</td><td>55.7</td><td>27.4</td><td>46.3</td><td>74.0</td></tr>
<tr><td>InternVL2.5-26B</td><td>72.4</td><td>45.8</td><td>89.6</td><td>45.0</td><td>63.4</td><td>22.6</td><td>35.0</td><td>32.3</td><td>62.9</td><td>29.3</td><td>46.8</td><td>83.0</td></tr>
<tr><td colspan=\"13\">VLMs: 30B - 70B</td></tr>
<tr><td>Eagle-X5-34B</td><td>79.0</td><td>27.1</td><td>80.5</td><td>48.8</td><td>48.5</td><td>23.5</td><td>35.9</td><td>37.6</td><td>60.0</td><td>25.5</td><td>43.4</td><td>67.1</td></tr>
<tr><td>Cambrian-34b</td><td>75.0</td><td>57.3</td><td>77.9</td><td>50.0</td><td>46.5</td><td>22.6</td><td>32.0</td><td>37.6</td><td>64.3</td><td>24.2</td><td>45.3</td><td>67.7</td></tr>
<tr><td>InternVL2-40B</td><td>72.4</td><td>52.1</td><td>83.1</td><td>51.3</td><td>61.4</td><td>19.6</td><td>35.9</td><td>34.4</td><td>58.6</td><td>21.0</td><td>45.6</td><td>78.7</td></tr>
<tr><td>LLAVA-Next-34b</td><td>69.7</td><td>46.9</td><td>76.6</td><td>43.8</td><td>56.4</td><td>28.4</td><td>41.8</td><td>36.6</td><td>61.4</td><td>29.9</td><td>46.6</td><td>65.9</td></tr>
<tr><td>InternVL2.5-38B</td><td>71.1</td><td>60.4</td><td>89.6</td><td>53.8</td><td>63.4</td><td>29.4</td><td>40.8</td><td>34.4</td><td>61.4</td><td>26.8</td><td>50.0</td><td>84.6</td></tr>
<tr><td colspan=\"13\">VLMs: &gt; 70B</td></tr>
<tr><td>InternVL2-76B</td><td>72.4</td><td>42.7</td><td>85.7</td><td>45.0</td><td>62.4</td><td>27.5</td><td>35.0</td><td>31.2</td><td>50.0</td><td>23.6</td><td>44.6</td><td>68.6</td></tr>
<tr><td>LLAVA-Next-72B</td><td>72.4</td><td>54.2</td><td>79.2</td><td>41.3</td><td>49.5</td><td>24.5</td><td>35.9</td><td>33.3</td><td>48.6</td><td>34.4</td><td>45.2</td><td>66.5</td></tr>
<tr><td>InternVL2.5-78B</td><td>75.0</td><td>58.3</td><td>81.8</td><td>43.8</td><td>68.3</td><td>27.5</td><td>36.9</td><td>34.4</td><td>61.4</td><td>28.7</td><td>48.8</td><td>86.2</td></tr>
<tr><td>LLAVA-OV-72B</td><td>73.7</td><td>63.5</td><td>83.1</td><td>52.5</td><td>69.3</td><td>27.5</td><td>50.5</td><td>36.6</td><td>55.7</td><td>31.9</td><td>51.9</td><td>80.3</td></tr>
<tr><td colspan=\"13\">VLMs: Proprietary</td></tr>
<tr><td>GPT-4o</td><td>76.3</td><td>40.6</td><td>80.5</td><td>38.3</td><td>66.3</td><td>30.4</td><td>29.1</td><td>50.5</td><td>70.0</td><td>58.6</td><td>52.9</td><td>46.2</td></tr>
<tr><td>Gemini-2-flash</td><td>80.3</td><td>52.1</td><td>87.0</td><td>46.9</td><td>70.3</td><td>33.3</td><td>34.9</td><td>44.1</td><td>72.9</td><td>49.6</td><td>55.4</td><td>70.7</td></tr>
<tr><td>GPT-4o (CoT)</td><td>77.6</td><td>55.2</td><td>83.1</td><td>44.4</td><td>71.3</td><td>26.5</td><td>33.0</td><td>44.1</td><td>77.1</td><td>66.8</td><td>57.4</td><td>69.9</td></tr>
<tr><td>Gemini-2-flash (CoT)</td><td>82.9</td><td>56.2</td><td>88.3</td><td>58.0</td><td>68.3</td><td>43.1</td><td>38.8</td><td>40.9</td><td>75.7</td><td>60.0</td><td>59.6</td><td>73.6</td></tr>
<tr><td colspan=\"13\">Human Evaluation</td></tr>
<tr><td>Human Evaluation</td><td>92.0</td><td>-</td><td>90.1</td><td>59.6</td><td>79.8</td><td>62.0</td><td>81.3</td><td>63.0</td><td>83.8</td><td>94.0</td><td>-</td><td>-</td></tr>
</table>
", + "image_path": "d9a74d2d06d6bc02d62e50fbf3d1af7d17dac77d6d94345ca9038a8beb3a14fc.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 465, + 504, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 465, + 504, + 533 + ], + "spans": [ + { + "bbox": [ + 104, + 465, + 504, + 533 + ], + "type": "text", + "content": "highest performance. In Color Extraction (C'Extra), to our surprise, the two powerful proprietary models without CoT prompting only reach middle-tier performance, indicating a potential limitation in the color perception of their vision encoders. Similar to the Color Existence task, almost all the models perform well in Object Recognition (O'Recog), and the 2 proprietary models do not reach the top. This is probably due to the strong alignment between this task and the common training recipe, which includes abundant general object detection images." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 536, + 506, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 536, + 506, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 536, + 506, + 723 + ], + "type": "text", + "content": "Color Reasoning. In Color Proportion (C'Prop), even the best model, Gemini-2 with CoT, can only reach " + }, + { + "bbox": [ + 104, + 536, + 506, + 723 + ], + "type": "inline_equation", + "content": "58.0\\%" + }, + { + "bbox": [ + 104, + 536, + 506, + 723 + ], + "type": "text", + "content": " accuracy, which is only slightly better than random guessing, underscoring the difficulty of this task. In Color Comparison (C'Comp), larger models perform better, and, unsurprisingly, the proprietary models with CoT reach the top performance. Surprisingly, in Color Counting (C'Count), all models perform extremely poorly. 
The highest performance comes from Gemini-2 with CoT, exceeding the second place by 10 percent, although its performance is also unsatisfactory at only " + }, + { + "bbox": [ + 104, + 536, + 506, + 723 + ], + "type": "inline_equation", + "content": "43.1\\%" + }, + { + "bbox": [ + 104, + 536, + 506, + 723 + ], + "type": "text", + "content": ". In Object Counting (O'Count), surpassing the 2 proprietary models, LLaVA-OV-72B reaches the top and becomes the only model that exceeds " + }, + { + "bbox": [ + 104, + 536, + 506, + 723 + ], + "type": "inline_equation", + "content": "50\\%" + }, + { + "bbox": [ + 104, + 536, + 506, + 723 + ], + "type": "text", + "content": " of the accuracy. Similar to the findings from the Object Recognition task, this might be caused by the abundance of object detection data in open-source training recipes. In Color Illusion (C'Illu), the accuracies of most models lie in the range of " + }, + { + "bbox": [ + 104, + 536, + 506, + 723 + ], + "type": "inline_equation", + "content": "30\\%" + }, + { + "bbox": [ + 104, + 536, + 506, + 723 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 104, + 536, + 506, + 723 + ], + "type": "inline_equation", + "content": "50\\%" + }, + { + "bbox": [ + 104, + 536, + 506, + 723 + ], + "type": "text", + "content": ", and GPT-4o without CoT is the only one that exceeds " + }, + { + "bbox": [ + 104, + 536, + 506, + 723 + ], + "type": "inline_equation", + "content": "50\\%" + }, + { + "bbox": [ + 104, + 536, + 506, + 723 + ], + "type": "text", + "content": " of the accuracy. In Color Mimicry (C'Mimic), the 2 proprietary models reach the top, while more reasoning steps do not bring much benefit. In Color Blindness (C'Blind), most of the open-sourced models present accuracies under " + }, + { + "bbox": [ + 104, + 536, + 506, + 723 + ], + "type": "inline_equation", + "content": "30\\%" + }, + { + "bbox": [ + 104, + 536, + 506, + 723 + ], + "type": "text", + "content": ". 
Considering the practical importance of this scenario, we think the current community should pay more attention to it. Moreover, we also observe that, surprisingly, more reasoning steps benefit VLMs in the color blindness test, although it seems like a pure color perception task." + } + ] + } + ], + "index": 3 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 4 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 107, + 121, + 504, + 166 + ], + "blocks": [ + { + "bbox": [ + 104, + 70, + 504, + 114 + ], + "lines": [ + { + "bbox": [ + 104, + 70, + 504, + 114 + ], + "spans": [ + { + "bbox": [ + 104, + 70, + 504, + 114 + ], + "type": "text", + "content": "Table 2: Spearman's rank correlation between VLM performance and the sizes of different model parts on each task. L denotes the language model part's size and V represents the vision encoder part's size. We use “(*)” to mark correlations with p-values " + }, + { + "bbox": [ + 104, + 70, + 504, + 114 + ], + "type": "inline_equation", + "content": "\\leq 0.05" + }, + { + "bbox": [ + 104, + 70, + 504, + 114 + ], + "type": "text", + "content": ". It shows that the scaling law still holds for color understanding but is much weaker." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 107, + 121, + 504, + 166 + ], + "lines": [ + { + "bbox": [ + 107, + 121, + 504, + 166 + ], + "spans": [ + { + "bbox": [ + 107, + 121, + 504, + 166 + ], + "type": "table", + "html": "
Color PerceptionColor ReasoningP & RColor Robustness
C'RecogC'ExtractO'RecogC'PropC'CompC'CountO'CountC'IlluC'MimicC'BlindOverallC'Robust
L+V0.5657 (*)0.5255 (*)0.7107 (*)0.5125 (*)0.6358 (*)0.4316 (*)0.7566 (*)-0.34600.4832 (*)0.24600.7619 (*)0.7386 (*)
L0.5724 (*)0.4937 (*)0.6769 (*)0.4696 (*)0.6118 (*)0.4408 (*)0.7611 (*)-0.3697 (*)0.4559 (*)0.28240.7436 (*)0.7123 (*)
V0.3955 (*)0.28560.5465 (*)0.6242 (*)0.5295 (*)0.20890.3608-0.01270.6024 (*)-0.06790.5271 (*)0.5623 (*)
", + "image_path": "dd58b55e29f30c324245c853130868c2b7d326483e1f84d0e7d4d40a90702f97.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 178, + 506, + 268 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 178, + 506, + 268 + ], + "spans": [ + { + "bbox": [ + 104, + 178, + 506, + 268 + ], + "type": "text", + "content": "Color Robustness. In Color Robustness (C'Robust), a higher value represents better robustness towards color alteration. The only 4 models that exceed " + }, + { + "bbox": [ + 104, + 178, + 506, + 268 + ], + "type": "inline_equation", + "content": "80\\%" + }, + { + "bbox": [ + 104, + 178, + 506, + 268 + ], + "type": "text", + "content": " are LLaVA-OV-72B, InternVL2.5-26B, InternVL2.5-38B, and InternVL2.5-78B, which utilize relatively larger vision encoders, InternViT-6B, compared with others (mostly only 300-400M). In the meantime, GPT-4o has a really low robustness " + }, + { + "bbox": [ + 104, + 178, + 506, + 268 + ], + "type": "inline_equation", + "content": "(46.2\\%)" + }, + { + "bbox": [ + 104, + 178, + 506, + 268 + ], + "type": "text", + "content": " to colors, indicating its vulnerable sensitivity to color changes, while Gemini-2 shows promising robustness " + }, + { + "bbox": [ + 104, + 178, + 506, + 268 + ], + "type": "inline_equation", + "content": "(70.7\\%)" + }, + { + "bbox": [ + 104, + 178, + 506, + 268 + ], + "type": "text", + "content": " towards colors. 
Moreover, another surprising observation is that even though only the colors are changed and all the original queries are kept, utilizing more reasoning steps can consistently improve robustness for GPT-4o " + }, + { + "bbox": [ + 104, + 178, + 506, + 268 + ], + "type": "inline_equation", + "content": "(+23.7\\%)" + }, + { + "bbox": [ + 104, + 178, + 506, + 268 + ], + "type": "text", + "content": " and Gemini-2 " + }, + { + "bbox": [ + 104, + 178, + 506, + 268 + ], + "type": "inline_equation", + "content": "(+2.9\\%)" + }, + { + "bbox": [ + 104, + 178, + 506, + 268 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 282, + 204, + 295 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 282, + 204, + 295 + ], + "spans": [ + { + "bbox": [ + 105, + 282, + 204, + 295 + ], + "type": "text", + "content": "3.2 Further Findings" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 356, + 297, + 531 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 356, + 297, + 531 + ], + "spans": [ + { + "bbox": [ + 104, + 356, + 297, + 531 + ], + "type": "text", + "content": "Since color-related tasks often involve abstract reasoning, language comprehension, and contextual interpretation, it is essential to assess not just the vision encoder but also part of the language model, which plays a critical role in processing and understanding such tasks. 
To quantitatively analyze the correlation between VLM performances on color understanding tasks and their sizes, Spearman's rank correlation is calculated between VLM performances and (i) overall model sizes " + }, + { + "bbox": [ + 104, + 356, + 297, + 531 + ], + "type": "inline_equation", + "content": "(\\mathbf{L} + \\mathbf{V})" + }, + { + "bbox": [ + 104, + 356, + 297, + 531 + ], + "type": "text", + "content": ", (ii) language model sizes " + }, + { + "bbox": [ + 104, + 356, + 297, + 531 + ], + "type": "inline_equation", + "content": "(\\mathbf{L})" + }, + { + "bbox": [ + 104, + 356, + 297, + 531 + ], + "type": "text", + "content": ", and (iii) vision encoder sizes " + }, + { + "bbox": [ + 104, + 356, + 297, + 531 + ], + "type": "inline_equation", + "content": "(\\mathbf{V})" + }, + { + "bbox": [ + 104, + 356, + 297, + 531 + ], + "type": "text", + "content": ". The correlation values and p-signs are presented in Table 2; a star is notated when the p-value of the correlation is lower than 0.05. It is observed that between the performances and language model" + } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 304, + 355, + 504, + 432 + ], + "blocks": [ + { + "bbox": [ + 110, + 307, + 500, + 342 + ], + "lines": [ + { + "bbox": [ + 110, + 307, + 500, + 342 + ], + "spans": [ + { + "bbox": [ + 110, + 307, + 500, + 342 + ], + "type": "text", + "content": "Finding 1. The scaling law still holds for color understanding, but is much weaker and mainly depends on the language model parts. The correlation between the performance and the vision encoder's size is not significant due to the limited choices in current VLMs." 
+ } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 304, + 355, + 504, + 432 + ], + "lines": [ + { + "bbox": [ + 304, + 355, + 504, + 432 + ], + "spans": [ + { + "bbox": [ + 304, + 355, + 504, + 432 + ], + "type": "image", + "image_path": "9807b184126a48713b499dc098fc184ac4cce4081905a0b8ba74c79974403805.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 438, + 506, + 526 + ], + "lines": [ + { + "bbox": [ + 302, + 438, + 506, + 526 + ], + "spans": [ + { + "bbox": [ + 302, + 438, + 506, + 526 + ], + "type": "text", + "content": "Figure 4: The heatmaps related to performances and VLM sizes. Deeper color represents higher performance of P&R Overall Accuracy or Robustness. Each line represents a model family with the sizes growing from small to large. This visualization clearly shows the correlation between performances and model sizes, larger model leads to higher performance." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 531, + 506, + 609 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 531, + 506, + 609 + ], + "spans": [ + { + "bbox": [ + 104, + 531, + 506, + 609 + ], + "type": "text", + "content": "sizes, most of the tasks have a correlation greater than 0.5 and a p-value smaller than 0.05, except for Color Illusion and Color Blindness due to their special characteristics. Since the correlation between overall model sizes " + }, + { + "bbox": [ + 104, + 531, + 506, + 609 + ], + "type": "inline_equation", + "content": "(\\mathbf{L} + \\mathbf{V})" + }, + { + "bbox": [ + 104, + 531, + 506, + 609 + ], + "type": "text", + "content": " and P&R Overall (0.7619), and Robustness (0.7390), we conclude that the color understanding, including Color Perception, Color Reasoning, and Color Robustness, still follows the scaling law of model sizes. 
Figure 4 presents the correlations between performance and model size within each model family. The visualization clearly shows that, within each family, a larger model leads to higher performance." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 612, + 506, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 612, + 506, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 612, + 506, + 723 + ], + "type": "text", + "content": "However, between the performances and vision encoder sizes, most of the tasks either have a correlation lower than 0.5 or a p-value greater than 0.05, which is not sufficient to establish a clear positive correlation. Despite these findings, we avoid concluding that there is no positive correlation between performance and vision encoder size. We attribute this instead to the community's neglect of the scaling laws of vision encoders. The vision encoders used in current mainstream VLMs are constrained to a very small set: (i) most VLMs use only one type of vision encoder for the whole family, except for the InternVL2 and InternVL2.5 series; (ii) most VLMs use a vision encoder with a size of " + }, + { + "bbox": [ + 104, + 612, + 506, + 723 + ], + "type": "inline_equation", + "content": "300 - 400\\mathrm{M}" + }, + { + "bbox": [ + 104, + 612, + 506, + 723 + ], + "type": "text", + "content": ". These constraints make it hard to evaluate the scaling laws of vision encoders. Further visualizations are presented in Appendix L.2."
+ } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 107, + 110, + 504, + 158 + ], + "blocks": [ + { + "bbox": [ + 104, + 70, + 504, + 105 + ], + "lines": [ + { + "bbox": [ + 104, + 70, + 504, + 105 + ], + "spans": [ + { + "bbox": [ + 104, + 70, + 504, + 105 + ], + "type": "text", + "content": "Table 4: Adding reasoning steps can improve VLMs' performance on COLORBENCH. The change of accuracy brought by Chain of Thought (CoT) prompting on all tasks for GPT-4o and Gemini-2-flash. The last row presents the average improvement across both models." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 107, + 110, + 504, + 158 + ], + "lines": [ + { + "bbox": [ + 107, + 110, + 504, + 158 + ], + "spans": [ + { + "bbox": [ + 107, + 110, + 504, + 158 + ], + "type": "table", + "html": "
Color PerceptionColor ReasoningP & RColor Robustness
C'RecogC'ExtractO'RecogC'PropC'CompC'CountO'CountC'IlluC'MimicC'BlindOverallC'Robust
GPT-4o Δ+1.3+14.6+2.6+6.1+5.0-3.9+3.9-6.4+7.1+8.2+4.5+23.7
Gemini-2 Δ+2.6+4.1+1.3+11.1-2.0+9.8+3.9-3.2+2.8+10.4+4.2+2.9
Average Δ+1.95+9.35+1.95+8.60+1.50+2.95+3.90-4.80+4.95+9.30+4.35+13.30
", + "image_path": "2fb6cc2e270b95a95b8c9a9c926d3138f9663a31c52a053bf3bcde3d8f8a1c81.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 220, + 297, + 384 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 220, + 297, + 384 + ], + "spans": [ + { + "bbox": [ + 104, + 220, + 297, + 384 + ], + "type": "text", + "content": "As shown in Table 3, we separate all the VLMs into several groups based on their sizes and present the best accuracy and the model name within each group. We can see that even the powerful proprietary models, GPT-4o and Gemini-2, can only reach an overall color perception and reasoning (P & R Overall) accuracy of " + }, + { + "bbox": [ + 104, + 220, + 297, + 384 + ], + "type": "inline_equation", + "content": "53.9\\%" + }, + { + "bbox": [ + 104, + 220, + 297, + 384 + ], + "type": "text", + "content": ", only " + }, + { + "bbox": [ + 104, + 220, + 297, + 384 + ], + "type": "inline_equation", + "content": "+2.0\\%" + }, + { + "bbox": [ + 104, + 220, + 297, + 384 + ], + "type": "text", + "content": " better than the best open-sourced model. Task-level results in Table 1 further reveal that these advanced proprietary models still exhibit substantial performance gaps compared to humans across most tasks. 
Moreover, the best model from group 1 has the accuracy of " + }, + { + "bbox": [ + 104, + 220, + 297, + 384 + ], + "type": "inline_equation", + "content": "41.5\\%" + }, + { + "bbox": [ + 104, + 220, + 297, + 384 + ], + "type": "text", + "content": " (Cambrian-3B), which is only " + }, + { + "bbox": [ + 104, + 220, + 297, + 384 + ], + "type": "inline_equation", + "content": "10.4\\%" + }, + { + "bbox": [ + 104, + 220, + 297, + 384 + ], + "type": "text", + "content": " lower than the best open-sourced" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 302, + 220, + 504, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 220, + 504, + 277 + ], + "spans": [ + { + "bbox": [ + 302, + 220, + 504, + 277 + ], + "type": "text", + "content": "Table 3: The best model within each group and its performances (on P&R accuracy and Robustness). The absolute performances of different VLMs on COLORBENCH are relatively low, and the performance gaps between models are not large." + } + ] + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 304, + 281, + 504, + 369 + ], + "blocks": [ + { + "bbox": [ + 110, + 167, + 501, + 212 + ], + "lines": [ + { + "bbox": [ + 110, + 167, + 501, + 212 + ], + "spans": [ + { + "bbox": [ + 110, + 167, + 501, + 212 + ], + "type": "text", + "content": "Finding 2. The absolute performances of different VLMs are relatively low and lag behind those of humans. Moreover, the gaps between different models (open-source vs. proprietary, small vs. large) are not large, indicating the challenges of COLORBENCH and the negligence of color understanding in existing VLMs." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 304, + 281, + 504, + 369 + ], + "lines": [ + { + "bbox": [ + 304, + 281, + 504, + 369 + ], + "spans": [ + { + "bbox": [ + 304, + 281, + 504, + 369 + ], + "type": "table", + "html": "
Color P & R OverallColor Robustness
Model SizeModelBestModelBest
<7BCambrian-3B41.5Qwen2.5-VL-3B63.7
7B-8BQwen2.5-VL-7B46.2Qwen2.5-VL-7B74.4
10B-30BInternVL2.5-26B46.8InternVL2.5-26B83.0
30B-50BInternVL2.5-38B50.0InternVL2.5-38B84.6
>70BLLaVA-OV-72B51.9InternVL2.5-78B86.2
ProprietaryGemini-255.4Gemini-270.7
ProprietaryGemini-2 (CoT)59.6Gemini-2 (CoT)73.6
", + "image_path": "2022338c089cc9168d1bd7a010104472b5f57dfa0b5f37a9ac9f001bc1edc912.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 384, + 504, + 429 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 384, + 504, + 429 + ], + "spans": [ + { + "bbox": [ + 104, + 384, + 504, + 429 + ], + "type": "text", + "content": "model. As for the robustness, the powerful proprietary models even show weaker robustness than the 7B model. Considering the lack of existing benchmarks specifically evaluating VLMs' color understanding capabilities, we conclude that this area is long-neglected by the community, and the open-sourced community is still on the same page with the proprietary model providers." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 110, + 440, + 501, + 474 + ], + "lines": [ + { + "bbox": [ + 110, + 440, + 501, + 474 + ], + "spans": [ + { + "bbox": [ + 110, + 440, + 501, + 474 + ], + "type": "text", + "content": "Finding 3. Despite the weaknesses of VLMs on color understanding, adding reasoning steps can still improve their performance on COLORBENCH tasks, even for color robustness, which has not been investigated by the community." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 104, + 480, + 506, + 591 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 480, + 506, + 591 + ], + "spans": [ + { + "bbox": [ + 104, + 480, + 506, + 591 + ], + "type": "text", + "content": "The impact of using CoT prompting is shown in Table 4, in which we can see CoT improves the average P&R Overall accuracy across both models by " + }, + { + "bbox": [ + 104, + 480, + 506, + 591 + ], + "type": "inline_equation", + "content": "+4.35\\%" + }, + { + "bbox": [ + 104, + 480, + 506, + 591 + ], + "type": "text", + "content": ", indicating that reasoning benefits these color-related tasks. 
Within the category of Color Perception, the improvements from CoT on Color Recognition and Object Recognition are quite limited as these tasks heavily rely on the vision encoder. Figures 59 and 60 in Appendix M illustrate that adding reasoning steps does not help because the initial visual perception and color identification in the slow-thinking process are already incorrect. However, to our surprise, we find that the Color Extraction task benefits substantially from more reasoning steps, although it seems only related to the vision encoder. After a thorough investigation, we observe that most of the current VLMs are not capable of directly extracting color values, so they need to use more reasoning steps to reach reasonable answers." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 596, + 506, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 596, + 506, + 696 + ], + "spans": [ + { + "bbox": [ + 104, + 596, + 506, + 696 + ], + "type": "text", + "content": "Within the category of Color Reasoning, CoT benefits most of the tasks. However, in the Color Illusion task, CoT harms the model performance. After a manual investigation, we observe that more reasoning steps might cause VLMs to focus more on the misleading environments rather than directly compare the assigned colors, as shown in Figure 61. Another observation occurs in the Color Blindness task. Unlike other reasoning-related tasks, humans can read a color blindness test image with a simple glimpse without any slow thinking. This fascinating misalignment between humans and VLMs prompts us to investigate further. We find that VLMs recognize these digits in a bottom-up pattern: they need to first infer that the dots in the image can form a digit before they actually recognize these dots as digits."
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 700, + 506, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 700, + 506, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 700, + 506, + 723 + ], + "type": "text", + "content": "In addition, the consistent improvement of CoT on Color Robustness is also an unrevealed phenomenon. In our setting, only the colors of the image are altered, and the questions are strictly the" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "spans": [ + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "type": "text", + "content": "same as the original. Thus, under this circumstance, color is the only variant, which is supposed to be more related to the capability of the vision encoder. However, counterintuitively, as shown in our experiments, more reasoning steps make the VLMs more robust to the color changes, which is probably caused by the higher confidence of correct answers after reasoning." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 170, + 298, + 410 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 170, + 298, + 410 + ], + "spans": [ + { + "bbox": [ + 104, + 170, + 298, + 410 + ], + "type": "text", + "content": "In order to examine whether VLMs really leverage color clues to handle tasks in COLORBENCH, experiments are conducted by converting all the original colorful images in the Color Perception and Reasoning categories into gray-scale ones, without changing the questions. Under this circumstance, the accuracies are expected to decrease dramatically as all our questions are related to colors. For quantitative analysis, we calculate the accuracy changing ratio as " + }, + { + "bbox": [ + 104, + 170, + 298, + 410 + ], + "type": "inline_equation", + "content": "(Acc_{ori} - Acc_{gray}) / Acc_{ori}" + }, + { + "bbox": [ + 104, + 170, + 298, + 410 + ], + "type": "text", + "content": " for each VLM on each task. This value directly represents how the original accuracy changes with a gray-scale transformation. The positive value represents that the VLM has a higher accuracy on the original colored images, indicating that it needs color clues to solve the task. Higher positive values represent higher significance of the color clues. On the contrary, if the value is negative, it means that the VLM can reach a better accuracy after the gray-scale transformation, indicating that it does not need" + } + ] + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 304, + 171, + 504, + 285 + ], + "blocks": [ + { + "bbox": [ + 110, + 129, + 501, + 163 + ], + "lines": [ + { + "bbox": [ + 110, + 129, + 501, + 163 + ], + "spans": [ + { + "bbox": [ + 110, + 129, + 501, + 163 + ], + "type": "text", + "content": "Finding 4. Color clues are indeed leveraged more or less by VLMs in most of the tasks in COLORBENCH. 
However, in color illusion and mimicry tasks, colors might mislead VLMs to wrong answers, and converting colorful images to grayscale can improve the accuracy." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 304, + 171, + 504, + 285 + ], + "lines": [ + { + "bbox": [ + 304, + 171, + 504, + 285 + ], + "spans": [ + { + "bbox": [ + 304, + 171, + 504, + 285 + ], + "type": "image", + "image_path": "7e9abdefdba11426ba75da60ea1aa91fa1fb21de3146efef9bebcea1409ccc4f.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 286, + 506, + 407 + ], + "lines": [ + { + "bbox": [ + 302, + 286, + 506, + 407 + ], + "spans": [ + { + "bbox": [ + 302, + 286, + 506, + 407 + ], + "type": "text", + "content": "Figure 5: The percentage of change in accuracy (y-axis) by converting colorful images to grayscale in each COLORBENCH task (x-axis). Each violin plot visualizes the distribution over all VLMs. Higher (lower) percentage indicates that VLMs rely more (less) on color clues for the task. Positive (negative) percentage indicates degradation (improvement) on grayscale images. Color clues are indeed more or less leveraged by VLMs in most tasks but they might mislead VLMs (illusion & mimicry)." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 410, + 504, + 433 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 410, + 504, + 433 + ], + "spans": [ + { + "bbox": [ + 104, + 410, + 504, + 433 + ], + "type": "text", + "content": "color clues for the task, and colors might even mislead VLM's judgment. Lower negative values represent the severe harm the color can have on the task." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 437, + 506, + 558 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 437, + 506, + 558 + ], + "spans": [ + { + "bbox": [ + 104, + 437, + 506, + 558 + ], + "type": "text", + "content": "The accuracy changing ratio distributions across all VLMs and tasks are presented in Figure 5 as the violin plot. As shown in the figure, for most of the tasks, the ratios of VLMs are above 0, indicating that VLMs indeed leverage color clues to correctly solve the tasks; removing the color directly harms the original accuracies dramatically. However, when it comes to Color Illusion and Color Mimicry, the majority of the changing ratios are below 0, which means that VLMs can get better accuracies when all the color information is removed. This phenomenon is reasonable as the colors on both of these two tasks are more likely serving as the misleading factors. In the meantime, for the Color Counting and Color Blindness tasks, almost half the accuracies increase and half decrease, indicating that the color clues might not be so significant in this task, thus, some of the models can find other ways to get the answer. We also investigate the correlation between accuracy changing ratios and model sizes, while no significant correlation can be concluded." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 573, + 345, + 586 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 573, + 345, + 586 + ], + "spans": [ + { + "bbox": [ + 104, + 573, + 345, + 586 + ], + "type": "text", + "content": "4 Conclusion, Limitation, and Future Works" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 597, + 507, + 719 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 597, + 507, + 719 + ], + "spans": [ + { + "bbox": [ + 104, + 597, + 507, + 719 + ], + "type": "text", + "content": "In this paper, we introduce COLORBENCH, the first benchmark designed to comprehensively evaluate the color understanding capabilities of VLMs, including Perception, Reasoning, and Robustness. After evaluating 32 widely used VLMs on our benchmark, several undiscovered observations are revealed by us. These observations emphasize the need for more sophisticated model architectures that integrate deeper color reasoning capabilities. To ensure high-quality and reliable annotations, COLORBENCH relies on manual data collection, annotation, and assessment across most domains. While this guarantees consistency, it inevitably limits dataset scale, style diversity, and category coverage. As future work, we aim to develop a trustworthy automated data collection pipeline and expand COLORBENCH to larger-scale, more diverse tasks involving complex interplays of color with texture, shape, and spatial relationships. Furthermore, investigating the impact of different visual encoders and language models could further elucidate the pathways through which VLMs process color information." 
+ } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 106, + 71, + 165, + 84 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 71, + 165, + 84 + ], + "spans": [ + { + "bbox": [ + 106, + 71, + 165, + 84 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 106, + 89, + 505, + 722 + ], + "type": "list", + "angle": 0, + "index": 19, + "blocks": [ + { + "bbox": [ + 111, + 89, + 505, + 120 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 89, + 505, + 120 + ], + "spans": [ + { + "bbox": [ + 111, + 89, + 505, + 120 + ], + "type": "text", + "content": "[1] Basit Alawode, Iyyakutti Iyappan Ganapathi, Sajid Javed, Naoufel Werghi, Mohammed Bennamoun, and Arif Mahmood. Aquaticclip: A vision-language foundation model for underwater scene analysis. arXiv preprint arXiv:2502.01785, 2025." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 111, + 127, + 505, + 168 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 127, + 505, + 168 + ], + "spans": [ + { + "bbox": [ + 111, + 127, + 505, + 168 + ], + "type": "text", + "content": "[2] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-vl technical report, 2025." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 111, + 175, + 504, + 197 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 175, + 504, + 197 + ], + "spans": [ + { + "bbox": [ + 111, + 175, + 504, + 197 + ], + "type": "text", + "content": "[3] Jirayu Burapacheep, Ishan Gaur, Agam Bhatia, and Tristan Thrush. Colorswap: A color and word order dataset for multimodal evaluation. arXiv preprint arXiv:2402.04492, 2024." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 111, + 204, + 505, + 235 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 204, + 505, + 235 + ], + "spans": [ + { + "bbox": [ + 111, + 204, + 505, + 235 + ], + "type": "text", + "content": "[4] Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Jiaqi Wang, Yu Qiao, Dahua Lin, et al. Are we on the right way for evaluating large vision-language models? arXiv preprint arXiv:2403.20330, 2024." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 111, + 243, + 505, + 283 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 243, + 505, + 283 + ], + "spans": [ + { + "bbox": [ + 111, + 243, + 505, + 283 + ], + "type": "text", + "content": "[5] Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, et al. Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24185–24198, 2024." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 111, + 291, + 504, + 312 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 291, + 504, + 312 + ], + "spans": [ + { + "bbox": [ + 111, + 291, + 504, + 312 + ], + "type": "text", + "content": "[6] Kanjar De and Marius Pedersen. Impact of colour on robustness of deep neural networks. 
In Proceedings of the IEEE/CVF international conference on computer vision, pages 21-30, 2021." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 111, + 319, + 284, + 331 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 319, + 284, + 331 + ], + "spans": [ + { + "bbox": [ + 111, + 319, + 284, + 331 + ], + "type": "text", + "content": "[7] Google DeepMind. Gemini 2.0 flash, 2025." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 111, + 338, + 505, + 358 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 338, + 505, + 358 + ], + "spans": [ + { + "bbox": [ + 111, + 338, + 505, + 358 + ], + "type": "text", + "content": "[8] Alexey Dosovitskiy and Thomas Brox. Inverting visual representations with convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4829-4837, 2016." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 111, + 365, + 505, + 407 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 365, + 505, + 407 + ], + "spans": [ + { + "bbox": [ + 111, + 365, + 505, + 407 + ], + "type": "text", + "content": "[9] Hao Fei, Yuan Yao, Zhuosheng Zhang, Fuxiao Liu, Ao Zhang, and Tat-Seng Chua. From multimodal llm to human-level ai: Modality, instruction, reasoning, efficiency and beyond. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024): Tutorial Summaries, pages 1-8, 2024." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 106, + 414, + 505, + 445 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 414, + 505, + 445 + ], + "spans": [ + { + "bbox": [ + 106, + 414, + 505, + 445 + ], + "type": "text", + "content": "[10] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, and Rongrong Ji. 
Mme: A comprehensive evaluation benchmark for multimodal large language models, 2024." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 106, + 453, + 504, + 473 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 453, + 504, + 473 + ], + "spans": [ + { + "bbox": [ + 106, + 453, + 504, + 473 + ], + "type": "text", + "content": "[11] Karl R. Gegenfurtner and Jochem Rieger. Sensory and cognitive contributions of color to the recognition of natural scenes. Current Biology, 10(13):805-808, 2000." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 106, + 481, + 504, + 511 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 481, + 504, + 511 + ], + "spans": [ + { + "bbox": [ + 106, + 481, + 504, + 511 + ], + "type": "text", + "content": "[12] Akash Ghosh, Arkadeep Acharya, Sriparna Saha, Vinija Jain, and Aman Chadha. Exploring the frontier of vision-language models: A survey of current methodologies and future directions. arXiv preprint arXiv:2404.07214, 2024." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 106, + 519, + 505, + 561 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 519, + 505, + 561 + ], + "spans": [ + { + "bbox": [ + 106, + 519, + 505, + 561 + ], + "type": "text", + "content": "[13] Tianrui Guan, Fuxiao Liu, Xiyang Wu, Ruiqi Xian, Zongxia Li, Xiaoyu Liu, Xijun Wang, Lichang Chen, Furong Huang, Yaser Yacoob, et al. Hallusionbench: an advanced diagnostic suite for entangled language hallucination and visual illusion in large vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14375-14385, 2024." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 106, + 567, + 504, + 588 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 567, + 504, + 588 + ], + "spans": [ + { + "bbox": [ + 106, + 567, + 504, + 588 + ], + "type": "text", + "content": "[14] Tanmay Gupta, Ryan Marten, Aniruddha Kembhavi, and Derek Hoiem. Grit: General robust image task benchmark. arXiv preprint arXiv:2204.13653, 2022." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 106, + 596, + 504, + 616 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 596, + 504, + 616 + ], + "spans": [ + { + "bbox": [ + 106, + 596, + 504, + 616 + ], + "type": "text", + "content": "[15] Shuai He, Anlong Ming, Li Yaqi, Sun Jinyuan, Zheng ShunTian, and Ma Huadong. Thinking image color aesthetics assessment: Models, datasets and benchmarks. ICCV, 2023." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 106, + 624, + 505, + 646 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 624, + 505, + 646 + ], + "spans": [ + { + "bbox": [ + 106, + 624, + 505, + 646 + ], + "type": "text", + "content": "[16] Nam Hyeon-Woo, Moon Ye-Bin, Wonseok Choi, Lee Hyun, and Tae-Hyun Oh. Vlm's eye examination: Instruct and inspect visual competency of vision language models. arXiv preprint arXiv:2409.14759, 2024." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 106, + 652, + 505, + 684 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 652, + 505, + 684 + ], + "spans": [ + { + "bbox": [ + 106, + 652, + 505, + 684 + ], + "type": "text", + "content": "[17] Md Farhan Ishmam, Ishmam Tashdeed, Talukder Asir Saadat, Md Hamjajul Ashmafee, Abu Raihan Mostofa Kamal, and Md Azam Hossain. Visual robustness benchmark for visual question answering (vqa). arXiv preprint arXiv:2407.03386, 2024." 
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 106, + 691, + 505, + 722 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 691, + 505, + 722 + ], + "spans": [ + { + "bbox": [ + 106, + 691, + 505, + 722 + ], + "type": "text", + "content": "[18] Ali Jahanian, Shaiyan Keshvari, SVN Vishwanathan, and Jan P Allebach. Colors-messengers of concepts: Visual design mining for learning color semantics. ACM Transactions on Computer-Human Interaction (TOCHI), 24(1):1-39, 2017." + } + ] + } + ], + "index": 18 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 106, + 72, + 505, + 721 + ], + "type": "list", + "angle": 0, + "index": 19, + "blocks": [ + { + "bbox": [ + 106, + 72, + 505, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 72, + 505, + 95 + ], + "spans": [ + { + "bbox": [ + 106, + 72, + 505, + 95 + ], + "type": "text", + "content": "[19] Johannes Jakubik, Benedikt Blumenstiel, and Clive Tinashe Marimo. Ms-clip: Multi-spectral vision language learning for earth observation. In American Geophysical Union Fall Meeting, 2024." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 106, + 102, + 505, + 133 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 102, + 505, + 133 + ], + "spans": [ + { + "bbox": [ + 106, + 102, + 505, + 133 + ], + "type": "text", + "content": "[20] Jayendra Kantipudi, Shiv Ram Dubey, and Soumendu Chakraborty. Color channel perturbation attacks for fooling convolutional neural networks and a defense against such attacks. 
IEEE Transactions on Artificial Intelligence, 1(2):181-191, 2020." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 107, + 140, + 505, + 171 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 140, + 505, + 171 + ], + "spans": [ + { + "bbox": [ + 107, + 140, + 505, + 171 + ], + "type": "text", + "content": "[21] Tony Lee, Haoqin Tu, Chi Heem Wong, Wenhao Zheng, Yiyang Zhou, Yifan Mai, Josselin Somerville Roberts, Michihiro Yasunaga, Huaxiu Yao, Cihang Xie, et al. Vhelm: A holistic evaluation of vision language models. arXiv preprint arXiv:2410.07112, 2024." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 178, + 504, + 201 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 178, + 504, + 201 + ], + "spans": [ + { + "bbox": [ + 107, + 178, + 504, + 201 + ], + "type": "text", + "content": "[22] Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. Seed-bench: Benchmarking multimodal llms with generative comprehension. arXiv preprint arXiv:2307.16125, 2023." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 107, + 208, + 505, + 239 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 208, + 505, + 239 + ], + "spans": [ + { + "bbox": [ + 107, + 208, + 505, + 239 + ], + "type": "text", + "content": "[23] Baiqi Li, Zhiqiu Lin, Wenxuan Peng, Jean de Dieu Nyandwi, Daniel Jiang, Zixian Ma, Simran Khanuja, Ranjay Krishna, Graham Neubig, and Deva Ramanan. Naturalbench: Evaluating vision-language models on natural adversarial samples. arXiv preprint arXiv:2410.14669, 2024." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 107, + 247, + 505, + 268 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 247, + 505, + 268 + ], + "spans": [ + { + "bbox": [ + 107, + 247, + 505, + 268 + ], + "type": "text", + "content": "[24] Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, and Chunyuan Li. Llava-onevision: Easy visual task transfer, 2024." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 107, + 275, + 505, + 306 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 275, + 505, + 306 + ], + "spans": [ + { + "bbox": [ + 107, + 275, + 505, + 306 + ], + "type": "text", + "content": "[25] Jian Li, Weiheng Lu, Hao Fei, Meng Luo, Ming Dai, Min Xia, Yizhang Jin, Zhenye Gan, Ding Qi, Chaoyou Fu, Ying Tai, Wankou Yang, Yabiao Wang, and Chengjie Wang. A survey on benchmarks of multimodal large language models, 2024." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 107, + 314, + 505, + 346 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 314, + 505, + 346 + ], + "spans": [ + { + "bbox": [ + 107, + 314, + 505, + 346 + ], + "type": "text", + "content": "[26] Ming Li, Chenguang Wang, Yijun Liang, Xiyao Wang, Yuhang Zhou, Xiyang Wu, Yuqing Zhang, Ruiyi Zhang, and Tianyi Zhou. Caughtcheating: Is your mllm a good cheating detective? exploring the boundary of visual perception and reasoning. arXiv preprint arXiv:2507.00045, 2025." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 353, + 505, + 374 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 353, + 505, + 374 + ], + "spans": [ + { + "bbox": [ + 107, + 353, + 505, + 374 + ], + "type": "text", + "content": "[27] Ming Li, Ruiyi Zhang, Jian Chen, Jiumiang Gu, Yufan Zhou, Franck Dernoncourt, Wanrong Zhu, Tianyi Zhou, and Tong Sun. Towards visual text grounding of multimodal large language model, 2025." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 107, + 381, + 505, + 403 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 381, + 505, + 403 + ], + "spans": [ + { + "bbox": [ + 107, + 381, + 505, + 403 + ], + "type": "text", + "content": "[28] Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355, 2023." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 107, + 411, + 505, + 441 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 411, + 505, + 441 + ], + "spans": [ + { + "bbox": [ + 107, + 411, + 505, + 441 + ], + "type": "text", + "content": "[29] Zongxia Li, Xiyang Wu, Hongyang Du, Huy Nghiem, and Guangyao Shi. Benchmark evaluations, applications, and challenges of large vision language models: A survey. arXiv preprint arXiv:2501.02189, 2025." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 107, + 449, + 505, + 491 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 449, + 505, + 491 + ], + "spans": [ + { + "bbox": [ + 107, + 449, + 505, + 491 + ], + "type": "text", + "content": "[30] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740-755. Springer, 2014." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 107, + 498, + 505, + 520 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 498, + 505, + 520 + ], + "spans": [ + { + "bbox": [ + 107, + 498, + 505, + 520 + ], + "type": "text", + "content": "[31] Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. Llava-next: Improved reasoning, OCR, and world knowledge, 2024." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 107, + 527, + 505, + 559 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 527, + 505, + 559 + ], + "spans": [ + { + "bbox": [ + 107, + 527, + 505, + 559 + ], + "type": "text", + "content": "[32] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? In European conference on computer vision, pages 216-233. Springer, 2024." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 107, + 566, + 505, + 587 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 566, + 505, + 587 + ], + "spans": [ + { + "bbox": [ + 107, + 566, + 505, + 587 + ], + "type": "text", + "content": "[33] Lingjun Mao, Zineng Tang, and Alane Suhr. Evaluating model perception of color illusions in photorealistic scenes. arXiv preprint arXiv:2412.06184, 2024." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 107, + 594, + 505, + 616 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 594, + 505, + 616 + ], + "spans": [ + { + "bbox": [ + 107, + 594, + 505, + 616 + ], + "type": "text", + "content": "[34] Daniela Mapelli and Marlene Behrmann. The role of color in object recognition: Evidence from visual agnosia. Neurocase, 3(4):237-247, 1997." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 107, + 623, + 505, + 645 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 623, + 505, + 645 + ], + "spans": [ + { + "bbox": [ + 107, + 623, + 505, + 645 + ], + "type": "text", + "content": "[35] OpenAI, Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, and etc. Gpt-4o system card, 2024." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 107, + 652, + 505, + 683 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 652, + 505, + 683 + ], + "spans": [ + { + "bbox": [ + 107, + 652, + 505, + 683 + ], + "type": "text", + "content": "[36] Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In Proceedings of the IEEE international conference on computer vision, pages 2641–2649, 2015." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 107, + 691, + 505, + 721 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 691, + 505, + 721 + ], + "spans": [ + { + "bbox": [ + 107, + 691, + 505, + 721 + ], + "type": "text", + "content": "[37] Ragini Rathore, Zachary Leggon, Laurent Lessard, and Karen B Schloss. Estimating color-concept associations from image statistics. IEEE Transactions on Visualization and Computer Graphics, 26(1): 1226-1235, 2019." 
+ } + ] + } + ], + "index": 18 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 106, + 72, + 506, + 641 + ], + "type": "list", + "angle": 0, + "index": 16, + "blocks": [ + { + "bbox": [ + 106, + 72, + 505, + 105 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 72, + 505, + 105 + ], + "spans": [ + { + "bbox": [ + 106, + 72, + 505, + 105 + ], + "type": "text", + "content": "[38] Tianhe Ren, Shilong Liu, Ailing Zeng, Jing Lin, Kunchang Li, He Cao, Jiayu Chen, Xinyu Huang, Yukang Chen, Feng Yan, et al. Grounded sam: Assembling open-world models for diverse visual tasks. arXiv preprint arXiv:2401.14159, 2024." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 106, + 110, + 505, + 133 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 110, + 505, + 133 + ], + "spans": [ + { + "bbox": [ + 106, + 110, + 505, + 133 + ], + "type": "text", + "content": "[39] Ahnaf Mozib Samin, M Firoz Ahmed, and Md Mushtaq Shahriyar Rafee. Colorfoil: Investigating color blindness in large vision and language models. arXiv preprint arXiv:2405.11685, 2024." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 107, + 138, + 506, + 170 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 138, + 506, + 170 + ], + "spans": [ + { + "bbox": [ + 107, + 138, + 506, + 170 + ], + "type": "text", + "content": "[40] Haz Sameen Shahgir, Khondker Salman Sayeed, Abhik Bhattacharjee, Wasi Uddin Ahmad, Yue Dong, and Rifat Shahriyar. Illusionvqa: A challenging optical illusion dataset for vision language models. 
arXiv preprint arXiv:2403.15952, 2024." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 106, + 176, + 505, + 207 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 176, + 505, + 207 + ], + "spans": [ + { + "bbox": [ + 106, + 176, + 505, + 207 + ], + "type": "text", + "content": "[41] Min Shi, Fuxiao Liu, Shihao Wang, Shijia Liao, Subhashree Radhakrishnan, De-An Huang, Hongxu Yin, Karan Sapra, Yaser Yacoob, Humphrey Shi, et al. Eagle: Exploring the design space for multimodal llms with mixture of encoders. arXiv preprint arXiv:2408.15998, 2024." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 107, + 214, + 506, + 245 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 214, + 506, + 245 + ], + "spans": [ + { + "bbox": [ + 107, + 214, + 506, + 245 + ], + "type": "text", + "content": "[42] Shengbang Tong, Ellis Brown, Penghao Wu, Sanghyun Woo, Manoj Middepogu, Sai Charitha Akula, Jihan Yang, Shusheng Yang, Adithya Iyer, Xichen Pan, et al. Cambrian-1: A fully open, vision-centric exploration of multimodal llms. arXiv preprint arXiv:2406.16860, 2024." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 107, + 251, + 504, + 283 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 251, + 504, + 283 + ], + "spans": [ + { + "bbox": [ + 107, + 251, + 504, + 283 + ], + "type": "text", + "content": "[43] Fei Wang, Xingyu Fu, James Y Huang, Zekun Li, Qin Liu, Xiaogeng Liu, Mingyu Derek Ma, Nan Xu, Wenxuan Zhou, Kai Zhang, et al. Muirbench: A comprehensive benchmark for robust multi-image understanding. arXiv preprint arXiv:2406.09411, 2024." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 107, + 290, + 505, + 320 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 290, + 505, + 320 + ], + "spans": [ + { + "bbox": [ + 107, + 290, + 505, + 320 + ], + "type": "text", + "content": "[44] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837, 2022." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 107, + 327, + 504, + 350 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 327, + 504, + 350 + ], + "spans": [ + { + "bbox": [ + 107, + 327, + 504, + 350 + ], + "type": "text", + "content": "[45] Hanna-Sophia Widhoelzl and Ece Takmaz. Decoding emotions in abstract art: Cognitive plausibility of clip in recognizing color-emotion associations. arXiv preprint arXiv:2405.06319, 2024." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 355, + 505, + 386 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 355, + 505, + 386 + ], + "spans": [ + { + "bbox": [ + 107, + 355, + 505, + 386 + ], + "type": "text", + "content": "[46] Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. Mm-vet: Evaluating large multimodal models for integrated capabilities. arXiv preprint arXiv:2308.02490, 2023." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 107, + 393, + 505, + 434 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 393, + 505, + 434 + ], + "spans": [ + { + "bbox": [ + 107, + 393, + 505, + 434 + ], + "type": "text", + "content": "[47] Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9556-9567, 2024." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 107, + 441, + 504, + 463 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 441, + 504, + 463 + ], + "spans": [ + { + "bbox": [ + 107, + 441, + 504, + 463 + ], + "type": "text", + "content": "[48] Jingyi Zhang, Jiaxing Huang, Sheng Jin, and Shijian Lu. Vision-language models for vision tasks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 107, + 469, + 506, + 499 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 469, + 506, + 499 + ], + "spans": [ + { + "bbox": [ + 107, + 469, + 506, + 499 + ], + "type": "text", + "content": "[49] Jiarui Zhang, Mahyar Khayatkhoei, Prateek Chhikara, and Filip Ilievski. Mllms know where to look: Training-free perception of small visual details with multimodal llms. arXiv preprint arXiv:2502.17422, 2025." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 107, + 506, + 506, + 529 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 506, + 506, + 529 + ], + "spans": [ + { + "bbox": [ + 107, + 506, + 506, + 529 + ], + "type": "text", + "content": "[50] Le Zhang, Rabiul Awal, and Aishwarya Agrawal. Contrasting intra-modal and ranking cross-modal hard negatives to enhance visio-linguistic fine-grained understanding. arXiv preprint arXiv:2306.08832, 2023." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 107, + 535, + 506, + 566 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 535, + 506, + 566 + ], + "spans": [ + { + "bbox": [ + 107, + 535, + 506, + 566 + ], + "type": "text", + "content": "[51] Tiancheng Zhao, Tianqi Zhang, Mingwei Zhu, Haozhan Shen, Kyusong Lee, Xiaopeng Lu, and Jianwei Yin. 
Vl-checklist: Evaluating pre-trained vision-language models with objects, attributes and relations. arXiv preprint arXiv:2207.00221, 2022." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 107, + 573, + 506, + 604 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 573, + 506, + 604 + ], + "spans": [ + { + "bbox": [ + 107, + 573, + 506, + 604 + ], + "type": "text", + "content": "[52] Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ade20k dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 633-641, 2017." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 107, + 610, + 506, + 641 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 610, + 506, + 641 + ], + "spans": [ + { + "bbox": [ + 107, + 610, + 506, + 641 + ], + "type": "text", + "content": "[53] Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Semantic understanding of scenes through the ade20k dataset. International Journal of Computer Vision, 127:302-321, 2019." 
+ } + ] + } + ], + "index": 15 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 71, + 269, + 86 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 71, + 269, + 86 + ], + "spans": [ + { + "bbox": [ + 105, + 71, + 269, + 86 + ], + "type": "text", + "content": "Table of Contents for Appendix" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 106, + 99, + 505, + 110 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 99, + 505, + 110 + ], + "spans": [ + { + "bbox": [ + 106, + 99, + 505, + 110 + ], + "type": "text", + "content": "A Related Works 14" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 120, + 114, + 505, + 143 + ], + "type": "list", + "angle": 0, + "index": 4, + "blocks": [ + { + "bbox": [ + 120, + 114, + 505, + 126 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 120, + 114, + 505, + 126 + ], + "spans": [ + { + "bbox": [ + 120, + 114, + 505, + 126 + ], + "type": "text", + "content": "A.1 VLM Benchmarks 14" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 121, + 131, + 504, + 143 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 121, + 131, + 504, + 143 + ], + "spans": [ + { + "bbox": [ + 121, + 131, + 504, + 143 + ], + "type": "text", + "content": "A.2 Color Evaluation 14" + } + ] + } + ], + "index": 3 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 106, + 158, + 505, + 169 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 158, + 505, + 169 + ], + "spans": [ + { + "bbox": [ + 106, + 158, + 505, + 169 + ], + "type": "text", + 
"content": "B Data Sources 14" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 106, + 184, + 505, + 195 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 184, + 505, + 195 + ], + "spans": [ + { + "bbox": [ + 106, + 184, + 505, + 195 + ], + "type": "text", + "content": "C Detailed Generation Process for Robustness 15" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 106, + 211, + 505, + 222 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 211, + 505, + 222 + ], + "spans": [ + { + "bbox": [ + 106, + 211, + 505, + 222 + ], + "type": "text", + "content": "D COLORBENCH Categories and Questions 15" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 106, + 237, + 505, + 248 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 237, + 505, + 248 + ], + "spans": [ + { + "bbox": [ + 106, + 237, + 505, + 248 + ], + "type": "text", + "content": "E Implementation Details 19" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 106, + 263, + 505, + 274 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 263, + 505, + 274 + ], + "spans": [ + { + "bbox": [ + 106, + 263, + 505, + 274 + ], + "type": "text", + "content": "F Evaluation Prompts 19" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 106, + 289, + 505, + 300 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 289, + 505, + 300 + ], + "spans": [ + { + "bbox": [ + 106, + 289, + 505, + 300 + ], + "type": "text", + "content": "G Human Evaluation 19" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 106, + 316, + 505, + 327 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 316, + 505, + 327 + ], + "spans": [ + { + "bbox": [ + 106, + 316, + 505, + 327 + ], + "type": "text", + "content": "H Reasoning Models with Thinking Process 19" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 106, + 342, + 505, + 354 + ], + "type": "title", + "angle": 0, + "lines": [ + { 
+ "bbox": [ + 106, + 342, + 505, + 354 + ], + "spans": [ + { + "bbox": [ + 106, + 342, + 505, + 354 + ], + "type": "text", + "content": "I Qualitative Analysis of Failure Cases 20" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 106, + 369, + 505, + 380 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 369, + 505, + 380 + ], + "spans": [ + { + "bbox": [ + 106, + 369, + 505, + 380 + ], + "type": "text", + "content": "J Effect of Different Modalities 24" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 106, + 395, + 505, + 407 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 395, + 505, + 407 + ], + "spans": [ + { + "bbox": [ + 106, + 395, + 505, + 407 + ], + "type": "text", + "content": "K Fine-tuning Experiments on ColorBench 24" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 106, + 422, + 505, + 432 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 422, + 505, + 432 + ], + "spans": [ + { + "bbox": [ + 106, + 422, + 505, + 432 + ], + "type": "text", + "content": "L More Visualizations 25" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 121, + 437, + 505, + 482 + ], + "type": "list", + "angle": 0, + "index": 19, + "blocks": [ + { + "bbox": [ + 121, + 437, + 505, + 449 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 121, + 437, + 505, + 449 + ], + "spans": [ + { + "bbox": [ + 121, + 437, + 505, + 449 + ], + "type": "text", + "content": "L.1 VLM Size & Model Performance for Each Task 25" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 121, + 454, + 505, + 465 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 121, + 454, + 505, + 465 + ], + "spans": [ + { + "bbox": [ + 121, + 454, + 505, + 465 + ], + "type": "text", + "content": "L.2 Vision Size & Model Performance for Each Task 27" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 121, + 470, + 505, + 482 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ 
+ 121, + 470, + 505, + 482 + ], + "spans": [ + { + "bbox": [ + 121, + 470, + 505, + 482 + ], + "type": "text", + "content": "L.3 Performance for Each Model Family on Each Task 28" + } + ] + } + ], + "index": 18 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 106, + 497, + 505, + 509 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 497, + 505, + 509 + ], + "spans": [ + { + "bbox": [ + 106, + 497, + 505, + 509 + ], + "type": "text", + "content": "M Samples Cases 30" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 121, + 513, + 505, + 590 + ], + "type": "list", + "angle": 0, + "index": 26, + "blocks": [ + { + "bbox": [ + 121, + 513, + 505, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 121, + 513, + 505, + 525 + ], + "spans": [ + { + "bbox": [ + 121, + 513, + 505, + 525 + ], + "type": "text", + "content": "M.1 Effect of CoT 30" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 121, + 529, + 505, + 541 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 121, + 529, + 505, + 541 + ], + "spans": [ + { + "bbox": [ + 121, + 529, + 505, + 541 + ], + "type": "text", + "content": "M.2 Effect of Grayscale 35" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 121, + 545, + 505, + 557 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 121, + 545, + 505, + 557 + ], + "spans": [ + { + "bbox": [ + 121, + 545, + 505, + 557 + ], + "type": "text", + "content": "M.3 Failure with LLM and Vision 36" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 121, + 563, + 505, + 574 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 121, + 563, + 505, + 574 + ], + "spans": [ + { + "bbox": [ + 121, + 563, + 505, + 574 + ], + "type": "text", + "content": "M.4 Easy Cases 37" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 121, + 578, + 505, + 590 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 121, + 578, + 505, + 590 + ], + "spans": [ + { + "bbox": 
[ + 121, + 578, + 505, + 590 + ], + "type": "text", + "content": "M.5 Difficult Cases 39" + } + ] + } + ], + "index": 25 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 27 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 107, + 71, + 205, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 71, + 205, + 83 + ], + "spans": [ + { + "bbox": [ + 107, + 71, + 205, + 83 + ], + "type": "text", + "content": "A Related Works" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 107, + 96, + 212, + 108 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 96, + 212, + 108 + ], + "spans": [ + { + "bbox": [ + 107, + 96, + 212, + 108 + ], + "type": "text", + "content": "A.1 VLM Benchmarks" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 107, + 118, + 506, + 304 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 118, + 506, + 304 + ], + "spans": [ + { + "bbox": [ + 107, + 118, + 506, + 304 + ], + "type": "text", + "content": "With the rapid advancements in Vision-Language Models (VLMs) [9], numerous benchmarks have emerged to systematically evaluate VLM capabilities across diverse dimensions [29]. These benchmarks generally fall into two categories: text-centric and vision-centric evaluations, each designed to assess distinct multimodal competencies. Text-centric benchmarks primarily measure commonsense knowledge, reasoning, and complex problem-solving capabilities, exemplified by tasks in MMMU [47] and NaturalBench [23]. 
Conversely, vision-centric benchmarks focus on visual perception and reasoning (MMBench [32] and MME [10]), and robustness to visual perturbations (Grit [14] and Visual Robustness [17]). Furthermore, several benchmarks have extended their scope to evaluate specialized visual tasks, such as spatial relationship comprehension (SEED-Bench [22] and MM-Vet [46]), chart and map understanding (MMSTAR [4] and MuirBench [43]), visual grounding (Flickr30k [36] and TRIG [27]) and the detection and understanding of visual hallucinations (POPE [28] and HallusionBench [13]). However, despite the extensive scope covered by existing VLM benchmarks, none currently provide an integrated evaluation that simultaneously assesses visual perception, reasoning, and robustness within a unified framework. Moreover, although certain benchmarks [32, 10] have incorporated color-related questions, these have typically addressed basic color perception and recognition, neglecting deeper assessments of reasoning and robustness associated with color understanding." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 319, + 205, + 330 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 319, + 205, + 330 + ], + "spans": [ + { + "bbox": [ + 107, + 319, + 205, + 330 + ], + "type": "text", + "content": "A.2 Color Evaluation" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 107, + 340, + 506, + 503 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 340, + 506, + 503 + ], + "spans": [ + { + "bbox": [ + 107, + 340, + 506, + 503 + ], + "type": "text", + "content": "Color understanding is increasingly recognized as a crucial aspect of Vision-Language Models' ability to perceive and interpret visual content. Limited studies have explored how color information influences model performance on specific tasks. 
Some studies [51, 50] explore the understanding of color by replacing color-related words in textual inputs to evaluate the models' ability to handle color-specific information. More recent research [16, 21] focuses on assessing fine-grained color discrimination by asking models to distinguish subtle color differences in visual inputs. Samin et al. [39] introduced color-related foils to test VLMs' capacity to recognize basic colors like red, white, and green, particularly in contexts requiring attention to subtle cues. Additionally, Burapacheep et al. [3] developed a benchmark dataset to evaluate and enhance compositional color comprehension in VLMs, emphasizing tasks where understanding minimal color relationships is essential. IllusionVQA [40] evaluates model perception of color illusions in photorealistic scenes. While these works have addressed isolated aspects of color understanding, none have provided a holistic assessment framework. In contrast to these previous works, our study establishes the first comprehensive and specialized benchmark for evaluating the color-related abilities of VLMs, offering a quantitative, automated approach to further this area of research." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 107, + 522, + 195, + 534 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 522, + 195, + 534 + ], + "spans": [ + { + "bbox": [ + 107, + 522, + 195, + 534 + ], + "type": "text", + "content": "B Data Sources" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 107, + 548, + 503, + 570 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 548, + 503, + 570 + ], + "spans": [ + { + "bbox": [ + 107, + 548, + 503, + 570 + ], + "type": "text", + "content": "We construct COLORBENCH from multiple sources, including website sources, publicly available benchmarks, and generated images. The detailed sources are included in Table 5."
+ } + ] + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 187, + 602, + 421, + 720 + ], + "blocks": [ + { + "bbox": [ + 230, + 591, + 380, + 601 + ], + "lines": [ + { + "bbox": [ + 230, + 591, + 380, + 601 + ], + "spans": [ + { + "bbox": [ + 230, + 591, + 380, + 601 + ], + "type": "text", + "content": "Table 5: Data sources for each task." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 187, + 602, + 421, + 720 + ], + "lines": [ + { + "bbox": [ + 187, + 602, + 421, + 720 + ], + "spans": [ + { + "bbox": [ + 187, + 602, + 421, + 720 + ], + "type": "table", + "html": "
CategoryData Source
Color RecognitionWebsite, ICAA17K [15]
Object RecognitionWebsite, ICAA17K [15]
Color ExtractionSynthetic Data
Color ProportionWebsite, Synthetic Data
Color ComparisonWebsite
Color CountingWebsite, Synthetic Data
Object CountingWebsite, ADE20K [52, 53], COCO2017 [30]
Color MimicryWebsite, IllusionVQA[40], RCID[33]
Color BlindnessSynthetic Data
Color RobustnessCV-Bench[42]
", + "image_path": "2dd1bfc5751632f7ce11efe4e26cf20e287a4f3b05c3a4b28555ebfedf64c283.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_body" + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 301, + 742, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 742, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 301, + 742, + 310, + 750 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 126, + 88, + 481, + 169 + ], + "blocks": [ + { + "bbox": [ + 240, + 78, + 369, + 88 + ], + "lines": [ + { + "bbox": [ + 240, + 78, + 369, + 88 + ], + "spans": [ + { + "bbox": [ + 240, + 78, + 369, + 88 + ], + "type": "text", + "content": "Table 6: Recoloring strategies." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 126, + 88, + 481, + 169 + ], + "lines": [ + { + "bbox": [ + 126, + 88, + 481, + 169 + ], + "spans": [ + { + "bbox": [ + 126, + 88, + 481, + 169 + ], + "type": "table", + "html": "
StrategyEditing RegionPurpose
Entire ImageWhole imageAssesses the model's robustness to global color shifts
Target SegmentSegment containing the object referenced in the questionEvaluates the model's sensitivity to task-relevant color changes
Largest SegmentThe largest segment that is irrelevant to the questionTests whether changes in dominant but unrelated regions affect model predictions
", + "image_path": "02f9a5ca0b385b537a0fcb5b31aec27978d46e340a9943aae0e3b963a4a2fd0c.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 176, + 353, + 189 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 176, + 353, + 189 + ], + "spans": [ + { + "bbox": [ + 105, + 176, + 353, + 189 + ], + "type": "text", + "content": "C Detailed Generation Process for Robustness" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 201, + 506, + 354 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 201, + 506, + 354 + ], + "spans": [ + { + "bbox": [ + 104, + 201, + 506, + 354 + ], + "type": "text", + "content": "For the Color Robustness task, we evaluate the consistency of VLMs when faced with instances that differ only in the color of the visual input. To systematically assess this effect, we define three recoloring strategies that determine which part of the image is altered: i) Target Segment, ii) Largest Segment, and iii) Entire Image. As described in Table 6, the Target Segment strategy recolors only the segment containing the object referenced in the question. This strategy ensures that the modification directly affects the model's perception of task-relevant content. The Largest Segment strategy alters the color of the largest segment that is irrelevant to the question, testing whether models are distracted by dominant but unrelated visual changes. In contrast, the Entire Image strategy applies a global color shift to evaluate the model's sensitivity to overall color variations. As summarized in Table 6, the first two strategies introduce localized modifications, while the third assesses robustness to broader image-wide color changes. Importantly, only color attributes are altered without modifying object shapes or contextual elements, which preserves the overall realism of the image. 
By incorporating both task-relevant and irrelevant edits, our benchmark provides a comprehensive evaluation of VLMs' ability to handle color perturbations across different contexts." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 359, + 506, + 545 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 359, + 506, + 545 + ], + "spans": [ + { + "bbox": [ + 104, + 359, + 506, + 545 + ], + "type": "text", + "content": "While generating color variations, we derive seed images from CV-Bench [42], a publicly available benchmark. For each seed image, as shown in Figure 3, we first employ a Grounded Segmentation Model (GAM) [38] to extract segments and their corresponding labels. We then apply the predefined recoloring strategies to determine the editing region and modify its color. Since Saturation controls a color's purity and Value its brightness, while Hue alone determines the color itself, we adjust only the Hue channel in HSV color space. Specifically, we shift the Hue by " + }, + { + "bbox": [ + 104, + 359, + 506, + 545 + ], + "type": "inline_equation", + "content": "90^{\\circ}" + }, + { + "bbox": [ + 104, + 359, + 506, + 545 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 104, + 359, + 506, + 545 + ], + "type": "inline_equation", + "content": "180^{\\circ}" + }, + { + "bbox": [ + 104, + 359, + 506, + 545 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 104, + 359, + 506, + 545 + ], + "type": "inline_equation", + "content": "270^{\\circ}" + }, + { + "bbox": [ + 104, + 359, + 506, + 545 + ], + "type": "text", + "content": ". These three values ensure that the color manipulations cover significant perceptual differences across the color spectrum. This process produces nine variations per seed image, covering different strategies and degrees of color change to enable a comprehensive robustness assessment. 
To ensure interpretability, human experts filter out unnatural or negligible modifications, resulting in a final selection of 493 seed images for robustness evaluation. Additionally, we select questions that are color-invariant, which means answers remain valid regardless of whether the recoloring appears fully natural. This design choice isolates color variation as the sole variable of interest and prevents confounding effects from semantic or contextual changes. Through these steps, we evaluate whether VLMs rely excessively on color information and whether they maintain consistency in their predictions despite substantial color shifts." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 561, + 342, + 574 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 561, + 342, + 574 + ], + "spans": [ + { + "bbox": [ + 105, + 561, + 342, + 574 + ], + "type": "text", + "content": "D COLORBENCH Categories and Questions" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 585, + 505, + 619 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 585, + 505, + 619 + ], + "spans": [ + { + "bbox": [ + 104, + 585, + 505, + 619 + ], + "type": "text", + "content": "Table 7 provides a detailed description of each task, alongside representative figures and sample questions that effectively demonstrate the specific capabilities being tested. Cases are provided for each task in Figure 6 to 16." 
+ } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 126, + 100, + 482, + 441 + ], + "blocks": [ + { + "bbox": [ + 185, + 82, + 425, + 94 + ], + "lines": [ + { + "bbox": [ + 185, + 82, + 425, + 94 + ], + "spans": [ + { + "bbox": [ + 185, + 82, + 425, + 94 + ], + "type": "text", + "content": "Table 7: Task and question definition in COLORBENCH." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 126, + 100, + 482, + 441 + ], + "lines": [ + { + "bbox": [ + 126, + 100, + 482, + 441 + ], + "spans": [ + { + "bbox": [ + 126, + 100, + 482, + 441 + ], + "type": "table", + "html": "
Task#Sample CaseDescriptionSample Questions
PerceptionColor Recognition76Figure 6Ask for the color of a specific object or determine if a particular color is present in the image.What is the color of object in this image? What color does not exist in this image?
Color Extraction96Figure 7Extract the color code value (e.g., RGB, HSV, or HEX) from a single color in the image.What is the HSV value of the given color in the image? What is the RGB value of the given color in the image?
Object Recognition77Figure 8Identify objects in the image that match a specified color noted in the text input.What object has a color of pink in this image?
ReasoningColor Proportion80Figure 9Estimate the relative area occupied by a specified color in the image.What is the dominant color in this image? What is the closest to the proportion of the red color in the image?
Color Comparison101Figure 10Distinguish among multiple colors present in the image to assess overall tones and shades.Which photo is warmer in overall color? Which object has a darker color in the image?
Color Counting102Figure 11Identify the number of unique colors present in the image.How many different colors are in this image?
Object Counting103Figure 12Count the number of objects of a specified color present in the image.How many objects with green color are in this image?
Color Illusion93Figure 13Assess and compare colors in potential illusionary settings within the image.Do two objects have the same color?
Color Mimicry70Figure 14Detect objects that are camouflaged within their surroundings, where color is a key deceptive element.How many animals are in this image?
Color Blindness157Figure 15Recognize numbers or text that are embedded in color patterns, often used in tests for color vision.What is the number in the center of the image?
", + "image_path": "c37486eaabc8fc97ed4c652a07c5ed8f34be28cbd367fb740ec38f9e3701d520.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 129, + 477, + 203, + 488 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 477, + 203, + 488 + ], + "spans": [ + { + "bbox": [ + 129, + 477, + 203, + 488 + ], + "type": "text", + "content": "Color Recognition" + } + ] + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 127, + 495, + 194, + 559 + ], + "blocks": [ + { + "bbox": [ + 127, + 495, + 194, + 559 + ], + "lines": [ + { + "bbox": [ + 127, + 495, + 194, + 559 + ], + "spans": [ + { + "bbox": [ + 127, + 495, + 194, + 559 + ], + "type": "image", + "image_path": "847c4f60e625d3da8a95598b72a86020f1499a6eb7fb0561c7faefa861ffbce6.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 211, + 567, + 399, + 578 + ], + "lines": [ + { + "bbox": [ + 211, + 567, + 399, + 578 + ], + "spans": [ + { + "bbox": [ + 211, + 567, + 399, + 578 + ], + "type": "text", + "content": "Figure 6: Cases for Color Recognition Task." + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 196, + 499, + 290, + 506 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 196, + 499, + 290, + 506 + ], + "spans": [ + { + "bbox": [ + 196, + 499, + 290, + 506 + ], + "type": "text", + "content": "What is the color of the banana in this" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 196, + 508, + 215, + 515 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 196, + 508, + 215, + 515 + ], + "spans": [ + { + "bbox": [ + 196, + 508, + 215, + 515 + ], + "type": "text", + "content": "image?" 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 196, + 518, + 213, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 196, + 518, + 213, + 525 + ], + "spans": [ + { + "bbox": [ + 196, + 518, + 213, + 525 + ], + "type": "text", + "content": "A: Red" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 196, + 527, + 218, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 196, + 527, + 218, + 533 + ], + "spans": [ + { + "bbox": [ + 196, + 527, + 218, + 533 + ], + "type": "text", + "content": "C:Yellow" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 196, + 536, + 246, + 543 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 196, + 536, + 246, + 543 + ], + "spans": [ + { + "bbox": [ + 196, + 536, + 246, + 543 + ], + "type": "text", + "content": "E: None of the above" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 196, + 546, + 214, + 552 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 196, + 546, + 214, + 552 + ], + "spans": [ + { + "bbox": [ + 196, + 546, + 214, + 552 + ], + "type": "text", + "content": "Ans: E" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 250, + 518, + 255, + 524 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 518, + 255, + 524 + ], + "spans": [ + { + "bbox": [ + 250, + 518, + 255, + 524 + ], + "type": "text", + "content": "en" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 250, + 527, + 255, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 527, + 255, + 533 + ], + "spans": [ + { + "bbox": [ + 250, + 527, + 255, + 533 + ], + "type": "text", + "content": "k" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 246, + 536, + 250, + 543 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 536, + 250, + 543 + ], + "spans": [ + { + "bbox": [ + 246, + 536, + 250, + 543 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 12 + }, + { + "type": 
"image", + "bbox": [ + 301, + 507, + 375, + 553 + ], + "blocks": [ + { + "bbox": [ + 301, + 507, + 375, + 553 + ], + "lines": [ + { + "bbox": [ + 301, + 507, + 375, + 553 + ], + "spans": [ + { + "bbox": [ + 301, + 507, + 375, + 553 + ], + "type": "image", + "image_path": "93a27658ebd2c5c8731b22d0f66a24ef38811798b21d2aed42890da244cb3bbc.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + } + ], + "index": 13 + }, + { + "bbox": [ + 377, + 499, + 476, + 506 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 377, + 499, + 476, + 506 + ], + "spans": [ + { + "bbox": [ + 377, + 499, + 476, + 506 + ], + "type": "text", + "content": "What color does not exist in this image?" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 377, + 518, + 399, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 377, + 518, + 399, + 525 + ], + "spans": [ + { + "bbox": [ + 377, + 518, + 399, + 525 + ], + "type": "text", + "content": "A:Green" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 377, + 527, + 395, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 377, + 527, + 395, + 533 + ], + "spans": [ + { + "bbox": [ + 377, + 527, + 395, + 533 + ], + "type": "text", + "content": "C:Red" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 377, + 545, + 395, + 552 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 377, + 545, + 395, + 552 + ], + "spans": [ + { + "bbox": [ + 377, + 545, + 395, + 552 + ], + "type": "text", + "content": "Ans: C" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 413, + 518, + 435, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 413, + 518, + 435, + 525 + ], + "spans": [ + { + "bbox": [ + 413, + 518, + 435, + 525 + ], + "type": "text", + "content": "B:White" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 414, + 527, + 434, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 414, + 527, + 434, + 
533 + ], + "spans": [ + { + "bbox": [ + 414, + 527, + 434, + 533 + ], + "type": "text", + "content": "D: Black" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 129, + 614, + 198, + 624 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 614, + 198, + 624 + ], + "spans": [ + { + "bbox": [ + 129, + 614, + 198, + 624 + ], + "type": "text", + "content": "Color Extraction" + } + ] + } + ], + "index": 21 + }, + { + "type": "image", + "bbox": [ + 130, + 631, + 190, + 690 + ], + "blocks": [ + { + "bbox": [ + 130, + 631, + 190, + 690 + ], + "lines": [ + { + "bbox": [ + 130, + 631, + 190, + 690 + ], + "spans": [ + { + "bbox": [ + 130, + 631, + 190, + 690 + ], + "type": "image", + "image_path": "a02c7368ef7054fc8fa6a2c0d8c8c929988f22d64fb1347be844baea5b8b688d.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + } + ], + "index": 22 + }, + { + "bbox": [ + 195, + 637, + 295, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 637, + 295, + 651 + ], + "spans": [ + { + "bbox": [ + 195, + 637, + 295, + 651 + ], + "type": "text", + "content": "What is the HSV value of the given color in the image?" 
+ } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 195, + 651, + 232, + 657 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 651, + 232, + 657 + ], + "spans": [ + { + "bbox": [ + 195, + 651, + 232, + 657 + ], + "type": "text", + "content": "A: [100, 51, 81]" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 195, + 657, + 237, + 663 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 657, + 237, + 663 + ], + "spans": [ + { + "bbox": [ + 195, + 657, + 237, + 663 + ], + "type": "text", + "content": "C: [331, 100, 100]" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 195, + 666, + 213, + 672 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 666, + 213, + 672 + ], + "spans": [ + { + "bbox": [ + 195, + 666, + 213, + 672 + ], + "type": "text", + "content": "Ans: D" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 250, + 652, + 290, + 658 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 652, + 290, + 658 + ], + "spans": [ + { + "bbox": [ + 250, + 652, + 290, + 658 + ], + "type": "text", + "content": "B: [329, 98, 100]" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 250, + 658, + 293, + 664 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 658, + 293, + 664 + ], + "spans": [ + { + "bbox": [ + 250, + 658, + 293, + 664 + ], + "type": "text", + "content": "D:[329,100,100]" + } + ] + } + ], + "index": 28 + }, + { + "type": "image", + "bbox": [ + 307, + 631, + 367, + 690 + ], + "blocks": [ + { + "bbox": [ + 307, + 631, + 367, + 690 + ], + "lines": [ + { + "bbox": [ + 307, + 631, + 367, + 690 + ], + "spans": [ + { + "bbox": [ + 307, + 631, + 367, + 690 + ], + "type": "image", + "image_path": "1b37b28329678a654e39a0697054f7a40e8872fd6c0581a7e3548f4779bda5a8.jpg" + } + ] + } + ], + "index": 29, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 214, + 696, + 395, + 708 + ], + "lines": [ + { + "bbox": [ + 214, + 696, + 395, + 
708 + ], + "spans": [ + { + "bbox": [ + 214, + 696, + 395, + 708 + ], + "type": "text", + "content": "Figure 7: Cases for Color Extraction Task." + } + ] + } + ], + "index": 36, + "angle": 0, + "type": "image_caption" + } + ], + "index": 29 + }, + { + "bbox": [ + 376, + 637, + 470, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 376, + 637, + 470, + 651 + ], + "spans": [ + { + "bbox": [ + 376, + 637, + 470, + 651 + ], + "type": "text", + "content": "Q: What is the HSV value of the given color in the image?" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 377, + 651, + 414, + 657 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 377, + 651, + 414, + 657 + ], + "spans": [ + { + "bbox": [ + 377, + 651, + 414, + 657 + ], + "type": "text", + "content": "A: [47, 62, 100]" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 377, + 657, + 414, + 664 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 377, + 657, + 414, + 664 + ], + "spans": [ + { + "bbox": [ + 377, + 657, + 414, + 664 + ], + "type": "text", + "content": "C: [45, 64, 100]" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 433, + 652, + 468, + 658 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 433, + 652, + 468, + 658 + ], + "spans": [ + { + "bbox": [ + 433, + 652, + 468, + 658 + ], + "type": "text", + "content": "B: [107, 16, 22]" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 433, + 658, + 469, + 664 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 433, + 658, + 469, + 664 + ], + "spans": [ + { + "bbox": [ + 433, + 658, + 469, + 664 + ], + "type": "text", + "content": "D: [45, 62, 100]" + } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 377, + 666, + 395, + 672 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 377, + 666, + 395, + 672 + ], + "spans": [ + { + "bbox": [ + 377, + 666, + 395, + 672 + ], + "type": "text", + "content": "Ans: D" + } + ] + } + ], + "index": 35 
+ } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 312, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 312, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 312, + 750 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 37 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "bbox": [ + 129, + 77, + 206, + 88 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 77, + 206, + 88 + ], + "spans": [ + { + "bbox": [ + 129, + 77, + 206, + 88 + ], + "type": "text", + "content": "Object Recognition" + } + ] + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 126, + 103, + 192, + 143 + ], + "blocks": [ + { + "bbox": [ + 126, + 103, + 192, + 143 + ], + "lines": [ + { + "bbox": [ + 126, + 103, + 192, + 143 + ], + "spans": [ + { + "bbox": [ + 126, + 103, + 192, + 143 + ], + "type": "image", + "image_path": "18c760c4ae1520c81e0481fb54b7507248b59275ff01d03eaf3d1cd7c636663f.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 193, + 103, + 276, + 118 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 193, + 103, + 276, + 118 + ], + "spans": [ + { + "bbox": [ + 193, + 103, + 276, + 118 + ], + "type": "text", + "content": "Which state does not have a color of pink in this image?" 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 194, + 118, + 221, + 124 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 118, + 221, + 124 + ], + "spans": [ + { + "bbox": [ + 194, + 118, + 221, + 124 + ], + "type": "text", + "content": "A: Montana" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 194, + 125, + 222, + 130 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 125, + 222, + 130 + ], + "spans": [ + { + "bbox": [ + 194, + 125, + 222, + 130 + ], + "type": "text", + "content": "C: Michigan" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 194, + 131, + 211, + 137 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 131, + 211, + 137 + ], + "spans": [ + { + "bbox": [ + 194, + 131, + 211, + 137 + ], + "type": "text", + "content": "Ans: D" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 229, + 125, + 259, + 130 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 229, + 125, + 259, + 130 + ], + "spans": [ + { + "bbox": [ + 229, + 125, + 259, + 130 + ], + "type": "text", + "content": "D:New York" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 229, + 131, + 259, + 137 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 229, + 131, + 259, + 137 + ], + "spans": [ + { + "bbox": [ + 229, + 131, + 259, + 137 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 295, + 90, + 369, + 152 + ], + "blocks": [ + { + "bbox": [ + 295, + 90, + 369, + 152 + ], + "lines": [ + { + "bbox": [ + 295, + 90, + 369, + 152 + ], + "spans": [ + { + "bbox": [ + 295, + 90, + 369, + 152 + ], + "type": "image", + "image_path": "cfd76bcaade75240c9606f3672221aa8ff31006fc41108e3930797fad4e317d5.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 209, + 157, + 400, + 169 + ], + "lines": [ + { + "bbox": [ + 209, + 157, + 400, + 169 + ], + "spans": [ + { + "bbox": [ + 209, + 
157, + 400, + 169 + ], + "type": "text", + "content": "Figure 8: Cases for Object Recognition Task." + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "bbox": [ + 371, + 103, + 469, + 118 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 371, + 103, + 469, + 118 + ], + "spans": [ + { + "bbox": [ + 371, + 103, + 469, + 118 + ], + "type": "text", + "content": "Which object has a color of black in this image?" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 371, + 118, + 433, + 124 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 371, + 118, + 433, + 124 + ], + "spans": [ + { + "bbox": [ + 371, + 118, + 433, + 124 + ], + "type": "text", + "content": "A: Background B: Banana" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 371, + 125, + 432, + 131 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 371, + 125, + 432, + 131 + ], + "spans": [ + { + "bbox": [ + 371, + 125, + 432, + 131 + ], + "type": "text", + "content": "C:Apple D:Orange" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 371, + 131, + 389, + 137 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 371, + 131, + 389, + 137 + ], + "spans": [ + { + "bbox": [ + 371, + 131, + 389, + 137 + ], + "type": "text", + "content": "Ans: C" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 129, + 185, + 199, + 196 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 185, + 199, + 196 + ], + "spans": [ + { + "bbox": [ + 129, + 185, + 199, + 196 + ], + "type": "text", + "content": "Color Proportion" + } + ] + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 126, + 206, + 194, + 259 + ], + "blocks": [ + { + "bbox": [ + 126, + 206, + 194, + 259 + ], + "lines": [ + { + "bbox": [ + 126, + 206, + 194, + 259 + ], + "spans": [ + { + "bbox": [ + 126, + 206, + 194, + 259 + ], + "type": "image", + "image_path": 
"0c29ec18819b298f76ebf7a6f58747cce256328df6b98f545ad8b56d5243460e.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + } + ], + "index": 15 + }, + { + "bbox": [ + 195, + 216, + 271, + 222 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 216, + 271, + 222 + ], + "spans": [ + { + "bbox": [ + 195, + 216, + 271, + 222 + ], + "type": "text", + "content": "Which is the dominant color in" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 195, + 223, + 230, + 228 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 223, + 230, + 228 + ], + "spans": [ + { + "bbox": [ + 195, + 223, + 230, + 228 + ], + "type": "text", + "content": "this painting?" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 232, + 228, + 254, + 234 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 232, + 228, + 254, + 234 + ], + "spans": [ + { + "bbox": [ + 232, + 228, + 254, + 234 + ], + "type": "text", + "content": "B:Yellow" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 195, + 235, + 217, + 240 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 235, + 217, + 240 + ], + "spans": [ + { + "bbox": [ + 195, + 235, + 217, + 240 + ], + "type": "text", + "content": "C:Green" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 232, + 235, + 257, + 240 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 232, + 235, + 257, + 240 + ], + "spans": [ + { + "bbox": [ + 232, + 235, + 257, + 240 + ], + "type": "text", + "content": "D:Orange" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 195, + 240, + 214, + 246 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 240, + 214, + 246 + ], + "spans": [ + { + "bbox": [ + 195, + 240, + 214, + 246 + ], + "type": "text", + "content": "Ans: A" + } + ] + } + ], + "index": 21 + }, + { + "type": "image", + "bbox": [ + 299, + 194, + 373, + 269 + ], + "blocks": [ + { + "bbox": [ + 299, + 194, + 373, + 269 + 
], + "lines": [ + { + "bbox": [ + 299, + 194, + 373, + 269 + ], + "spans": [ + { + "bbox": [ + 299, + 194, + 373, + 269 + ], + "type": "image", + "image_path": "32f6062225a61b9023255908621e965eb6ba41bfa8bab62987f76152e77b5086.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 213, + 274, + 396, + 286 + ], + "lines": [ + { + "bbox": [ + 213, + 274, + 396, + 286 + ], + "spans": [ + { + "bbox": [ + 213, + 274, + 396, + 286 + ], + "type": "text", + "content": "Figure 9: Cases for Color Proportion Task." + } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_caption" + } + ], + "index": 22 + }, + { + "bbox": [ + 375, + 216, + 471, + 222 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 375, + 216, + 471, + 222 + ], + "spans": [ + { + "bbox": [ + 375, + 216, + 471, + 222 + ], + "type": "text", + "content": "What is closest to the proportion of the" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 375, + 223, + 433, + 228 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 375, + 223, + 433, + 228 + ], + "spans": [ + { + "bbox": [ + 375, + 223, + 433, + 228 + ], + "type": "text", + "content": "color red in the image?" 
+ } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 376, + 229, + 429, + 235 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 376, + 229, + 429, + 235 + ], + "spans": [ + { + "bbox": [ + 376, + 229, + 429, + 235 + ], + "type": "text", + "content": "A:10% B:20%" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 376, + 235, + 430, + 241 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 376, + 235, + 430, + 241 + ], + "spans": [ + { + "bbox": [ + 376, + 235, + 430, + 241 + ], + "type": "text", + "content": "C:30% D:40%" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 376, + 241, + 428, + 247 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 376, + 241, + 428, + 247 + ], + "spans": [ + { + "bbox": [ + 376, + 241, + 428, + 247 + ], + "type": "text", + "content": "Ans: C" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 129, + 303, + 205, + 314 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 303, + 205, + 314 + ], + "spans": [ + { + "bbox": [ + 129, + 303, + 205, + 314 + ], + "type": "text", + "content": "Color Comparison" + } + ] + } + ], + "index": 29 + }, + { + "type": "image", + "bbox": [ + 127, + 320, + 194, + 365 + ], + "blocks": [ + { + "bbox": [ + 127, + 320, + 194, + 365 + ], + "lines": [ + { + "bbox": [ + 127, + 320, + 194, + 365 + ], + "spans": [ + { + "bbox": [ + 127, + 320, + 194, + 365 + ], + "type": "image", + "image_path": "3895f7a993c176931085bf834b9296b28c562d90587c8c53b8684f4dd554cc97.jpg" + } + ] + } + ], + "index": 30, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 208, + 379, + 402, + 391 + ], + "lines": [ + { + "bbox": [ + 208, + 379, + 402, + 391 + ], + "spans": [ + { + "bbox": [ + 208, + 379, + 402, + 391 + ], + "type": "text", + "content": "Figure 10: Cases for Color Comparison Task." 
+ } + ] + } + ], + "index": 43, + "angle": 0, + "type": "image_caption" + } + ], + "index": 30 + }, + { + "bbox": [ + 195, + 323, + 294, + 331 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 323, + 294, + 331 + ], + "spans": [ + { + "bbox": [ + 195, + 323, + 294, + 331 + ], + "type": "text", + "content": "Which photo is warmer in overall color?" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 195, + 343, + 230, + 350 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 343, + 230, + 350 + ], + "spans": [ + { + "bbox": [ + 195, + 343, + 230, + 350 + ], + "type": "text", + "content": "A: The left one" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 195, + 352, + 233, + 359 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 352, + 233, + 359 + ], + "spans": [ + { + "bbox": [ + 195, + 352, + 233, + 359 + ], + "type": "text", + "content": "B: The right one" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 195, + 361, + 214, + 367 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 361, + 214, + 367 + ], + "spans": [ + { + "bbox": [ + 195, + 361, + 214, + 367 + ], + "type": "text", + "content": "Ans: B" + } + ] + } + ], + "index": 34 + }, + { + "type": "image", + "bbox": [ + 299, + 323, + 373, + 367 + ], + "blocks": [ + { + "bbox": [ + 299, + 323, + 373, + 367 + ], + "lines": [ + { + "bbox": [ + 299, + 323, + 373, + 367 + ], + "spans": [ + { + "bbox": [ + 299, + 323, + 373, + 367 + ], + "type": "image", + "image_path": "c9df2e9b61580feeede61431af686096da173946a751c8558d27c9ce338b6322.jpg" + } + ] + } + ], + "index": 35, + "angle": 0, + "type": "image_body" + } + ], + "index": 35 + }, + { + "bbox": [ + 376, + 324, + 473, + 331 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 376, + 324, + 473, + 331 + ], + "spans": [ + { + "bbox": [ + 376, + 324, + 473, + 331 + ], + "type": "text", + "content": "Which dog has the darkest color in the" + } + ] 
+ } + ], + "index": 36 + }, + { + "bbox": [ + 376, + 333, + 396, + 340 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 376, + 333, + 396, + 340 + ], + "spans": [ + { + "bbox": [ + 376, + 333, + 396, + 340 + ], + "type": "text", + "content": "image?" + } + ] + } + ], + "index": 37 + }, + { + "bbox": [ + 377, + 342, + 395, + 349 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 377, + 342, + 395, + 349 + ], + "spans": [ + { + "bbox": [ + 377, + 342, + 395, + 349 + ], + "type": "text", + "content": "A: No.1" + } + ] + } + ], + "index": 38 + }, + { + "bbox": [ + 417, + 343, + 435, + 349 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 417, + 343, + 435, + 349 + ], + "spans": [ + { + "bbox": [ + 417, + 343, + 435, + 349 + ], + "type": "text", + "content": "B: No.4" + } + ] + } + ], + "index": 39 + }, + { + "bbox": [ + 377, + 351, + 396, + 357 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 377, + 351, + 396, + 357 + ], + "spans": [ + { + "bbox": [ + 377, + 351, + 396, + 357 + ], + "type": "text", + "content": "C.No.5" + } + ] + } + ], + "index": 40 + }, + { + "bbox": [ + 417, + 351, + 435, + 357 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 417, + 351, + 435, + 357 + ], + "spans": [ + { + "bbox": [ + 417, + 351, + 435, + 357 + ], + "type": "text", + "content": "D.No.3" + } + ] + } + ], + "index": 41 + }, + { + "bbox": [ + 377, + 361, + 395, + 367 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 377, + 361, + 395, + 367 + ], + "spans": [ + { + "bbox": [ + 377, + 361, + 395, + 367 + ], + "type": "text", + "content": "Ans: A" + } + ] + } + ], + "index": 42 + }, + { + "bbox": [ + 128, + 407, + 193, + 418 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 128, + 407, + 193, + 418 + ], + "spans": [ + { + "bbox": [ + 128, + 407, + 193, + 418 + ], + "type": "text", + "content": "Color Counting" + } + ] + } + ], + "index": 44 + }, + { + "type": 
"image", + "bbox": [ + 127, + 421, + 193, + 487 + ], + "blocks": [ + { + "bbox": [ + 127, + 421, + 193, + 487 + ], + "lines": [ + { + "bbox": [ + 127, + 421, + 193, + 487 + ], + "spans": [ + { + "bbox": [ + 127, + 421, + 193, + 487 + ], + "type": "image", + "image_path": "c59a95f242d2784c8810f7e73553fcf63b0050874959eb29f65bbb4b686ffa7e.jpg" + } + ] + } + ], + "index": 45, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 214, + 491, + 395, + 504 + ], + "lines": [ + { + "bbox": [ + 214, + 491, + 395, + 504 + ], + "spans": [ + { + "bbox": [ + 214, + 491, + 395, + 504 + ], + "type": "text", + "content": "Figure 11: Cases for Color Counting Task." + } + ] + } + ], + "index": 60, + "angle": 0, + "type": "image_caption" + } + ], + "index": 45 + }, + { + "bbox": [ + 194, + 430, + 296, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 430, + 296, + 437 + ], + "spans": [ + { + "bbox": [ + 194, + 430, + 296, + 437 + ], + "type": "text", + "content": "How many different colors of flowers are" + } + ] + } + ], + "index": 46 + }, + { + "bbox": [ + 195, + 438, + 231, + 446 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 438, + 231, + 446 + ], + "spans": [ + { + "bbox": [ + 195, + 438, + 231, + 446 + ], + "type": "text", + "content": "in this image?" 
+ } + ] + } + ], + "index": 47 + }, + { + "bbox": [ + 195, + 449, + 205, + 455 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 449, + 205, + 455 + ], + "spans": [ + { + "bbox": [ + 195, + 449, + 205, + 455 + ], + "type": "text", + "content": "A:1" + } + ] + } + ], + "index": 48 + }, + { + "bbox": [ + 236, + 449, + 247, + 455 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 236, + 449, + 247, + 455 + ], + "spans": [ + { + "bbox": [ + 236, + 449, + 247, + 455 + ], + "type": "text", + "content": "B:2" + } + ] + } + ], + "index": 49 + }, + { + "bbox": [ + 195, + 458, + 205, + 464 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 458, + 205, + 464 + ], + "spans": [ + { + "bbox": [ + 195, + 458, + 205, + 464 + ], + "type": "text", + "content": "C:3" + } + ] + } + ], + "index": 50 + }, + { + "bbox": [ + 236, + 459, + 247, + 464 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 236, + 459, + 247, + 464 + ], + "spans": [ + { + "bbox": [ + 236, + 459, + 247, + 464 + ], + "type": "text", + "content": "D:4" + } + ] + } + ], + "index": 51 + }, + { + "bbox": [ + 195, + 468, + 213, + 473 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 468, + 213, + 473 + ], + "spans": [ + { + "bbox": [ + 195, + 468, + 213, + 473 + ], + "type": "text", + "content": "Ans: C" + } + ] + } + ], + "index": 52 + }, + { + "type": "image", + "bbox": [ + 298, + 430, + 375, + 475 + ], + "blocks": [ + { + "bbox": [ + 298, + 430, + 375, + 475 + ], + "lines": [ + { + "bbox": [ + 298, + 430, + 375, + 475 + ], + "spans": [ + { + "bbox": [ + 298, + 430, + 375, + 475 + ], + "type": "image", + "image_path": "187728bef0463527b053b025dc76e89d6d940087929b400dc905b95ef1255834.jpg" + } + ] + } + ], + "index": 53, + "angle": 0, + "type": "image_body" + } + ], + "index": 53 + }, + { + "bbox": [ + 376, + 430, + 474, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 376, + 430, + 474, + 
437 + ], + "spans": [ + { + "bbox": [ + 376, + 430, + 474, + 437 + ], + "type": "text", + "content": "How many colors are there in this flag?" + } + ] + } + ], + "index": 54 + }, + { + "bbox": [ + 377, + 449, + 388, + 455 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 377, + 449, + 388, + 455 + ], + "spans": [ + { + "bbox": [ + 377, + 449, + 388, + 455 + ], + "type": "text", + "content": "A:3" + } + ] + } + ], + "index": 55 + }, + { + "bbox": [ + 406, + 449, + 416, + 455 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 406, + 449, + 416, + 455 + ], + "spans": [ + { + "bbox": [ + 406, + 449, + 416, + 455 + ], + "type": "text", + "content": "B:4" + } + ] + } + ], + "index": 56 + }, + { + "bbox": [ + 377, + 458, + 388, + 464 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 377, + 458, + 388, + 464 + ], + "spans": [ + { + "bbox": [ + 377, + 458, + 388, + 464 + ], + "type": "text", + "content": "C:5" + } + ] + } + ], + "index": 57 + }, + { + "bbox": [ + 406, + 459, + 417, + 464 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 406, + 459, + 417, + 464 + ], + "spans": [ + { + "bbox": [ + 406, + 459, + 417, + 464 + ], + "type": "text", + "content": "D:6" + } + ] + } + ], + "index": 58 + }, + { + "bbox": [ + 377, + 467, + 395, + 473 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 377, + 467, + 395, + 473 + ], + "spans": [ + { + "bbox": [ + 377, + 467, + 395, + 473 + ], + "type": "text", + "content": "Ans: D" + } + ] + } + ], + "index": 59 + }, + { + "bbox": [ + 128, + 520, + 197, + 531 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 128, + 520, + 197, + 531 + ], + "spans": [ + { + "bbox": [ + 128, + 520, + 197, + 531 + ], + "type": "text", + "content": "Object Counting" + } + ] + } + ], + "index": 61 + }, + { + "type": "image", + "bbox": [ + 126, + 544, + 192, + 582 + ], + "blocks": [ + { + "bbox": [ + 126, + 544, + 192, + 582 + ], + "lines": [ + { + 
"bbox": [ + 126, + 544, + 192, + 582 + ], + "spans": [ + { + "bbox": [ + 126, + 544, + 192, + 582 + ], + "type": "image", + "image_path": "2d13679fef5fdb3ddb30ad79d2df8fc4de3919117e6c08e7f0e7a582bebed2b9.jpg" + } + ] + } + ], + "index": 62, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 212, + 599, + 397, + 611 + ], + "lines": [ + { + "bbox": [ + 212, + 599, + 397, + 611 + ], + "spans": [ + { + "bbox": [ + 212, + 599, + 397, + 611 + ], + "type": "text", + "content": "Figure 12: Cases for Object Counting Task." + } + ] + } + ], + "index": 80, + "angle": 0, + "type": "image_caption" + } + ], + "index": 62 + }, + { + "bbox": [ + 195, + 536, + 298, + 543 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 536, + 298, + 543 + ], + "spans": [ + { + "bbox": [ + 195, + 536, + 298, + 543 + ], + "type": "text", + "content": "How many striped animals can be seen in" + } + ] + } + ], + "index": 63 + }, + { + "bbox": [ + 195, + 544, + 225, + 552 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 544, + 225, + 552 + ], + "spans": [ + { + "bbox": [ + 195, + 544, + 225, + 552 + ], + "type": "text", + "content": "this image?" 
+ } + ] + } + ], + "index": 64 + }, + { + "bbox": [ + 195, + 553, + 209, + 559 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 553, + 209, + 559 + ], + "spans": [ + { + "bbox": [ + 195, + 553, + 209, + 559 + ], + "type": "text", + "content": "A:12" + } + ] + } + ], + "index": 65 + }, + { + "bbox": [ + 236, + 554, + 249, + 559 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 236, + 554, + 249, + 559 + ], + "spans": [ + { + "bbox": [ + 236, + 554, + 249, + 559 + ], + "type": "text", + "content": "B:11" + } + ] + } + ], + "index": 66 + }, + { + "bbox": [ + 195, + 562, + 209, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 562, + 209, + 567 + ], + "spans": [ + { + "bbox": [ + 195, + 562, + 209, + 567 + ], + "type": "text", + "content": "C:13" + } + ] + } + ], + "index": 67 + }, + { + "bbox": [ + 236, + 562, + 247, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 236, + 562, + 247, + 567 + ], + "spans": [ + { + "bbox": [ + 236, + 562, + 247, + 567 + ], + "type": "text", + "content": "D:0" + } + ] + } + ], + "index": 68 + }, + { + "bbox": [ + 195, + 572, + 209, + 578 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 572, + 209, + 578 + ], + "spans": [ + { + "bbox": [ + 195, + 572, + 209, + 578 + ], + "type": "text", + "content": "F:10" + } + ] + } + ], + "index": 69 + }, + { + "bbox": [ + 195, + 580, + 213, + 587 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 580, + 213, + 587 + ], + "spans": [ + { + "bbox": [ + 195, + 580, + 213, + 587 + ], + "type": "text", + "content": "Ans:C" + } + ] + } + ], + "index": 70 + }, + { + "type": "image", + "bbox": [ + 299, + 544, + 373, + 586 + ], + "blocks": [ + { + "bbox": [ + 299, + 544, + 373, + 586 + ], + "lines": [ + { + "bbox": [ + 299, + 544, + 373, + 586 + ], + "spans": [ + { + "bbox": [ + 299, + 544, + 373, + 586 + ], + "type": "image", + "image_path": 
"c83c3ebd129460f15657e81fcfd27c4a3fe2ebdc33784f46981734411391b84c.jpg" + } + ] + } + ], + "index": 71, + "angle": 0, + "type": "image_body" + } + ], + "index": 71 + }, + { + "bbox": [ + 376, + 536, + 479, + 543 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 376, + 536, + 479, + 543 + ], + "spans": [ + { + "bbox": [ + 376, + 536, + 479, + 543 + ], + "type": "text", + "content": "How many green bananas can be seen in" + } + ] + } + ], + "index": 72 + }, + { + "bbox": [ + 377, + 544, + 406, + 552 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 377, + 544, + 406, + 552 + ], + "spans": [ + { + "bbox": [ + 377, + 544, + 406, + 552 + ], + "type": "text", + "content": "this image?" + } + ] + } + ], + "index": 73 + }, + { + "bbox": [ + 377, + 554, + 389, + 559 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 377, + 554, + 389, + 559 + ], + "spans": [ + { + "bbox": [ + 377, + 554, + 389, + 559 + ], + "type": "text", + "content": "A:6" + } + ] + } + ], + "index": 74 + }, + { + "bbox": [ + 413, + 554, + 424, + 559 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 413, + 554, + 424, + 559 + ], + "spans": [ + { + "bbox": [ + 413, + 554, + 424, + 559 + ], + "type": "text", + "content": "B:7" + } + ] + } + ], + "index": 75 + }, + { + "bbox": [ + 377, + 562, + 388, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 377, + 562, + 388, + 567 + ], + "spans": [ + { + "bbox": [ + 377, + 562, + 388, + 567 + ], + "type": "text", + "content": "C. 5" + } + ] + } + ], + "index": 76 + }, + { + "bbox": [ + 413, + 563, + 424, + 568 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 413, + 563, + 424, + 568 + ], + "spans": [ + { + "bbox": [ + 413, + 563, + 424, + 568 + ], + "type": "text", + "content": "D. 
4" + } + ] + } + ], + "index": 77 + }, + { + "bbox": [ + 377, + 572, + 388, + 578 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 377, + 572, + 388, + 578 + ], + "spans": [ + { + "bbox": [ + 377, + 572, + 388, + 578 + ], + "type": "text", + "content": "E. 0" + } + ] + } + ], + "index": 78 + }, + { + "bbox": [ + 377, + 580, + 395, + 587 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 377, + 580, + 395, + 587 + ], + "spans": [ + { + "bbox": [ + 377, + 580, + 395, + 587 + ], + "type": "text", + "content": "Ans: A" + } + ] + } + ], + "index": 79 + }, + { + "bbox": [ + 129, + 627, + 184, + 636 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 627, + 184, + 636 + ], + "spans": [ + { + "bbox": [ + 129, + 627, + 184, + 636 + ], + "type": "text", + "content": "Color Illusion" + } + ] + } + ], + "index": 81 + }, + { + "type": "image", + "bbox": [ + 127, + 640, + 190, + 691 + ], + "blocks": [ + { + "bbox": [ + 127, + 640, + 190, + 691 + ], + "lines": [ + { + "bbox": [ + 127, + 640, + 190, + 691 + ], + "spans": [ + { + "bbox": [ + 127, + 640, + 190, + 691 + ], + "type": "image", + "image_path": "b805d5f51d8b61281e89468619a144287ec35d0946a6ec0ba5aa1b7bf5fcc398.jpg" + } + ] + } + ], + "index": 82, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 218, + 706, + 391, + 717 + ], + "lines": [ + { + "bbox": [ + 218, + 706, + 391, + 717 + ], + "spans": [ + { + "bbox": [ + 218, + 706, + 391, + 717 + ], + "type": "text", + "content": "Figure 13: Cases for Color Illusion Task." + } + ] + } + ], + "index": 100, + "angle": 0, + "type": "image_caption" + } + ], + "index": 82 + }, + { + "bbox": [ + 193, + 637, + 285, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 193, + 637, + 285, + 651 + ], + "spans": [ + { + "bbox": [ + 193, + 637, + 285, + 651 + ], + "type": "text", + "content": "Do the blocks labeled a and b have the same color/shade?" 
+ } + ] + } + ], + "index": 83 + }, + { + "bbox": [ + 194, + 651, + 233, + 656 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 651, + 233, + 656 + ], + "spans": [ + { + "bbox": [ + 194, + 651, + 233, + 656 + ], + "type": "text", + "content": "A: No, a is darker." + } + ] + } + ], + "index": 84 + }, + { + "bbox": [ + 194, + 657, + 272, + 663 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 657, + 272, + 663 + ], + "spans": [ + { + "bbox": [ + 194, + 657, + 272, + 663 + ], + "type": "text", + "content": "B: Hard to tell without more context" + } + ] + } + ], + "index": 85 + }, + { + "bbox": [ + 194, + 664, + 288, + 670 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 664, + 288, + 670 + ], + "spans": [ + { + "bbox": [ + 194, + 664, + 288, + 670 + ], + "type": "text", + "content": "C: Yes, one appears darker due to how our" + } + ] + } + ], + "index": 86 + }, + { + "bbox": [ + 194, + 671, + 246, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 671, + 246, + 677 + ], + "spans": [ + { + "bbox": [ + 194, + 671, + 246, + 677 + ], + "type": "text", + "content": "eyes perceive shadows" + } + ] + } + ], + "index": 87 + }, + { + "bbox": [ + 194, + 678, + 233, + 685 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 678, + 233, + 685 + ], + "spans": [ + { + "bbox": [ + 194, + 678, + 233, + 685 + ], + "type": "text", + "content": "D: No, b is darker" + } + ] + } + ], + "index": 88 + }, + { + "bbox": [ + 194, + 685, + 211, + 691 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 685, + 211, + 691 + ], + "spans": [ + { + "bbox": [ + 194, + 685, + 211, + 691 + ], + "type": "text", + "content": "Ans: D" + } + ] + } + ], + "index": 89 + }, + { + "type": "image", + "bbox": [ + 295, + 647, + 366, + 687 + ], + "blocks": [ + { + "bbox": [ + 295, + 647, + 366, + 687 + ], + "lines": [ + { + "bbox": [ + 295, + 647, + 366, + 687 + ], 
+ "spans": [ + { + "bbox": [ + 295, + 647, + 366, + 687 + ], + "type": "image", + "image_path": "096c76644a54fa854232af032350f879fae6e8bc766e21703ba952a24b01f5d3.jpg" + } + ] + } + ], + "index": 90, + "angle": 0, + "type": "image_body" + } + ], + "index": 90 + }, + { + "bbox": [ + 369, + 634, + 440, + 640 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 369, + 634, + 440, + 640 + ], + "spans": [ + { + "bbox": [ + 369, + 634, + 440, + 640 + ], + "type": "text", + "content": "What colors are the two pills?" + } + ] + } + ], + "index": 91 + }, + { + "bbox": [ + 369, + 641, + 478, + 647 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 369, + 641, + 478, + 647 + ], + "spans": [ + { + "bbox": [ + 369, + 641, + 478, + 647 + ], + "type": "text", + "content": "A:Cannot tell from this image, the colors seem to" + } + ] + } + ], + "index": 92 + }, + { + "bbox": [ + 369, + 647, + 397, + 653 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 369, + 647, + 397, + 653 + ], + "spans": [ + { + "bbox": [ + 369, + 647, + 397, + 653 + ], + "type": "text", + "content": "be shifting?!" 
+ } + ] + } + ], + "index": 93 + }, + { + "bbox": [ + 369, + 654, + 461, + 661 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 369, + 654, + 461, + 661 + ], + "spans": [ + { + "bbox": [ + 369, + 654, + 461, + 661 + ], + "type": "text", + "content": "B: Both are the exact same shade of gray" + } + ] + } + ], + "index": 94 + }, + { + "bbox": [ + 369, + 662, + 476, + 667 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 369, + 662, + 476, + 667 + ], + "spans": [ + { + "bbox": [ + 369, + 662, + 476, + 667 + ], + "type": "text", + "content": "C: The left one is bluish-gray and the right one is" + } + ] + } + ], + "index": 95 + }, + { + "bbox": [ + 369, + 668, + 397, + 674 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 369, + 668, + 397, + 674 + ], + "spans": [ + { + "bbox": [ + 369, + 668, + 397, + 674 + ], + "type": "text", + "content": "reddish-grey" + } + ] + } + ], + "index": 96 + }, + { + "bbox": [ + 369, + 675, + 479, + 681 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 369, + 675, + 479, + 681 + ], + "spans": [ + { + "bbox": [ + 369, + 675, + 479, + 681 + ], + "type": "text", + "content": "D: The left one is reddish-gray and the right one is" + } + ] + } + ], + "index": 97 + }, + { + "bbox": [ + 369, + 681, + 394, + 687 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 369, + 681, + 394, + 687 + ], + "spans": [ + { + "bbox": [ + 369, + 681, + 394, + 687 + ], + "type": "text", + "content": "bluish-grey" + } + ] + } + ], + "index": 98 + }, + { + "bbox": [ + 369, + 689, + 386, + 694 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 369, + 689, + 386, + 694 + ], + "spans": [ + { + "bbox": [ + 369, + 689, + 386, + 694 + ], + "type": "text", + "content": "Ans:B" + } + ] + } + ], + "index": 99 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 
310, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 101 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 126, + 101, + 197, + 148 + ], + "blocks": [ + { + "bbox": [ + 129, + 83, + 192, + 95 + ], + "lines": [ + { + "bbox": [ + 129, + 83, + 192, + 95 + ], + "spans": [ + { + "bbox": [ + 129, + 83, + 192, + 95 + ], + "type": "text", + "content": "Color Mimicry" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 126, + 101, + 197, + 148 + ], + "lines": [ + { + "bbox": [ + 126, + 101, + 197, + 148 + ], + "spans": [ + { + "bbox": [ + 126, + 101, + 197, + 148 + ], + "type": "image", + "image_path": "5ac95b3d3706e6a80af07ac90289c6a7a098d2396288ef7980e9ae5f62e68f3f.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 197, + 110, + 290, + 118 + ], + "lines": [ + { + "bbox": [ + 197, + 110, + 290, + 118 + ], + "spans": [ + { + "bbox": [ + 197, + 110, + 290, + 118 + ], + "type": "text", + "content": "How many seahorses in this image?" 
+ } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 197, + 118, + 247, + 124 + ], + "lines": [ + { + "bbox": [ + 197, + 118, + 247, + 124 + ], + "spans": [ + { + "bbox": [ + 197, + 118, + 247, + 124 + ], + "type": "text", + "content": "A:0 B:1" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 197, + 124, + 247, + 131 + ], + "lines": [ + { + "bbox": [ + 197, + 124, + 247, + 131 + ], + "spans": [ + { + "bbox": [ + 197, + 124, + 247, + 131 + ], + "type": "text", + "content": "C:3 D:5" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 1 + }, + { + "bbox": [ + 197, + 133, + 216, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 197, + 133, + 216, + 138 + ], + "spans": [ + { + "bbox": [ + 197, + 133, + 216, + 138 + ], + "type": "text", + "content": "Ans: B" + } + ] + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 306, + 91, + 382, + 148 + ], + "blocks": [ + { + "bbox": [ + 306, + 91, + 382, + 148 + ], + "lines": [ + { + "bbox": [ + 306, + 91, + 382, + 148 + ], + "spans": [ + { + "bbox": [ + 306, + 91, + 382, + 148 + ], + "type": "image", + "image_path": "06fe3b64b39e972bec5dcc62c1e8be491194b2477b95a126454c6e4e1834a0d6.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 383, + 110, + 465, + 118 + ], + "lines": [ + { + "bbox": [ + 383, + 110, + 465, + 118 + ], + "spans": [ + { + "bbox": [ + 383, + 110, + 465, + 118 + ], + "type": "text", + "content": "How many leaves in this image?" 
+ } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 383, + 118, + 433, + 124 + ], + "lines": [ + { + "bbox": [ + 383, + 118, + 433, + 124 + ], + "spans": [ + { + "bbox": [ + 383, + 118, + 433, + 124 + ], + "type": "text", + "content": "A:1 B:2" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 383, + 124, + 433, + 131 + ], + "lines": [ + { + "bbox": [ + 383, + 124, + 433, + 131 + ], + "spans": [ + { + "bbox": [ + 383, + 124, + 433, + 131 + ], + "type": "text", + "content": "C:3 D:0" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 383, + 133, + 402, + 138 + ], + "lines": [ + { + "bbox": [ + 383, + 133, + 402, + 138 + ], + "spans": [ + { + "bbox": [ + 383, + 133, + 402, + 138 + ], + "type": "text", + "content": "Ans: D" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 216, + 154, + 394, + 166 + ], + "lines": [ + { + "bbox": [ + 216, + 154, + 394, + 166 + ], + "spans": [ + { + "bbox": [ + 216, + 154, + 394, + 166 + ], + "type": "text", + "content": "Figure 14: Cases for Color Mimicry Task." 
+ } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 131, + 207, + 190, + 266 + ], + "blocks": [ + { + "bbox": [ + 129, + 193, + 196, + 204 + ], + "lines": [ + { + "bbox": [ + 129, + 193, + 196, + 204 + ], + "spans": [ + { + "bbox": [ + 129, + 193, + 196, + 204 + ], + "type": "text", + "content": "Color Blindness" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 131, + 207, + 190, + 266 + ], + "lines": [ + { + "bbox": [ + 131, + 207, + 190, + 266 + ], + "spans": [ + { + "bbox": [ + 131, + 207, + 190, + 266 + ], + "type": "image", + "image_path": "0948b0e292c93b073f48dcbe6e1fab4efa29d2ace58bad4f6c81e00e85b21646.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 197, + 208, + 288, + 215 + ], + "lines": [ + { + "bbox": [ + 197, + 208, + 288, + 215 + ], + "spans": [ + { + "bbox": [ + 197, + 208, + 288, + 215 + ], + "type": "text", + "content": "There are two strings in the image." + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 197, + 217, + 290, + 224 + ], + "lines": [ + { + "bbox": [ + 197, + 217, + 290, + 224 + ], + "spans": [ + { + "bbox": [ + 197, + 217, + 290, + 224 + ], + "type": "text", + "content": "What are the strings in the center of" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 197, + 227, + 228, + 234 + ], + "lines": [ + { + "bbox": [ + 197, + 227, + 228, + 234 + ], + "spans": [ + { + "bbox": [ + 197, + 227, + 228, + 234 + ], + "type": "text", + "content": "this image?" 
+ } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 15 + }, + { + "bbox": [ + 197, + 236, + 253, + 253 + ], + "type": "list", + "angle": 0, + "index": 22, + "blocks": [ + { + "bbox": [ + 197, + 236, + 253, + 243 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 197, + 236, + 253, + 243 + ], + "spans": [ + { + "bbox": [ + 197, + 236, + 253, + 243 + ], + "type": "text", + "content": "A:kt B:la" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 197, + 247, + 252, + 253 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 197, + 247, + 252, + 253 + ], + "spans": [ + { + "bbox": [ + 197, + 247, + 252, + 253 + ], + "type": "text", + "content": "C:lo D:It" + } + ] + } + ], + "index": 21 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 197, + 257, + 217, + 263 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 197, + 257, + 217, + 263 + ], + "spans": [ + { + "bbox": [ + 197, + 257, + 217, + 263 + ], + "type": "text", + "content": "Ans: A" + } + ] + } + ], + "index": 23 + }, + { + "type": "image", + "bbox": [ + 315, + 206, + 375, + 266 + ], + "blocks": [ + { + "bbox": [ + 315, + 206, + 375, + 266 + ], + "lines": [ + { + "bbox": [ + 315, + 206, + 375, + 266 + ], + "spans": [ + { + "bbox": [ + 315, + 206, + 375, + 266 + ], + "type": "image", + "image_path": "4a4c31090dca597ec33169be0184de6511587b25241fd11621cd91ac03784810.jpg" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 386, + 208, + 477, + 215 + ], + "lines": [ + { + "bbox": [ + 386, + 208, + 477, + 215 + ], + "spans": [ + { + "bbox": [ + 386, + 208, + 477, + 215 + ], + "type": "text", + "content": "What is the number in the center of" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 386, + 217, + 417, + 224 + ], + "lines": [ + { + "bbox": [ + 386, + 217, + 417, + 224 + ], + "spans": [ + { + "bbox": [ + 386, + 217, + 417, + 224 + ], + 
"type": "text", + "content": "this image?" + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 214, + 274, + 395, + 285 + ], + "lines": [ + { + "bbox": [ + 214, + 274, + 395, + 285 + ], + "spans": [ + { + "bbox": [ + 214, + 274, + 395, + 285 + ], + "type": "text", + "content": "Figure 15: Cases for Color Blindness Task." + } + ] + } + ], + "index": 32, + "angle": 0, + "type": "image_caption" + } + ], + "index": 24 + }, + { + "bbox": [ + 386, + 237, + 444, + 263 + ], + "type": "list", + "angle": 0, + "index": 31, + "blocks": [ + { + "bbox": [ + 386, + 237, + 441, + 243 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 386, + 237, + 441, + 243 + ], + "spans": [ + { + "bbox": [ + 386, + 237, + 441, + 243 + ], + "type": "text", + "content": "A:6 B:9" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 386, + 246, + 444, + 253 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 386, + 246, + 444, + 253 + ], + "spans": [ + { + "bbox": [ + 386, + 246, + 444, + 253 + ], + "type": "text", + "content": "C:17 D:18" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 386, + 257, + 405, + 263 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 386, + 257, + 405, + 263 + ], + "spans": [ + { + "bbox": [ + 386, + 257, + 405, + 263 + ], + "type": "text", + "content": "Ans: D" + } + ] + } + ], + "index": 30 + } + ], + "sub_type": "text" + }, + { + "type": "image", + "bbox": [ + 193, + 323, + 242, + 362 + ], + "blocks": [ + { + "bbox": [ + 192, + 309, + 239, + 319 + ], + "lines": [ + { + "bbox": [ + 192, + 309, + 239, + 319 + ], + "spans": [ + { + "bbox": [ + 192, + 309, + 239, + 319 + ], + "type": "text", + "content": "Original Image" + } + ] + } + ], + "index": 33, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 193, + 323, + 242, + 362 + ], + "lines": [ + { + "bbox": [ + 193, + 323, + 242, + 362 + ], + "spans": [ + { + "bbox": [ + 193, + 323, + 242, + 362 + ], + "type": 
"image", + "image_path": "ca217e4f60851500ab5909e3956d6b23753e3df26cf75fbec365f442e2d1a763.jpg" + } + ] + } + ], + "index": 34, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 271, + 319, + 369, + 327 + ], + "lines": [ + { + "bbox": [ + 271, + 319, + 369, + 327 + ], + "spans": [ + { + "bbox": [ + 271, + 319, + 369, + 327 + ], + "type": "text", + "content": "Q: How many cars are in the image?" + } + ] + } + ], + "index": 45, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 34 + }, + { + "type": "image", + "bbox": [ + 190, + 380, + 241, + 418 + ], + "blocks": [ + { + "bbox": [ + 199, + 370, + 234, + 378 + ], + "lines": [ + { + "bbox": [ + 199, + 370, + 234, + 378 + ], + "spans": [ + { + "bbox": [ + 199, + 370, + 234, + 378 + ], + "type": "text", + "content": "Entire Image" + } + ] + } + ], + "index": 35, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 190, + 380, + 241, + 418 + ], + "lines": [ + { + "bbox": [ + 190, + 380, + 241, + 418 + ], + "spans": [ + { + "bbox": [ + 190, + 380, + 241, + 418 + ], + "type": "image", + "image_path": "6672532a9af0fc12a496098717c189fd3b85762bf6de5bc2bb73d61a49b660e6.jpg" + } + ] + } + ], + "index": 36, + "angle": 0, + "type": "image_body" + } + ], + "index": 36 + }, + { + "type": "image", + "bbox": [ + 191, + 420, + 240, + 459 + ], + "blocks": [ + { + "bbox": [ + 191, + 420, + 240, + 459 + ], + "lines": [ + { + "bbox": [ + 191, + 420, + 240, + 459 + ], + "spans": [ + { + "bbox": [ + 191, + 420, + 240, + 459 + ], + "type": "image", + "image_path": "f4c76d4b9d7ef0158cfd40e735ea81e99ebd5429c71e7497bd686b591ce393cb.jpg" + } + ] + } + ], + "index": 37, + "angle": 0, + "type": "image_body" + } + ], + "index": 37 + }, + { + "type": "image", + "bbox": [ + 191, + 460, + 241, + 500 + ], + "blocks": [ + { + "bbox": [ + 191, + 460, + 241, + 500 + ], + "lines": [ + { + "bbox": [ + 191, + 460, + 241, + 500 + ], + "spans": [ + { + "bbox": [ + 191, + 460, + 241, + 500 + ], + "type": "image", + "image_path": 
"4da5d0436000119e3d94b5df4193a1ff89d878181f005bd58c77c387237eb2a9.jpg" + } + ] + } + ], + "index": 38, + "angle": 0, + "type": "image_body" + } + ], + "index": 38 + }, + { + "type": "image", + "bbox": [ + 190, + 517, + 246, + 555 + ], + "blocks": [ + { + "bbox": [ + 195, + 507, + 241, + 516 + ], + "lines": [ + { + "bbox": [ + 195, + 507, + 241, + 516 + ], + "spans": [ + { + "bbox": [ + 195, + 507, + 241, + 516 + ], + "type": "text", + "content": "Original Image" + } + ] + } + ], + "index": 39, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 190, + 517, + 246, + 555 + ], + "lines": [ + { + "bbox": [ + 190, + 517, + 246, + 555 + ], + "spans": [ + { + "bbox": [ + 190, + 517, + 246, + 555 + ], + "type": "image", + "image_path": "44823311c71f2dc3fb81ca2b03664810f631f3ea04ce2b1b322542a480d8034a.jpg" + } + ] + } + ], + "index": 40, + "angle": 0, + "type": "image_body" + } + ], + "index": 40 + }, + { + "type": "image", + "bbox": [ + 187, + 575, + 245, + 613 + ], + "blocks": [ + { + "bbox": [ + 199, + 566, + 234, + 573 + ], + "lines": [ + { + "bbox": [ + 199, + 566, + 234, + 573 + ], + "spans": [ + { + "bbox": [ + 199, + 566, + 234, + 573 + ], + "type": "text", + "content": "Entire Image" + } + ] + } + ], + "index": 41, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 187, + 575, + 245, + 613 + ], + "lines": [ + { + "bbox": [ + 187, + 575, + 245, + 613 + ], + "spans": [ + { + "bbox": [ + 187, + 575, + 245, + 613 + ], + "type": "image", + "image_path": "6c76abd6201d022bf4566da9d604a45a44987b51b8d18dfc5966144dbfbc2686.jpg" + } + ] + } + ], + "index": 42, + "angle": 0, + "type": "image_body" + } + ], + "index": 42 + }, + { + "type": "image", + "bbox": [ + 187, + 616, + 245, + 654 + ], + "blocks": [ + { + "bbox": [ + 187, + 616, + 245, + 654 + ], + "lines": [ + { + "bbox": [ + 187, + 616, + 245, + 654 + ], + "spans": [ + { + "bbox": [ + 187, + 616, + 245, + 654 + ], + "type": "image", + "image_path": 
"ffe4ed10afdb9bd97b47bb446b3526534aa50d91ef4e52855cb85f7758e83f19.jpg" + } + ] + } + ], + "index": 43, + "angle": 0, + "type": "image_body" + } + ], + "index": 43 + }, + { + "type": "image", + "bbox": [ + 188, + 656, + 246, + 694 + ], + "blocks": [ + { + "bbox": [ + 188, + 656, + 246, + 694 + ], + "lines": [ + { + "bbox": [ + 188, + 656, + 246, + 694 + ], + "spans": [ + { + "bbox": [ + 188, + 656, + 246, + 694 + ], + "type": "image", + "image_path": "998092a0d679346874dd97bcc680c4d3eee29ad064902230aae970fd80107fd8.jpg" + } + ] + } + ], + "index": 44, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 210, + 700, + 399, + 712 + ], + "lines": [ + { + "bbox": [ + 210, + 700, + 399, + 712 + ], + "spans": [ + { + "bbox": [ + 210, + 700, + 399, + 712 + ], + "type": "text", + "content": "Figure 16: Cases for Color Robustness Task." + } + ] + } + ], + "index": 69, + "angle": 0, + "type": "image_caption" + } + ], + "index": 44 + }, + { + "type": "image", + "bbox": [ + 276, + 380, + 326, + 418 + ], + "blocks": [ + { + "bbox": [ + 302, + 334, + 384, + 341 + ], + "lines": [ + { + "bbox": [ + 302, + 334, + 384, + 341 + ], + "spans": [ + { + "bbox": [ + 302, + 334, + 384, + 341 + ], + "type": "text", + "content": "A:8 B:7 C:6 D:5 E:4" + } + ] + } + ], + "index": 46, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 271, + 348, + 289, + 355 + ], + "lines": [ + { + "bbox": [ + 271, + 348, + 289, + 355 + ], + "spans": [ + { + "bbox": [ + 271, + 348, + 289, + 355 + ], + "type": "text", + "content": "GT: E" + } + ] + } + ], + "index": 47, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 276, + 361, + 328, + 368 + ], + "lines": [ + { + "bbox": [ + 276, + 361, + 328, + 368 + ], + "spans": [ + { + "bbox": [ + 276, + 361, + 328, + 368 + ], + "type": "text", + "content": "Recoloring Strategy" + } + ] + } + ], + "index": 48, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 276, + 371, + 327, + 379 + ], + "lines": [ + { + "bbox": [ + 276, + 
371, + 327, + 379 + ], + "spans": [ + { + "bbox": [ + 276, + 371, + 327, + 379 + ], + "type": "text", + "content": "Targeted Segment" + } + ] + } + ], + "index": 49, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 276, + 380, + 326, + 418 + ], + "lines": [ + { + "bbox": [ + 276, + 380, + 326, + 418 + ], + "spans": [ + { + "bbox": [ + 276, + 380, + 326, + 418 + ], + "type": "image", + "image_path": "53f841542a5892cc7195a412eac039828510960339bd49bdfb8d91a9da68ed9a.jpg" + } + ] + } + ], + "index": 50, + "angle": 0, + "type": "image_body" + } + ], + "index": 50 + }, + { + "type": "image", + "bbox": [ + 276, + 420, + 326, + 459 + ], + "blocks": [ + { + "bbox": [ + 276, + 420, + 326, + 459 + ], + "lines": [ + { + "bbox": [ + 276, + 420, + 326, + 459 + ], + "spans": [ + { + "bbox": [ + 276, + 420, + 326, + 459 + ], + "type": "image", + "image_path": "88e474c633dff0071ce09a707335e5f72fddbae6f77191e56126aea2aadce529.jpg" + } + ] + } + ], + "index": 51, + "angle": 0, + "type": "image_body" + } + ], + "index": 51 + }, + { + "type": "image", + "bbox": [ + 276, + 460, + 326, + 499 + ], + "blocks": [ + { + "bbox": [ + 276, + 460, + 326, + 499 + ], + "lines": [ + { + "bbox": [ + 276, + 460, + 326, + 499 + ], + "spans": [ + { + "bbox": [ + 276, + 460, + 326, + 499 + ], + "type": "image", + "image_path": "d6d6ecd0cc66fed78dc928b0f30ad107b93312082826e23b451df48771aa2850.jpg" + } + ] + } + ], + "index": 52, + "angle": 0, + "type": "image_body" + } + ], + "index": 52 + }, + { + "type": "image", + "bbox": [ + 273, + 575, + 331, + 613 + ], + "blocks": [ + { + "bbox": [ + 277, + 544, + 295, + 551 + ], + "lines": [ + { + "bbox": [ + 277, + 544, + 295, + 551 + ], + "spans": [ + { + "bbox": [ + 277, + 544, + 295, + 551 + ], + "type": "text", + "content": "GT: C" + } + ] + } + ], + "index": 55, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 277, + 557, + 330, + 564 + ], + "lines": [ + { + "bbox": [ + 277, + 557, + 330, + 564 + ], + "spans": [ + { + "bbox": [ + 
277, + 557, + 330, + 564 + ], + "type": "text", + "content": "Recoloring Strategy" + } + ] + } + ], + "index": 56, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 277, + 565, + 328, + 573 + ], + "lines": [ + { + "bbox": [ + 277, + 565, + 328, + 573 + ], + "spans": [ + { + "bbox": [ + 277, + 565, + 328, + 573 + ], + "type": "text", + "content": "Targeted Segment" + } + ] + } + ], + "index": 57, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 273, + 575, + 331, + 613 + ], + "lines": [ + { + "bbox": [ + 273, + 575, + 331, + 613 + ], + "spans": [ + { + "bbox": [ + 273, + 575, + 331, + 613 + ], + "type": "image", + "image_path": "d33e9255a172a81dc60bd43741f083afdcf20d803b50e790a9fca9bb7545019e.jpg" + } + ] + } + ], + "index": 58, + "angle": 0, + "type": "image_body" + } + ], + "index": 58 + }, + { + "type": "image", + "bbox": [ + 273, + 615, + 331, + 654 + ], + "blocks": [ + { + "bbox": [ + 273, + 615, + 331, + 654 + ], + "lines": [ + { + "bbox": [ + 273, + 615, + 331, + 654 + ], + "spans": [ + { + "bbox": [ + 273, + 615, + 331, + 654 + ], + "type": "image", + "image_path": "0fd38bc8ef51f4bd35dc96cffacc79862640be794b363cf5fca27b37b8d42e63.jpg" + } + ] + } + ], + "index": 59, + "angle": 0, + "type": "image_body" + } + ], + "index": 59 + }, + { + "type": "image", + "bbox": [ + 273, + 655, + 331, + 694 + ], + "blocks": [ + { + "bbox": [ + 273, + 655, + 331, + 694 + ], + "lines": [ + { + "bbox": [ + 273, + 655, + 331, + 694 + ], + "spans": [ + { + "bbox": [ + 273, + 655, + 331, + 694 + ], + "type": "image", + "image_path": "a1915f5f8b1f4296129bd8d4bbb16cc8865b2463056ce4174fd6187db21bb86d.jpg" + } + ] + } + ], + "index": 60, + "angle": 0, + "type": "image_body" + } + ], + "index": 60 + }, + { + "type": "image", + "bbox": [ + 367, + 380, + 417, + 418 + ], + "blocks": [ + { + "bbox": [ + 369, + 371, + 415, + 378 + ], + "lines": [ + { + "bbox": [ + 369, + 371, + 415, + 378 + ], + "spans": [ + { + "bbox": [ + 369, + 371, + 415, + 378 + ], + "type": 
"text", + "content": "Largest Segment" + } + ] + } + ], + "index": 61, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 367, + 380, + 417, + 418 + ], + "lines": [ + { + "bbox": [ + 367, + 380, + 417, + 418 + ], + "spans": [ + { + "bbox": [ + 367, + 380, + 417, + 418 + ], + "type": "image", + "image_path": "cb74dfc396d5b074ade375605653a193199cb27ee661f5620c34176342e8ddc8.jpg" + } + ] + } + ], + "index": 62, + "angle": 0, + "type": "image_body" + } + ], + "index": 62 + }, + { + "type": "image", + "bbox": [ + 367, + 420, + 417, + 459 + ], + "blocks": [ + { + "bbox": [ + 367, + 420, + 417, + 459 + ], + "lines": [ + { + "bbox": [ + 367, + 420, + 417, + 459 + ], + "spans": [ + { + "bbox": [ + 367, + 420, + 417, + 459 + ], + "type": "image", + "image_path": "81eb71371623bfb12b3890fc38ad3bb7fde78ee0837dd277574737492027befd.jpg" + } + ] + } + ], + "index": 63, + "angle": 0, + "type": "image_body" + } + ], + "index": 63 + }, + { + "type": "image", + "bbox": [ + 367, + 460, + 417, + 500 + ], + "blocks": [ + { + "bbox": [ + 277, + 516, + 384, + 524 + ], + "lines": [ + { + "bbox": [ + 277, + 516, + 384, + 524 + ], + "spans": [ + { + "bbox": [ + 277, + 516, + 384, + 524 + ], + "type": "text", + "content": "Q: How many curtains are in the image?" 
+ } + ] + } + ], + "index": 53, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 304, + 530, + 386, + 537 + ], + "lines": [ + { + "bbox": [ + 304, + 530, + 386, + 537 + ], + "spans": [ + { + "bbox": [ + 304, + 530, + 386, + 537 + ], + "type": "text", + "content": "A:3 B:2 C:1 D:4 E:0" + } + ] + } + ], + "index": 54, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 367, + 460, + 417, + 500 + ], + "lines": [ + { + "bbox": [ + 367, + 460, + 417, + 500 + ], + "spans": [ + { + "bbox": [ + 367, + 460, + 417, + 500 + ], + "type": "image", + "image_path": "15026324cb3fa0e19610cc3840fb27b82c33d19f3d328ca0788bac9a4b9fb335.jpg" + } + ] + } + ], + "index": 64, + "angle": 0, + "type": "image_body" + } + ], + "index": 64 + }, + { + "type": "image", + "bbox": [ + 362, + 574, + 421, + 613 + ], + "blocks": [ + { + "bbox": [ + 364, + 566, + 413, + 573 + ], + "lines": [ + { + "bbox": [ + 364, + 566, + 413, + 573 + ], + "spans": [ + { + "bbox": [ + 364, + 566, + 413, + 573 + ], + "type": "text", + "content": "Largest Segment" + } + ] + } + ], + "index": 65, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 362, + 574, + 421, + 613 + ], + "lines": [ + { + "bbox": [ + 362, + 574, + 421, + 613 + ], + "spans": [ + { + "bbox": [ + 362, + 574, + 421, + 613 + ], + "type": "image", + "image_path": "5749a40d161e1b7bb688c3d83a6e0e261337db5f3519c1e8f08faed6ef13e27e.jpg" + } + ] + } + ], + "index": 66, + "angle": 0, + "type": "image_body" + } + ], + "index": 66 + }, + { + "type": "image", + "bbox": [ + 362, + 616, + 421, + 654 + ], + "blocks": [ + { + "bbox": [ + 362, + 616, + 421, + 654 + ], + "lines": [ + { + "bbox": [ + 362, + 616, + 421, + 654 + ], + "spans": [ + { + "bbox": [ + 362, + 616, + 421, + 654 + ], + "type": "image", + "image_path": "4a693bcdaf294d154fb77c045afebe8a5b9cbcac48c1bee722828b397c15364b.jpg" + } + ] + } + ], + "index": 67, + "angle": 0, + "type": "image_body" + } + ], + "index": 67 + }, + { + "type": "image", + "bbox": [ + 362, + 
655, + 421, + 694 + ], + "blocks": [ + { + "bbox": [ + 362, + 655, + 421, + 694 + ], + "lines": [ + { + "bbox": [ + 362, + 655, + 421, + 694 + ], + "spans": [ + { + "bbox": [ + 362, + 655, + 421, + 694 + ], + "type": "image", + "image_path": "877da56a11e72700c2b772cc735b366254a17d7c0d52424c8c5fae8436785f8c.jpg" + } + ] + } + ], + "index": 68, + "angle": 0, + "type": "image_body" + } + ], + "index": 68 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "18" + } + ] + } + ], + "index": 70 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 71, + 248, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 71, + 248, + 85 + ], + "spans": [ + { + "bbox": [ + 105, + 71, + 248, + 85 + ], + "type": "text", + "content": "E Implementation Details" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 95, + 506, + 227 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 95, + 506, + 227 + ], + "spans": [ + { + "bbox": [ + 104, + 95, + 506, + 227 + ], + "type": "text", + "content": "To further advance our understanding of VLMs' capabilities in color perception, reasoning, and robustness dimensions, we conduct an extensive evaluation of 32 vision-language models (VLMs) spanning a range of large language model (LLM) sizes and architectures. Our evaluation includes state-of-the-art models such as GPT-4o[35], Gemini-2-flash[7], LLaVA-OV[24], LLaVA-NEXT [31], Cambrian[42], InternVL2[5], InternVL2.5[5], Qwen2.5-VL[2], and Eagle[41]. GPT-4o and Gemini-2-flash are used with API calls. 
We further examine reasoning enhancement via chain-of-thought (CoT) prompting [44], applying it to GPT-4o and Gemini-2-Flash to evaluate how intermediate reasoning steps influence color understanding. Additionally, we include the most recent GPT-o3, the most powerful model with a long internal chain-of-thought process, on perception and reasoning tasks. This selection covers a diverse set of architectures, including both proprietary and open-source models, enabling a comprehensive assessment of their reasoning capabilities under different computational constraints." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 232, + 506, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 232, + 506, + 277 + ], + "spans": [ + { + "bbox": [ + 104, + 232, + 506, + 277 + ], + "type": "text", + "content": "To ensure a fair comparison, we standardize our experimental setup across models. Open-source models with fewer than 70B parameters are evaluated using a single NVIDIA A100 80GB GPU, while larger models require four NVIDIA A100 80GB GPUs to accommodate their increased memory and computational demands." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 290, + 231, + 304 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 290, + 231, + 304 + ], + "spans": [ + { + "bbox": [ + 105, + 290, + 231, + 304 + ], + "type": "text", + "content": "F Evaluation Prompts" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 110, + 323, + 501, + 357 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 323, + 501, + 357 + ], + "spans": [ + { + "bbox": [ + 110, + 323, + 501, + 357 + ], + "type": "text", + "content": "Instruction Prompt You'll be given an image, an instruction and some options. You have to select the correct one. Do not explain your reasoning. Answer with only the letter that corresponds to the correct option. Do not repeat the entire answer."
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 110, + 371, + 501, + 417 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 371, + 501, + 417 + ], + "spans": [ + { + "bbox": [ + 110, + 371, + 501, + 417 + ], + "type": "text", + "content": "CoT Instruction Prompt You'll be given an image, an instruction and some options. You have to select the correct one. Think step by step before answering. Then conclude with the letter that corresponds to the correct option. Make sure the option letter is in the parentheses like (X). Do not include ( or ) in the response except for the answer." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 434, + 228, + 447 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 434, + 228, + 447 + ], + "spans": [ + { + "bbox": [ + 105, + 434, + 228, + 447 + ], + "type": "text", + "content": "G Human Evaluation" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 457, + 506, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 457, + 506, + 536 + ], + "spans": [ + { + "bbox": [ + 104, + 457, + 506, + 536 + ], + "type": "text", + "content": "To assess the degree of alignment between VLMs and human color understanding, we selected a representative subset of COLORBENCH, focusing specifically on color perception and reasoning tasks. The Color Extraction task was excluded from human annotation, as humans are generally not sensitive to fine-grained differences in color codes. Three human participants were recruited, each tasked with completing 50 samples per category. All evaluators responded to the full set of multiple-choice and judgment-oriented questions. We then gathered all responses and conducted statistical analysis on the collected human evaluations." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 550, + 340, + 563 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 550, + 340, + 563 + ], + "spans": [ + { + "bbox": [ + 105, + 550, + 340, + 563 + ], + "type": "text", + "content": "H Reasoning Models with Thinking Process" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 574, + 506, + 641 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 574, + 506, + 641 + ], + "spans": [ + { + "bbox": [ + 104, + 574, + 506, + 641 + ], + "type": "text", + "content": "To comprehensively assess the performance of VLMs with a thinking process on COLORBENCH, beyond the proprietary models with chain-of-thought (CoT) prompting, we additionally conduct experiments with GPT-o3 on perception and reasoning tasks. GPT-o3 is the most recent powerful proprietary VLM, trained with reinforcement learning to think before answering. We use the API version of GPT-o3 (2025-04-16) for evaluation. The results are shown in Table 8, together with results of CoT prompting and human evaluation." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 645, + 506, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 645, + 506, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 645, + 506, + 723 + ], + "type": "text", + "content": "The results presented in Table 8 indicate that human evaluators achieve the highest performance across the majority of tasks, except for three specific categories: Object Recognition (O'Recog), Color Proportion (C'Prop), and Color Comparison (C'Comp), where GPT-o3 holds the highest scores. The performance differences between GPT-o3 and human evaluators on O'Recog and C'Comp tasks are relatively minor (less than " + }, + { + "bbox": [ + 104, + 645, + 506, + 723 + ], + "type": "inline_equation", + "content": "3\\%" + }, + { + "bbox": [ + 104, + 645, + 506, + 723 + ], + "type": "text", + "content": "). 
However, GPT-o3 substantially outperforms both humans and other VLMs on the C'Prop task, with an advantage exceeding " + }, + { + "bbox": [ + 104, + 645, + 506, + 723 + ], + "type": "inline_equation", + "content": "12\\%" + }, + { + "bbox": [ + 104, + 645, + 506, + 723 + ], + "type": "text", + "content": ". This significant gap on C'Prop aligns with expectations, as humans generally exhibit lower sensitivity to precise quantitative measures." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "19" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 18 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 72, + 504, + 95 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 72, + 504, + 95 + ], + "spans": [ + { + "bbox": [ + 104, + 72, + 504, + 95 + ], + "type": "text", + "content": "Meanwhile, GPT-o3 benefits from including the capability to utilize analytical tools for precise image assessments and continuous exhaustive visual search [26] to obtain better proportion estimations." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 100, + 506, + 189 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 100, + 506, + 189 + ], + "spans": [ + { + "bbox": [ + 104, + 100, + 506, + 189 + ], + "type": "text", + "content": "On the remaining tasks, GPT-o3 consistently outperforms GPT-4o (CoT) and Gemini-2-flash (CoT), except for the Color Blindness (C'Blind) task, where GPT-o3 trails GPT-4o (CoT) by " + }, + { + "bbox": [ + 104, + 100, + 506, + 189 + ], + "type": "inline_equation", + "content": "3.7\\%" + }, + { + "bbox": [ + 104, + 100, + 506, + 189 + ], + "type": "text", + "content": ". 
The C'Blind task requires VLMs to accurately identify numbers or strings in an image composed of colored dots. This task demands precise color recognition combined with holistic spatial perception. One plausible reason for GPT-o3's inferior performance is its longer and more complex reasoning path, which may lead to overthinking. This might cause the model to focus too much on local details or the choice of tools, at the expense of the global and intuitive perception needed for this task." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 192, + 506, + 237 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 192, + 506, + 237 + ], + "spans": [ + { + "bbox": [ + 104, + 192, + 506, + 237 + ], + "type": "text", + "content": "Overall, these findings highlight the relative strengths and weaknesses of current advanced VLMs compared to human evaluators. Importantly, there remains substantial room for improvement in VLM capabilities, as significant performance gaps persist between VLMs and humans, particularly in reasoning-intensive tasks." + } + ] + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 106, + 308, + 504, + 387 + ], + "blocks": [ + { + "bbox": [ + 104, + 254, + 507, + 308 + ], + "lines": [ + { + "bbox": [ + 104, + 254, + 507, + 308 + ], + "spans": [ + { + "bbox": [ + 104, + 254, + 507, + 308 + ], + "type": "text", + "content": "Table 8: Performance of proprietary reasoning models with thinking processes on Color Perception and Reasoning Tasks. Models are ranked based on their overall performance on color perception and reasoning (P & R Overall) tasks. The best-performing model within the VLM group is highlighted in bold. For human evaluation, any instance that exceeds the performance of all VLMs is also highlighted in bold."
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 106, + 308, + 504, + 387 + ], + "lines": [ + { + "bbox": [ + 106, + 308, + 504, + 387 + ], + "spans": [ + { + "bbox": [ + 106, + 308, + 504, + 387 + ], + "type": "table", + "html": "
<table><tr><td></td><td colspan=\"3\">Color Perception</td><td colspan=\"7\">Color Reasoning</td><td>P & R</td></tr>
<tr><td></td><td>C'Recog</td><td>C'Extract</td><td>O'Recog</td><td>C'Prop</td><td>C'Comp</td><td>C'Count</td><td>O'Count</td><td>C'Illu</td><td>C'Mimic</td><td>C'Blind</td><td>Overall</td></tr>
<tr><td colspan=\"12\">VLMs: Proprietary</td></tr>
<tr><td>GPT-4o (CoT)</td><td>77.6</td><td>55.2</td><td>83.1</td><td>44.4</td><td>71.3</td><td>26.5</td><td>33.0</td><td>44.1</td><td>77.1</td><td>66.8</td><td>57.4</td></tr>
<tr><td>Gemini-2-flash (CoT)</td><td>82.9</td><td>56.2</td><td>88.3</td><td>58.0</td><td>68.3</td><td>43.1</td><td>38.8</td><td>40.9</td><td>75.7</td><td>60.0</td><td>59.6</td></tr>
<tr><td>GPT-o3 (API)</td><td>84.2</td><td>57.2</td><td>92.2</td><td>71.6</td><td>82.2</td><td>46.1</td><td>45.6</td><td>58.1</td><td>80.0</td><td>63.1</td><td>66.4</td></tr>
<tr><td colspan=\"12\">Human Evaluation</td></tr>
<tr><td>Human Evaluation</td><td>92.0</td><td>-</td><td>90.1</td><td>59.6</td><td>79.8</td><td>62.0</td><td>81.3</td><td>63.0</td><td>83.8</td><td>94.0</td><td>-</td></tr></table>
", + "image_path": "6696a3e56dcd41106cc9520c97ca6ef997d92e3da4928d10da388f6eb66d04e7.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 408, + 313, + 422 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 408, + 313, + 422 + ], + "spans": [ + { + "bbox": [ + 105, + 408, + 313, + 422 + ], + "type": "text", + "content": "I Qualitative Analysis of Failure Cases" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 433, + 504, + 489 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 433, + 504, + 489 + ], + "spans": [ + { + "bbox": [ + 104, + 433, + 504, + 489 + ], + "type": "text", + "content": "To gain deeper insights into VLM failures on color-related tasks, we conduct a detailed case analysis using Qwen2.5-VL-3B and 7B models on different tasks. Following the attention visualization methodology of Zhang et al. [49], we focus on instances where the 3B model fails but the 7B model succeeds, allowing a clearer examination of the underlying capability differences. The visualizations of attention maps are shown in Figures 17 to 25." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 493, + 506, + 560 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 493, + 506, + 560 + ], + "spans": [ + { + "bbox": [ + 104, + 493, + 506, + 560 + ], + "type": "text", + "content": "For Color Perception tasks, we analyze the Color Recognition and Object Recognition tasks (excluding Color Extraction, which contains single-color images). Our preliminary findings show that only a small number of failures arise from incorrect object localization. In most cases, both models correctly attend to the relevant regions but still produce incorrect predictions. This indicates that the failures stem from VLMs' inability to accurately interpret color information rather than from deficiencies in visual grounding on these basic perception tasks."
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 563, + 506, + 653 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 563, + 506, + 653 + ], + "spans": [ + { + "bbox": [ + 104, + 563, + 506, + 653 + ], + "type": "text", + "content": "Among the Color Reasoning tasks, Color Proportion, Color Comparison, Color Counting, and Color Illusion require integrating visual information across the entire image without a clear focus point. Attention maps show that both 3B and 7B models exhibit similar focus patterns but generate different answers, implying that the divergence mainly originates from the language reasoning component rather than the visual encoder. For tasks with explicit perception targets, including Object Counting, Color Mimicry, and Color Blindness, both models attend to the correct regions, yet the 3B model often fails to produce accurate predictions. These results reveal that current VLMs remain weak in color interpretability even when their attention is properly aligned."
+ } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 312, + 751 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 312, + 751 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 312, + 751 + ], + "type": "text", + "content": "20" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 19 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 212, + 76, + 282, + 146 + ], + "blocks": [ + { + "bbox": [ + 212, + 76, + 282, + 146 + ], + "lines": [ + { + "bbox": [ + 212, + 76, + 282, + 146 + ], + "spans": [ + { + "bbox": [ + 212, + 76, + 282, + 146 + ], + "type": "image", + "image_path": "971e87a767c2d02708a7cea8a3800adeff0ccc472145183945234fcecbb87169.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 285, + 80, + 386, + 87 + ], + "lines": [ + { + "bbox": [ + 285, + 80, + 386, + 87 + ], + "spans": [ + { + "bbox": [ + 285, + 80, + 386, + 87 + ], + "type": "text", + "content": "What is the color of the banana in this" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 285, + 91, + 304, + 98 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 285, + 91, + 304, + 98 + ], + "spans": [ + { + "bbox": [ + 285, + 91, + 304, + 98 + ], + "type": "text", + "content": "image?" 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 285, + 101, + 304, + 107 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 285, + 101, + 304, + 107 + ], + "spans": [ + { + "bbox": [ + 285, + 101, + 304, + 107 + ], + "type": "text", + "content": "A: Red" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 325, + 101, + 349, + 107 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 325, + 101, + 349, + 107 + ], + "spans": [ + { + "bbox": [ + 325, + 101, + 349, + 107 + ], + "type": "text", + "content": "B:Green" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 285, + 110, + 309, + 117 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 285, + 110, + 309, + 117 + ], + "spans": [ + { + "bbox": [ + 285, + 110, + 309, + 117 + ], + "type": "text", + "content": "C:Yellow" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 325, + 110, + 347, + 117 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 325, + 110, + 347, + 117 + ], + "spans": [ + { + "bbox": [ + 325, + 110, + 347, + 117 + ], + "type": "text", + "content": "D: Black" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 285, + 121, + 339, + 127 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 285, + 121, + 339, + 127 + ], + "spans": [ + { + "bbox": [ + 285, + 121, + 339, + 127 + ], + "type": "text", + "content": "E: None of the above" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 285, + 130, + 304, + 137 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 285, + 130, + 304, + 137 + ], + "spans": [ + { + "bbox": [ + 285, + 130, + 304, + 137 + ], + "type": "text", + "content": "Ans: E" + } + ] + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 166, + 149, + 445, + 268 + ], + "blocks": [ + { + "bbox": [ + 166, + 149, + 445, + 268 + ], + "lines": [ + { + "bbox": [ + 166, + 149, + 445, + 268 + ], + "spans": [ + { + "bbox": [ + 166, + 149, + 445, + 268 + ], + "type": "image", + 
"image_path": "5087bbbb5f96b492d6b311016dcce02b6e4f12ecd9e9eba8e797faa0bdecce5e.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 162, + 274, + 447, + 286 + ], + "lines": [ + { + "bbox": [ + 162, + 274, + 447, + 286 + ], + "spans": [ + { + "bbox": [ + 162, + 274, + 447, + 286 + ], + "type": "text", + "content": "Figure 17: Visualized Attention Maps for Color Recognition Tasks." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 192, + 301, + 300, + 373 + ], + "blocks": [ + { + "bbox": [ + 192, + 301, + 300, + 373 + ], + "lines": [ + { + "bbox": [ + 192, + 301, + 300, + 373 + ], + "spans": [ + { + "bbox": [ + 192, + 301, + 300, + 373 + ], + "type": "image", + "image_path": "50635e4a4b1df714a947e01dc9ddecc80979b357b7db276e0f815d4b4e049a57.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 301, + 308, + 395, + 315 + ], + "lines": [ + { + "bbox": [ + 301, + 308, + 395, + 315 + ], + "spans": [ + { + "bbox": [ + 301, + 308, + 395, + 315 + ], + "type": "text", + "content": "What object has green color in this" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 318, + 324, + 326 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 318, + 324, + 326 + ], + "spans": [ + { + "bbox": [ + 302, + 318, + 324, + 326 + ], + "type": "text", + "content": "image?" 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 328, + 325, + 335 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 328, + 325, + 335 + ], + "spans": [ + { + "bbox": [ + 302, + 328, + 325, + 335 + ], + "type": "text", + "content": "A: Grass" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 343, + 327, + 369, + 334 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 343, + 327, + 369, + 334 + ], + "spans": [ + { + "bbox": [ + 343, + 327, + 369, + 334 + ], + "type": "text", + "content": "B:Flower" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 338, + 321, + 344 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 338, + 321, + 344 + ], + "spans": [ + { + "bbox": [ + 302, + 338, + 321, + 344 + ], + "type": "text", + "content": "C:Leaf" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 343, + 338, + 363, + 344 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 343, + 338, + 363, + 344 + ], + "spans": [ + { + "bbox": [ + 343, + 338, + 363, + 344 + ], + "type": "text", + "content": "D: Fruit" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 348, + 321, + 354 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 348, + 321, + 354 + ], + "spans": [ + { + "bbox": [ + 302, + 348, + 321, + 354 + ], + "type": "text", + "content": "Ans: C" + } + ] + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 166, + 376, + 445, + 469 + ], + "blocks": [ + { + "bbox": [ + 166, + 376, + 445, + 469 + ], + "lines": [ + { + "bbox": [ + 166, + 376, + 445, + 469 + ], + "spans": [ + { + "bbox": [ + 166, + 376, + 445, + 469 + ], + "type": "image", + "image_path": "fa210125aa3d22e54cb9811de70703cd5921bf9d29a5e7a01dd3a531b460f26c.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 160, + 475, + 449, + 488 + ], + "lines": [ + { + "bbox": [ + 160, + 475, + 449, + 488 + ], + "spans": [ + { + "bbox": 
[ + 160, + 475, + 449, + 488 + ], + "type": "text", + "content": "Figure 18: Visualized Attention Maps for Object Recognition Tasks." + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_caption" + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 214, + 508, + 276, + 571 + ], + "blocks": [ + { + "bbox": [ + 214, + 508, + 276, + 571 + ], + "lines": [ + { + "bbox": [ + 214, + 508, + 276, + 571 + ], + "spans": [ + { + "bbox": [ + 214, + 508, + 276, + 571 + ], + "type": "image", + "image_path": "add590e2395c5b4a230b5e76843887f0bfd0c9e74e535b99ab676e4a85929d4e.jpg" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 287, + 510, + 379, + 517 + ], + "lines": [ + { + "bbox": [ + 287, + 510, + 379, + 517 + ], + "spans": [ + { + "bbox": [ + 287, + 510, + 379, + 517 + ], + "type": "text", + "content": "What color in the pie chart has the" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 287, + 519, + 360, + 526 + ], + "lines": [ + { + "bbox": [ + 287, + 519, + 360, + 526 + ], + "spans": [ + { + "bbox": [ + 287, + 519, + 360, + 526 + ], + "type": "text", + "content": "proportion closest to " + }, + { + "bbox": [ + 287, + 519, + 360, + 526 + ], + "type": "inline_equation", + "content": "25\\%" + }, + { + "bbox": [ + 287, + 519, + 360, + 526 + ], + "type": "text", + "content": "?" 
+ } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_caption" + } + ], + "index": 21 + }, + { + "bbox": [ + 287, + 529, + 351, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 287, + 529, + 351, + 536 + ], + "spans": [ + { + "bbox": [ + 287, + 529, + 351, + 536 + ], + "type": "text", + "content": "A: Light blue B:Green" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 287, + 540, + 348, + 547 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 287, + 540, + 348, + 547 + ], + "spans": [ + { + "bbox": [ + 287, + 540, + 348, + 547 + ], + "type": "text", + "content": "C: Purple D:Cyan" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 287, + 550, + 307, + 555 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 287, + 550, + 307, + 555 + ], + "spans": [ + { + "bbox": [ + 287, + 550, + 307, + 555 + ], + "type": "text", + "content": "Ans: A" + } + ] + } + ], + "index": 26 + }, + { + "type": "image", + "bbox": [ + 166, + 578, + 445, + 696 + ], + "blocks": [ + { + "bbox": [ + 166, + 578, + 445, + 696 + ], + "lines": [ + { + "bbox": [ + 166, + 578, + 445, + 696 + ], + "spans": [ + { + "bbox": [ + 166, + 578, + 445, + 696 + ], + "type": "image", + "image_path": "c6facafc15e401d6c68425642e147e60adf5498011430644825bbd7ee0537c12.jpg" + } + ] + } + ], + "index": 27, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 164, + 703, + 444, + 715 + ], + "lines": [ + { + "bbox": [ + 164, + 703, + 444, + 715 + ], + "spans": [ + { + "bbox": [ + 164, + 703, + 444, + 715 + ], + "type": "text", + "content": "Figure 19: Visualized Attention Maps for Color Proportion Tasks." 
+ } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_caption" + } + ], + "index": 27 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "type": "text", + "content": "21" + } + ] + } + ], + "index": 29 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 20 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 201, + 75, + 288, + 148 + ], + "blocks": [ + { + "bbox": [ + 201, + 75, + 288, + 148 + ], + "lines": [ + { + "bbox": [ + 201, + 75, + 288, + 148 + ], + "spans": [ + { + "bbox": [ + 201, + 75, + 288, + 148 + ], + "type": "image", + "image_path": "e2968b8a9c0fd3c158e3bea02d271adcea3ac376cd9b89fff66f51a56e443633.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 292, + 87, + 405, + 95 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 292, + 87, + 405, + 95 + ], + "spans": [ + { + "bbox": [ + 292, + 87, + 405, + 95 + ], + "type": "text", + "content": "Which lipstick in this image is the darkest" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 293, + 97, + 312, + 104 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 97, + 312, + 104 + ], + "spans": [ + { + "bbox": [ + 293, + 97, + 312, + 104 + ], + "type": "text", + "content": "color?" 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 293, + 106, + 315, + 114 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 106, + 315, + 114 + ], + "spans": [ + { + "bbox": [ + 293, + 106, + 315, + 114 + ], + "type": "text", + "content": "A:ACAI" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 353, + 107, + 386, + 114 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 353, + 107, + 386, + 114 + ], + "spans": [ + { + "bbox": [ + 353, + 107, + 386, + 114 + ], + "type": "text", + "content": "B: SANGRIA" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 293, + 117, + 339, + 124 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 117, + 339, + 124 + ], + "spans": [ + { + "bbox": [ + 293, + 117, + 339, + 124 + ], + "type": "text", + "content": "C:PASSION RED" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 353, + 117, + 389, + 124 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 353, + 117, + 389, + 124 + ], + "spans": [ + { + "bbox": [ + 353, + 117, + 389, + 124 + ], + "type": "text", + "content": "D: PINK CLAY" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 293, + 127, + 313, + 134 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 127, + 313, + 134 + ], + "spans": [ + { + "bbox": [ + 293, + 127, + 313, + 134 + ], + "type": "text", + "content": "Ans: A" + } + ] + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 166, + 169, + 211, + 206 + ], + "blocks": [ + { + "bbox": [ + 166, + 169, + 211, + 206 + ], + "lines": [ + { + "bbox": [ + 166, + 169, + 211, + 206 + ], + "spans": [ + { + "bbox": [ + 166, + 169, + 211, + 206 + ], + "type": "image", + "image_path": "ebe28c76df70c5ce8ccb97d1d332bdbb848b826e49a2cb8661c134c846d09ceb.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 212, + 169, + 257, + 206 + ], + "blocks": [ + { + "bbox": [ + 212, + 169, 
+ 257, + 206 + ], + "lines": [ + { + "bbox": [ + 212, + 169, + 257, + 206 + ], + "spans": [ + { + "bbox": [ + 212, + 169, + 257, + 206 + ], + "type": "image", + "image_path": "a07a140720b03acc33118f625e4d50c37e4c46e232872dbe80336db897030531.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 259, + 169, + 304, + 206 + ], + "blocks": [ + { + "bbox": [ + 283, + 160, + 328, + 167 + ], + "lines": [ + { + "bbox": [ + 283, + 160, + 328, + 167 + ], + "spans": [ + { + "bbox": [ + 283, + 160, + 328, + 167 + ], + "type": "text", + "content": "Qwen2.5-VL-3B" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 259, + 169, + 304, + 206 + ], + "lines": [ + { + "bbox": [ + 259, + 169, + 304, + 206 + ], + "spans": [ + { + "bbox": [ + 259, + 169, + 304, + 206 + ], + "type": "image", + "image_path": "502803c4b25067d3812819d9156ff26c57eba1d40729001effc16d7db38567cc.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 307, + 169, + 351, + 206 + ], + "blocks": [ + { + "bbox": [ + 307, + 169, + 351, + 206 + ], + "lines": [ + { + "bbox": [ + 307, + 169, + 351, + 206 + ], + "spans": [ + { + "bbox": [ + 307, + 169, + 351, + 206 + ], + "type": "image", + "image_path": "dad9c742ce073687e861db5cbdc225cf71a5e83bfd896f85a0eb676ba55ea560.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 353, + 169, + 397, + 206 + ], + "blocks": [ + { + "bbox": [ + 353, + 169, + 397, + 206 + ], + "lines": [ + { + "bbox": [ + 353, + 169, + 397, + 206 + ], + "spans": [ + { + "bbox": [ + 353, + 169, + 397, + 206 + ], + "type": "image", + "image_path": "b39f08f18e170c13c05003ddcd77bfc2996d090dfb6e4475ca2d89263859aeec.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + } + ], + "index": 13 + }, + 
{ + "type": "image", + "bbox": [ + 400, + 169, + 444, + 206 + ], + "blocks": [ + { + "bbox": [ + 400, + 169, + 444, + 206 + ], + "lines": [ + { + "bbox": [ + 400, + 169, + 444, + 206 + ], + "spans": [ + { + "bbox": [ + 400, + 169, + 444, + 206 + ], + "type": "image", + "image_path": "198e05f55f9336c87de7bb4cbdd438d7f2edcbcb1590f30c3cd73974e0cdc09a.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 166, + 224, + 210, + 261 + ], + "blocks": [ + { + "bbox": [ + 166, + 224, + 210, + 261 + ], + "lines": [ + { + "bbox": [ + 166, + 224, + 210, + 261 + ], + "spans": [ + { + "bbox": [ + 166, + 224, + 210, + 261 + ], + "type": "image", + "image_path": "af39cdfe500e95bdd08905edb4749d8129a2f8ee61d64bafab000d32e728a7c0.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 162, + 269, + 448, + 281 + ], + "lines": [ + { + "bbox": [ + 162, + 269, + 448, + 281 + ], + "spans": [ + { + "bbox": [ + 162, + 269, + 448, + 281 + ], + "type": "text", + "content": "Figure 20: Visualized Attention Maps for Color Comparison Tasks." 
+ } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_caption" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 212, + 224, + 257, + 261 + ], + "blocks": [ + { + "bbox": [ + 212, + 224, + 257, + 261 + ], + "lines": [ + { + "bbox": [ + 212, + 224, + 257, + 261 + ], + "spans": [ + { + "bbox": [ + 212, + 224, + 257, + 261 + ], + "type": "image", + "image_path": "a2103a3962c6d4be98739201fc14b55d24278707289c018a67f8a5309310c679.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 259, + 224, + 304, + 261 + ], + "blocks": [ + { + "bbox": [ + 283, + 215, + 328, + 223 + ], + "lines": [ + { + "bbox": [ + 283, + 215, + 328, + 223 + ], + "spans": [ + { + "bbox": [ + 283, + 215, + 328, + 223 + ], + "type": "text", + "content": "Qwen2.5-VL-7B" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 259, + 224, + 304, + 261 + ], + "lines": [ + { + "bbox": [ + 259, + 224, + 304, + 261 + ], + "spans": [ + { + "bbox": [ + 259, + 224, + 304, + 261 + ], + "type": "image", + "image_path": "d3a29f42cb22cd1ea8c99c241ac8c5d1bfd2c1b5f3cce2cddd10a0ca1eab4d6d.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 307, + 224, + 351, + 261 + ], + "blocks": [ + { + "bbox": [ + 307, + 224, + 351, + 261 + ], + "lines": [ + { + "bbox": [ + 307, + 224, + 351, + 261 + ], + "spans": [ + { + "bbox": [ + 307, + 224, + 351, + 261 + ], + "type": "image", + "image_path": "77dc27ad408af46dbcd03238321afb88286d84c2b4ed903c844c328624a0bbbb.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 353, + 224, + 397, + 261 + ], + "blocks": [ + { + "bbox": [ + 353, + 224, + 397, + 261 + ], + "lines": [ + { + "bbox": [ + 353, + 224, + 397, + 261 + ], + "spans": [ + { + "bbox": [ + 353, + 224, + 397, + 
261 + ], + "type": "image", + "image_path": "3570068575ee9af5b65b70a0654db870b9a2617c50a7f2c9a7a727687dd8e1e9.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 400, + 224, + 444, + 261 + ], + "blocks": [ + { + "bbox": [ + 400, + 224, + 444, + 261 + ], + "lines": [ + { + "bbox": [ + 400, + 224, + 444, + 261 + ], + "spans": [ + { + "bbox": [ + 400, + 224, + 444, + 261 + ], + "type": "image", + "image_path": "f57fbd9ffd01f21190facbf62662759bac7e341fb7bf692d83794e59d59daf9a.jpg" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_body" + } + ], + "index": 21 + }, + { + "type": "image", + "bbox": [ + 209, + 297, + 282, + 369 + ], + "blocks": [ + { + "bbox": [ + 209, + 297, + 282, + 369 + ], + "lines": [ + { + "bbox": [ + 209, + 297, + 282, + 369 + ], + "spans": [ + { + "bbox": [ + 209, + 297, + 282, + 369 + ], + "type": "image", + "image_path": "3a3c3dd6e00e5e5f63dcc443900b3048b1881233c93d46a9c26c0b87f2f99798.jpg" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_body" + } + ], + "index": 23 + }, + { + "bbox": [ + 286, + 310, + 393, + 317 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 310, + 393, + 317 + ], + "spans": [ + { + "bbox": [ + 286, + 310, + 393, + 317 + ], + "type": "text", + "content": "How many colors are used for arrows in" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 286, + 319, + 319, + 327 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 319, + 319, + 327 + ], + "spans": [ + { + "bbox": [ + 286, + 319, + 319, + 327 + ], + "type": "text", + "content": "this image?" 
+ } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 286, + 329, + 338, + 337 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 329, + 338, + 337 + ], + "spans": [ + { + "bbox": [ + 286, + 329, + 338, + 337 + ], + "type": "text", + "content": "A:6 B:7" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 286, + 339, + 339, + 346 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 339, + 339, + 346 + ], + "spans": [ + { + "bbox": [ + 286, + 339, + 339, + 346 + ], + "type": "text", + "content": "C:8 D:9" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 286, + 349, + 306, + 356 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 349, + 306, + 356 + ], + "spans": [ + { + "bbox": [ + 286, + 349, + 306, + 356 + ], + "type": "text", + "content": "Ans: A" + } + ] + } + ], + "index": 28 + }, + { + "type": "image", + "bbox": [ + 166, + 392, + 210, + 436 + ], + "blocks": [ + { + "bbox": [ + 166, + 392, + 210, + 436 + ], + "lines": [ + { + "bbox": [ + 166, + 392, + 210, + 436 + ], + "spans": [ + { + "bbox": [ + 166, + 392, + 210, + 436 + ], + "type": "image", + "image_path": "b6d5282bc92abd52d6becf2f7340a6ae9ca1a48d6920ddddaa746fcf8782aa9f.jpg" + } + ] + } + ], + "index": 30, + "angle": 0, + "type": "image_body" + } + ], + "index": 30 + }, + { + "type": "image", + "bbox": [ + 212, + 392, + 257, + 436 + ], + "blocks": [ + { + "bbox": [ + 212, + 392, + 257, + 436 + ], + "lines": [ + { + "bbox": [ + 212, + 392, + 257, + 436 + ], + "spans": [ + { + "bbox": [ + 212, + 392, + 257, + 436 + ], + "type": "image", + "image_path": "4116f0b5b49af5a3cac51843675a4317a13142a281145e9039747c9e002e759a.jpg" + } + ] + } + ], + "index": 31, + "angle": 0, + "type": "image_body" + } + ], + "index": 31 + }, + { + "type": "image", + "bbox": [ + 259, + 392, + 304, + 436 + ], + "blocks": [ + { + "bbox": [ + 283, + 383, + 328, + 389 + ], + "lines": [ + { + "bbox": [ + 283, + 383, + 328, + 389 + ], + "spans": [ + { + "bbox": [ 
+ 283, + 383, + 328, + 389 + ], + "type": "text", + "content": "Qwen2.5-VL-3B" + } + ] + } + ], + "index": 29, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 259, + 392, + 304, + 436 + ], + "lines": [ + { + "bbox": [ + 259, + 392, + 304, + 436 + ], + "spans": [ + { + "bbox": [ + 259, + 392, + 304, + 436 + ], + "type": "image", + "image_path": "43e38632a2ee3658648a88819e5fe95c13a28ae4333204b823dde3d1cd09cf97.jpg" + } + ] + } + ], + "index": 32, + "angle": 0, + "type": "image_body" + } + ], + "index": 32 + }, + { + "type": "image", + "bbox": [ + 307, + 392, + 351, + 436 + ], + "blocks": [ + { + "bbox": [ + 307, + 392, + 351, + 436 + ], + "lines": [ + { + "bbox": [ + 307, + 392, + 351, + 436 + ], + "spans": [ + { + "bbox": [ + 307, + 392, + 351, + 436 + ], + "type": "image", + "image_path": "585028e2d842e3528dba16b1de61dc399959caf042a242ea0841d7cb057a7e37.jpg" + } + ] + } + ], + "index": 33, + "angle": 0, + "type": "image_body" + } + ], + "index": 33 + }, + { + "type": "image", + "bbox": [ + 353, + 392, + 397, + 436 + ], + "blocks": [ + { + "bbox": [ + 353, + 392, + 397, + 436 + ], + "lines": [ + { + "bbox": [ + 353, + 392, + 397, + 436 + ], + "spans": [ + { + "bbox": [ + 353, + 392, + 397, + 436 + ], + "type": "image", + "image_path": "3af500d9cb45fba5c4a73861998a283c8a9cc70fb4cf8e372f7ca263f0feb27e.jpg" + } + ] + } + ], + "index": 34, + "angle": 0, + "type": "image_body" + } + ], + "index": 34 + }, + { + "type": "image", + "bbox": [ + 400, + 392, + 444, + 436 + ], + "blocks": [ + { + "bbox": [ + 400, + 392, + 444, + 436 + ], + "lines": [ + { + "bbox": [ + 400, + 392, + 444, + 436 + ], + "spans": [ + { + "bbox": [ + 400, + 392, + 444, + 436 + ], + "type": "image", + "image_path": "1951cf69fe3a3f287632b972067456bce819b93ec6831e1889e94c9101a2fe8f.jpg" + } + ] + } + ], + "index": 35, + "angle": 0, + "type": "image_body" + } + ], + "index": 35 + }, + { + "type": "image", + "bbox": [ + 166, + 452, + 210, + 495 + ], + "blocks": [ + { + "bbox": [ + 166, +
452, + 210, + 495 + ], + "lines": [ + { + "bbox": [ + 166, + 452, + 210, + 495 + ], + "spans": [ + { + "bbox": [ + 166, + 452, + 210, + 495 + ], + "type": "image", + "image_path": "a1f9a6f7c1bcbfdeee124bd440f0aa018fa48c6ce34f5c7f172fd96f97a49ed0.jpg" + } + ] + } + ], + "index": 37, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 168, + 502, + 441, + 515 + ], + "lines": [ + { + "bbox": [ + 168, + 502, + 441, + 515 + ], + "spans": [ + { + "bbox": [ + 168, + 502, + 441, + 515 + ], + "type": "text", + "content": "Figure 21: Visualized Attention Maps for Color Counting Tasks." + } + ] + } + ], + "index": 43, + "angle": 0, + "type": "image_caption" + } + ], + "index": 37 + }, + { + "type": "image", + "bbox": [ + 212, + 452, + 257, + 495 + ], + "blocks": [ + { + "bbox": [ + 212, + 452, + 257, + 495 + ], + "lines": [ + { + "bbox": [ + 212, + 452, + 257, + 495 + ], + "spans": [ + { + "bbox": [ + 212, + 452, + 257, + 495 + ], + "type": "image", + "image_path": "fdb4a842f5ab20016d34fb60569fa8554f488ee6c5170b4dd8d45b0dcbfa4292.jpg" + } + ] + } + ], + "index": 38, + "angle": 0, + "type": "image_body" + } + ], + "index": 38 + }, + { + "type": "image", + "bbox": [ + 259, + 452, + 304, + 495 + ], + "blocks": [ + { + "bbox": [ + 283, + 443, + 328, + 450 + ], + "lines": [ + { + "bbox": [ + 283, + 443, + 328, + 450 + ], + "spans": [ + { + "bbox": [ + 283, + 443, + 328, + 450 + ], + "type": "text", + "content": "Qwen2.5-VL-7B" + } + ] + } + ], + "index": 36, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 259, + 452, + 304, + 495 + ], + "lines": [ + { + "bbox": [ + 259, + 452, + 304, + 495 + ], + "spans": [ + { + "bbox": [ + 259, + 452, + 304, + 495 + ], + "type": "image", + "image_path": "13d7883fc7e827bcac012b1fb2ab964aaf7a3265f1198697e64b61ea9e81398d.jpg" + } + ] + } + ], + "index": 39, + "angle": 0, + "type": "image_body" + } + ], + "index": 39 + }, + { + "type": "image", + "bbox": [ + 307, + 452, + 351, + 495 + ], + "blocks": [ + { + "bbox": [ + 307, + 
452, + 351, + 495 + ], + "lines": [ + { + "bbox": [ + 307, + 452, + 351, + 495 + ], + "spans": [ + { + "bbox": [ + 307, + 452, + 351, + 495 + ], + "type": "image", + "image_path": "59f5fe2516e44a500ab03863569ab00cc0d6016540860e0d0d57a00d8b095063.jpg" + } + ] + } + ], + "index": 40, + "angle": 0, + "type": "image_body" + } + ], + "index": 40 + }, + { + "type": "image", + "bbox": [ + 353, + 452, + 397, + 495 + ], + "blocks": [ + { + "bbox": [ + 353, + 452, + 397, + 495 + ], + "lines": [ + { + "bbox": [ + 353, + 452, + 397, + 495 + ], + "spans": [ + { + "bbox": [ + 353, + 452, + 397, + 495 + ], + "type": "image", + "image_path": "d20c644c5d2b9fc3e5d5d54434acdbc990b2c09733bc998ace81a4f93d129a70.jpg" + } + ] + } + ], + "index": 41, + "angle": 0, + "type": "image_body" + } + ], + "index": 41 + }, + { + "type": "image", + "bbox": [ + 400, + 452, + 444, + 495 + ], + "blocks": [ + { + "bbox": [ + 400, + 452, + 444, + 495 + ], + "lines": [ + { + "bbox": [ + 400, + 452, + 444, + 495 + ], + "spans": [ + { + "bbox": [ + 400, + 452, + 444, + 495 + ], + "type": "image", + "image_path": "0fb181a5b57dfa3e33bae5354fe1fdf5fd0148050df7315097aac6c71965aae6.jpg" + } + ] + } + ], + "index": 42, + "angle": 0, + "type": "image_body" + } + ], + "index": 42 + }, + { + "type": "image", + "bbox": [ + 192, + 531, + 301, + 604 + ], + "blocks": [ + { + "bbox": [ + 192, + 531, + 301, + 604 + ], + "lines": [ + { + "bbox": [ + 192, + 531, + 301, + 604 + ], + "spans": [ + { + "bbox": [ + 192, + 531, + 301, + 604 + ], + "type": "image", + "image_path": "ac8abab7a75fa8fb34bc4f332ee1c8a10d0f8ec6dd527f634fd140320687390f.jpg" + } + ] + } + ], + "index": 44, + "angle": 0, + "type": "image_body" + } + ], + "index": 44 + }, + { + "bbox": [ + 302, + 540, + 394, + 548 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 540, + 394, + 548 + ], + "spans": [ + { + "bbox": [ + 302, + 540, + 394, + 548 + ], + "type": "text", + "content": "How many gray animals are in this" + } + ] + } + ], + 
"index": 45 + }, + { + "bbox": [ + 302, + 550, + 324, + 557 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 550, + 324, + 557 + ], + "spans": [ + { + "bbox": [ + 302, + 550, + 324, + 557 + ], + "type": "text", + "content": "image?" + } + ] + } + ], + "index": 46 + }, + { + "bbox": [ + 302, + 559, + 313, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 559, + 313, + 567 + ], + "spans": [ + { + "bbox": [ + 302, + 559, + 313, + 567 + ], + "type": "text", + "content": "A:5" + } + ] + } + ], + "index": 47 + }, + { + "bbox": [ + 343, + 560, + 354, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 343, + 560, + 354, + 567 + ], + "spans": [ + { + "bbox": [ + 343, + 560, + 354, + 567 + ], + "type": "text", + "content": "B:6" + } + ] + } + ], + "index": 48 + }, + { + "bbox": [ + 302, + 569, + 313, + 576 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 569, + 313, + 576 + ], + "spans": [ + { + "bbox": [ + 302, + 569, + 313, + 576 + ], + "type": "text", + "content": "C:4" + } + ] + } + ], + "index": 49 + }, + { + "bbox": [ + 323, + 569, + 334, + 576 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 323, + 569, + 334, + 576 + ], + "spans": [ + { + "bbox": [ + 323, + 569, + 334, + 576 + ], + "type": "text", + "content": "D:3" + } + ] + } + ], + "index": 50 + }, + { + "bbox": [ + 342, + 569, + 353, + 576 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 342, + 569, + 353, + 576 + ], + "spans": [ + { + "bbox": [ + 342, + 569, + 353, + 576 + ], + "type": "text", + "content": "E:7" + } + ] + } + ], + "index": 51 + }, + { + "bbox": [ + 359, + 569, + 366, + 576 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 359, + 569, + 366, + 576 + ], + "spans": [ + { + "bbox": [ + 359, + 569, + 366, + 576 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 52 + }, + { + "bbox": [ + 302, + 579, + 322, + 586 + ], + "type": 
"text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 579, + 322, + 586 + ], + "spans": [ + { + "bbox": [ + 302, + 579, + 322, + 586 + ], + "type": "text", + "content": "Ans: C" + } + ] + } + ], + "index": 53 + }, + { + "type": "image", + "bbox": [ + 166, + 620, + 210, + 650 + ], + "blocks": [ + { + "bbox": [ + 166, + 620, + 210, + 650 + ], + "lines": [ + { + "bbox": [ + 166, + 620, + 210, + 650 + ], + "spans": [ + { + "bbox": [ + 166, + 620, + 210, + 650 + ], + "type": "image", + "image_path": "5b623d590d48725f8566e2b72e2d7732cdb7ff016844bd62d1289bd7e0fc9c50.jpg" + } + ] + } + ], + "index": 55, + "angle": 0, + "type": "image_body" + } + ], + "index": 55 + }, + { + "type": "image", + "bbox": [ + 212, + 620, + 257, + 650 + ], + "blocks": [ + { + "bbox": [ + 212, + 620, + 257, + 650 + ], + "lines": [ + { + "bbox": [ + 212, + 620, + 257, + 650 + ], + "spans": [ + { + "bbox": [ + 212, + 620, + 257, + 650 + ], + "type": "image", + "image_path": "c679b7bb01346a8afdd10c2c55d4a037959775080db0aeda3194595a676bb15b.jpg" + } + ] + } + ], + "index": 56, + "angle": 0, + "type": "image_body" + } + ], + "index": 56 + }, + { + "type": "image", + "bbox": [ + 259, + 620, + 304, + 650 + ], + "blocks": [ + { + "bbox": [ + 283, + 610, + 328, + 618 + ], + "lines": [ + { + "bbox": [ + 283, + 610, + 328, + 618 + ], + "spans": [ + { + "bbox": [ + 283, + 610, + 328, + 618 + ], + "type": "text", + "content": "Qwen2.5-VL-3B" + } + ] + } + ], + "index": 54, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 259, + 620, + 304, + 650 + ], + "lines": [ + { + "bbox": [ + 259, + 620, + 304, + 650 + ], + "spans": [ + { + "bbox": [ + 259, + 620, + 304, + 650 + ], + "type": "image", + "image_path": "d18d9f446eec8763b494d8efc0fdc2b1db35ca9af0a42f51df663670312291f1.jpg" + } + ] + } + ], + "index": 57, + "angle": 0, + "type": "image_body" + } + ], + "index": 57 + }, + { + "type": "image", + "bbox": [ + 307, + 620, + 351, + 650 + ], + "blocks": [ + { + "bbox": [ + 307, + 620, + 351, + 650 
+ ], + "lines": [ + { + "bbox": [ + 307, + 620, + 351, + 650 + ], + "spans": [ + { + "bbox": [ + 307, + 620, + 351, + 650 + ], + "type": "image", + "image_path": "6e51fde140ca697a915ea528fdd754f3797bb4a3669ea9d905dd543aa9136b99.jpg" + } + ] + } + ], + "index": 58, + "angle": 0, + "type": "image_body" + } + ], + "index": 58 + }, + { + "type": "image", + "bbox": [ + 353, + 620, + 397, + 650 + ], + "blocks": [ + { + "bbox": [ + 353, + 620, + 397, + 650 + ], + "lines": [ + { + "bbox": [ + 353, + 620, + 397, + 650 + ], + "spans": [ + { + "bbox": [ + 353, + 620, + 397, + 650 + ], + "type": "image", + "image_path": "413e8e196f43aef374359190442749dbc2b48bf22c997bb2562083749e9cda77.jpg" + } + ] + } + ], + "index": 59, + "angle": 0, + "type": "image_body" + } + ], + "index": 59 + }, + { + "type": "image", + "bbox": [ + 400, + 620, + 444, + 650 + ], + "blocks": [ + { + "bbox": [ + 400, + 620, + 444, + 650 + ], + "lines": [ + { + "bbox": [ + 400, + 620, + 444, + 650 + ], + "spans": [ + { + "bbox": [ + 400, + 620, + 444, + 650 + ], + "type": "image", + "image_path": "10ac1e7d129b832af82db614f4a21768f8dc6b3aaf75c45d9f27061e7678b206.jpg" + } + ] + } + ], + "index": 60, + "angle": 0, + "type": "image_body" + } + ], + "index": 60 + }, + { + "type": "image", + "bbox": [ + 166, + 667, + 210, + 696 + ], + "blocks": [ + { + "bbox": [ + 166, + 667, + 210, + 696 + ], + "lines": [ + { + "bbox": [ + 166, + 667, + 210, + 696 + ], + "spans": [ + { + "bbox": [ + 166, + 667, + 210, + 696 + ], + "type": "image", + "image_path": "01a88f419c52c026af431dd8e0219bc5c86fdaa4868c47c7885cf0e104b5b252.jpg" + } + ] + } + ], + "index": 62, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 165, + 703, + 443, + 715 + ], + "lines": [ + { + "bbox": [ + 165, + 703, + 443, + 715 + ], + "spans": [ + { + "bbox": [ + 165, + 703, + 443, + 715 + ], + "type": "text", + "content": "Figure 22: Visualized Attention Maps for Object Counting Tasks." 
+ } + ] + } + ], + "index": 68, + "angle": 0, + "type": "image_caption" + } + ], + "index": 62 + }, + { + "type": "image", + "bbox": [ + 212, + 667, + 257, + 696 + ], + "blocks": [ + { + "bbox": [ + 212, + 667, + 257, + 696 + ], + "lines": [ + { + "bbox": [ + 212, + 667, + 257, + 696 + ], + "spans": [ + { + "bbox": [ + 212, + 667, + 257, + 696 + ], + "type": "image", + "image_path": "bcd00c318f7f3748f7ddd8f40bb7f11ac253fa5d7594515bdcf550074b42b214.jpg" + } + ] + } + ], + "index": 63, + "angle": 0, + "type": "image_body" + } + ], + "index": 63 + }, + { + "type": "image", + "bbox": [ + 259, + 667, + 304, + 696 + ], + "blocks": [ + { + "bbox": [ + 283, + 658, + 328, + 665 + ], + "lines": [ + { + "bbox": [ + 283, + 658, + 328, + 665 + ], + "spans": [ + { + "bbox": [ + 283, + 658, + 328, + 665 + ], + "type": "text", + "content": "Qwen2.5-VL-7B" + } + ] + } + ], + "index": 61, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 259, + 667, + 304, + 696 + ], + "lines": [ + { + "bbox": [ + 259, + 667, + 304, + 696 + ], + "spans": [ + { + "bbox": [ + 259, + 667, + 304, + 696 + ], + "type": "image", + "image_path": "e1521cc88cda5b7132e19a9b6e08e1b236abd7de6b389882cd8d89ff8cd71f0c.jpg" + } + ] + } + ], + "index": 64, + "angle": 0, + "type": "image_body" + } + ], + "index": 64 + }, + { + "type": "image", + "bbox": [ + 307, + 667, + 351, + 696 + ], + "blocks": [ + { + "bbox": [ + 307, + 667, + 351, + 696 + ], + "lines": [ + { + "bbox": [ + 307, + 667, + 351, + 696 + ], + "spans": [ + { + "bbox": [ + 307, + 667, + 351, + 696 + ], + "type": "image", + "image_path": "ba26ce37a543827ab018fbb1147492ec152fee662a1e935170eefb74cfd6916a.jpg" + } + ] + } + ], + "index": 65, + "angle": 0, + "type": "image_body" + } + ], + "index": 65 + }, + { + "type": "image", + "bbox": [ + 353, + 667, + 397, + 696 + ], + "blocks": [ + { + "bbox": [ + 353, + 667, + 397, + 696 + ], + "lines": [ + { + "bbox": [ + 353, + 667, + 397, + 696 + ], + "spans": [ + { + "bbox": [ + 353, + 667, + 397, + 
696 + ], + "type": "image", + "image_path": "9645212959a5659a2b2b5517bde0fd806c561ee2ecbde8e706131d02d7602ead.jpg" + } + ] + } + ], + "index": 66, + "angle": 0, + "type": "image_body" + } + ], + "index": 66 + }, + { + "type": "image", + "bbox": [ + 400, + 667, + 444, + 696 + ], + "blocks": [ + { + "bbox": [ + 400, + 667, + 444, + 696 + ], + "lines": [ + { + "bbox": [ + 400, + 667, + 444, + 696 + ], + "spans": [ + { + "bbox": [ + 400, + 667, + 444, + 696 + ], + "type": "image", + "image_path": "84305a9086c242e1766b052b273d35d1f49d0530e1e427bc362698befb29a401.jpg" + } + ] + } + ], + "index": 67, + "angle": 0, + "type": "image_body" + } + ], + "index": 67 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "text", + "content": "22" + } + ] + } + ], + "index": 69 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 21 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 166, + 90, + 321, + 133 + ], + "blocks": [ + { + "bbox": [ + 166, + 90, + 321, + 133 + ], + "lines": [ + { + "bbox": [ + 166, + 90, + 321, + 133 + ], + "spans": [ + { + "bbox": [ + 166, + 90, + 321, + 133 + ], + "type": "image", + "image_path": "abc6371b7e79ce4293c09cde16fd2c34c1ee6af182d6a212a1eea8c3fd220603.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 328, + 90, + 439, + 109 + ], + "lines": [ + { + "bbox": [ + 328, + 90, + 439, + 109 + ], + "spans": [ + { + "bbox": [ + 328, + 90, + 439, + 109 + ], + "type": "text", + "content": "Which circles has the darkest color? 
The circles are numbered left to right starting" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 328, + 110, + 350, + 118 + ], + "lines": [ + { + "bbox": [ + 328, + 110, + 350, + 118 + ], + "spans": [ + { + "bbox": [ + 328, + 110, + 350, + 118 + ], + "type": "text", + "content": "from 1." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 328, + 120, + 369, + 128 + ], + "lines": [ + { + "bbox": [ + 328, + 120, + 369, + 128 + ], + "spans": [ + { + "bbox": [ + 328, + 120, + 369, + 128 + ], + "type": "text", + "content": "A: All the same" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 328, + 130, + 400, + 137 + ], + "lines": [ + { + "bbox": [ + 328, + 130, + 400, + 137 + ], + "spans": [ + { + "bbox": [ + 328, + 130, + 400, + 137 + ], + "type": "text", + "content": "C:2 D:3" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 166, + 159, + 445, + 217 + ], + "blocks": [ + { + "bbox": [ + 388, + 120, + 400, + 127 + ], + "lines": [ + { + "bbox": [ + 388, + 120, + 400, + 127 + ], + "spans": [ + { + "bbox": [ + 388, + 120, + 400, + 127 + ], + "type": "text", + "content": "B:1" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 328, + 140, + 350, + 147 + ], + "lines": [ + { + "bbox": [ + 328, + 140, + 350, + 147 + ], + "spans": [ + { + "bbox": [ + 328, + 140, + 350, + 147 + ], + "type": "text", + "content": "Ans: A" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 166, + 159, + 445, + 217 + ], + "lines": [ + { + "bbox": [ + 166, + 159, + 445, + 217 + ], + "spans": [ + { + "bbox": [ + 166, + 159, + 445, + 217 + ], + "type": "image", + "image_path": "2db69e23d144bf7a5e7712fc4b21a7ae5f301356cf2cdbcebb6681262bee666d.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + 
"type": "image_body" + }, + { + "bbox": [ + 171, + 223, + 439, + 236 + ], + "lines": [ + { + "bbox": [ + 171, + 223, + 439, + 236 + ], + "spans": [ + { + "bbox": [ + 171, + 223, + 439, + 236 + ], + "type": "text", + "content": "Figure 23: Visualized Attention Maps for Color Illusion Tasks." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 182, + 272, + 310, + 345 + ], + "blocks": [ + { + "bbox": [ + 182, + 272, + 310, + 345 + ], + "lines": [ + { + "bbox": [ + 182, + 272, + 310, + 345 + ], + "spans": [ + { + "bbox": [ + 182, + 272, + 310, + 345 + ], + "type": "image", + "image_path": "d7e6c7ad93864c2526094df0ff56240f5074c112d0eb2ab765f3a03b33ce042c.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 312, + 279, + 406, + 297 + ], + "lines": [ + { + "bbox": [ + 312, + 279, + 406, + 297 + ], + "spans": [ + { + "bbox": [ + 312, + 279, + 406, + 297 + ], + "type": "text", + "content": "How many black sea snakes in this images?" 
+ } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 312, + 299, + 364, + 307 + ], + "lines": [ + { + "bbox": [ + 312, + 299, + 364, + 307 + ], + "spans": [ + { + "bbox": [ + 312, + 299, + 364, + 307 + ], + "type": "text", + "content": "A:0 B:1" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 312, + 308, + 367, + 316 + ], + "lines": [ + { + "bbox": [ + 312, + 308, + 367, + 316 + ], + "spans": [ + { + "bbox": [ + 312, + 308, + 367, + 316 + ], + "type": "text", + "content": "C:2 D:3" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 313, + 319, + 334, + 326 + ], + "lines": [ + { + "bbox": [ + 313, + 319, + 334, + 326 + ], + "spans": [ + { + "bbox": [ + 313, + 319, + 334, + 326 + ], + "type": "text", + "content": "Ans: A" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 166, + 347, + 445, + 434 + ], + "blocks": [ + { + "bbox": [ + 166, + 347, + 445, + 434 + ], + "lines": [ + { + "bbox": [ + 166, + 347, + 445, + 434 + ], + "spans": [ + { + "bbox": [ + 166, + 347, + 445, + 434 + ], + "type": "image", + "image_path": "d6504c1ad7498e6665534d719eb3b9f61dd679660f6f92c13ebc02cdb8da3bb5.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 169, + 438, + 441, + 451 + ], + "lines": [ + { + "bbox": [ + 169, + 438, + 441, + 451 + ], + "spans": [ + { + "bbox": [ + 169, + 438, + 441, + 451 + ], + "type": "text", + "content": "Figure 24: Visualized Attention Maps for Color Mimicry Tasks." 
+ } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 210, + 489, + 279, + 559 + ], + "blocks": [ + { + "bbox": [ + 210, + 489, + 279, + 559 + ], + "lines": [ + { + "bbox": [ + 210, + 489, + 279, + 559 + ], + "spans": [ + { + "bbox": [ + 210, + 489, + 279, + 559 + ], + "type": "image", + "image_path": "e84813dd6436f2be3c2a5b1c9a618ed87b435b246a9f271093bc9aa695cd3f28.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 286, + 500, + 392, + 508 + ], + "lines": [ + { + "bbox": [ + 286, + 500, + 392, + 508 + ], + "spans": [ + { + "bbox": [ + 286, + 500, + 392, + 508 + ], + "type": "text", + "content": "What is the number in the center of this" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 286, + 510, + 308, + 518 + ], + "lines": [ + { + "bbox": [ + 286, + 510, + 308, + 518 + ], + "spans": [ + { + "bbox": [ + 286, + 510, + 308, + 518 + ], + "type": "text", + "content": "image?" 
+ } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 286, + 521, + 339, + 528 + ], + "lines": [ + { + "bbox": [ + 286, + 521, + 339, + 528 + ], + "spans": [ + { + "bbox": [ + 286, + 521, + 339, + 528 + ], + "type": "text", + "content": "A:4 B:7" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 286, + 530, + 342, + 537 + ], + "lines": [ + { + "bbox": [ + 286, + 530, + 342, + 537 + ], + "spans": [ + { + "bbox": [ + 286, + 530, + 342, + 537 + ], + "type": "text", + "content": "C:18 D:22" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 286, + 540, + 306, + 548 + ], + "lines": [ + { + "bbox": [ + 286, + 540, + 306, + 548 + ], + "spans": [ + { + "bbox": [ + 286, + 540, + 306, + 548 + ], + "type": "text", + "content": "Ans: C" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 166, + 567, + 445, + 685 + ], + "blocks": [ + { + "bbox": [ + 166, + 567, + 445, + 685 + ], + "lines": [ + { + "bbox": [ + 166, + 567, + 445, + 685 + ], + "spans": [ + { + "bbox": [ + 166, + 567, + 445, + 685 + ], + "type": "image", + "image_path": "de903f7ef6d2cd449ffbc8b99d7a07e385b6515dbe6f5eb135f50dc9800c77d1.jpg" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 167, + 692, + 443, + 704 + ], + "lines": [ + { + "bbox": [ + 167, + 692, + 443, + 704 + ], + "spans": [ + { + "bbox": [ + 167, + 692, + 443, + 704 + ], + "type": "text", + "content": "Figure 25: Visualized Attention Maps for Color Blindness Tasks." 
+ } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_caption" + } + ], + "index": 25 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "text", + "content": "23" + } + ] + } + ], + "index": 27 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 22 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 71, + 276, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 71, + 276, + 85 + ], + "spans": [ + { + "bbox": [ + 105, + 71, + 276, + 85 + ], + "type": "text", + "content": "J Effect of Different Modalities" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 95, + 506, + 150 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 95, + 506, + 150 + ], + "spans": [ + { + "bbox": [ + 104, + 95, + 506, + 150 + ], + "type": "text", + "content": "To investigate the impact of color information, we compare model performance on RGB versus grayscale images, thereby isolating the role of color within the image modality. To further explore the contribution of the image modality, we also conduct experiments using textual input only (questions and answer choices), where the original input images are substituted with pure black images of identical dimensions." + } + ] + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 106, + 190, + 504, + 401 + ], + "blocks": [ + { + "bbox": [ + 104, + 163, + 505, + 186 + ], + "lines": [ + { + "bbox": [ + 104, + 163, + 505, + 186 + ], + "spans": [ + { + "bbox": [ + 104, + 163, + 505, + 186 + ], + "type": "text", + "content": "Table 9: Average Accuracy (\\%) across three input settings (Text-only, Grayscale+Text, RGB+Text) on Color Perception and Reasoning tasks." 
+ } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 106, + 190, + 504, + 401 + ], + "lines": [ + { + "bbox": [ + 106, + 190, + 504, + 401 + ], + "spans": [ + { + "bbox": [ + 106, + 190, + 504, + 401 + ], + "type": "table", + "html": "
Color PerceptionColor ReasoningP & R
C'RecogC'ExtractO'RecogC'PropC'CompC'CountO'CountC'IlluC'MimicC'BlindOverall
VLMs: < 7B
Text-only29.230.631.629.635.324.520.635.541.723.429.3
Gray+Text25.933.542.729.137.123.223.342.453.723.032.1
RGB+Text55.335.763.637.342.422.526.137.550.625.037.4
VLMs: 7B - 8B
Text-only23.735.432.320.629.718.419.336.736.921.126.7
Gray+Text25.235.746.027.841.322.227.548.258.723.634.2
RGB+Text60.442.473.041.849.122.732.741.550.023.441.1
VLMs: 10B - 30B
Text-only26.933.632.825.034.726.522.338.240.018.928.9
Gray+Text26.837.946.822.546.522.430.143.060.326.035.0
RGB+Text68.441.579.743.051.325.334.433.855.426.643.2
VLMs: 30B - 70B
Text-only28.936.531.816.329.015.416.342.733.615.925.6
Gray+Text28.742.151.226.349.924.325.648.865.122.736.7
RGB+Text73.448.881.649.555.224.737.336.161.125.546.2
VLMs: > 70B
Text-only26.047.435.720.936.921.624.035.833.921.829.8
Gray+Text25.340.954.625.351.021.828.644.654.326.136.1
RGB+Text73.454.782.545.662.426.739.633.953.929.647.6
", + "image_path": "d60ff358df2811d8830a0caebeed2f35e40a50d32131cd91bafe0c4f1c943739.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 411, + 504, + 456 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 411, + 504, + 456 + ], + "spans": [ + { + "bbox": [ + 104, + 411, + 504, + 456 + ], + "type": "text", + "content": "Table 9 presents the average accuracy across models grouped by LLM size. The result demonstrates that removing the visual modality (text-only setting) leads to the lowest performance across the majority of tasks. The performance differences among the three input settings allow us to disentangle the impact of textual input, image context (excluding color), and color information itself." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 460, + 504, + 506 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 460, + 504, + 506 + ], + "spans": [ + { + "bbox": [ + 104, + 460, + 504, + 506 + ], + "type": "text", + "content": "Notably, in tasks such as Color Recognition and Object Recognition, the performance gap between text-only and grayscale experiments is relatively small, whereas both are significantly outperformed by the RGB input setting. This suggests that color cues play a substantially more important role than either contextual visual or textual information in these tasks." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 520, + 338, + 534 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 520, + 338, + 534 + ], + "spans": [ + { + "bbox": [ + 105, + 520, + 338, + 534 + ], + "type": "text", + "content": "K Fine-tuning Experiments on ColorBench" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 544, + 506, + 589 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 544, + 506, + 589 + ], + "spans": [ + { + "bbox": [ + 104, + 544, + 506, + 589 + ], + "type": "text", + "content": "We conduct a series of fine-tuning experiments to investigate model adaptation on specialized color-centric tasks. These experiments leverage three synthetic datasets designed for Color Extraction, Color Illusion, and Color Blindness. Using our synthetic data generation pipeline, we curate dedicated training sets for this purpose, with sample counts summarized in Table 10." + } + ] + } + ], + "index": 7 + }, + { + "type": "table", + "bbox": [ + 225, + 619, + 383, + 671 + ], + "blocks": [ + { + "bbox": [ + 141, + 604, + 467, + 616 + ], + "lines": [ + { + "bbox": [ + 141, + 604, + 467, + 616 + ], + "spans": [ + { + "bbox": [ + 141, + 604, + 467, + 616 + ], + "type": "text", + "content": "Table 10: Number of synthetic samples generated for fine-tuning experiments." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 225, + 619, + 383, + 671 + ], + "lines": [ + { + "bbox": [ + 225, + 619, + 383, + 671 + ], + "spans": [ + { + "bbox": [ + 225, + 619, + 383, + 671 + ], + "type": "table", + "html": "
TaskNumber of Samples
Color Extraction2400
Color Illusion2400
Color Blindness2280
", + "image_path": "5170edb4da81e1095363d9d239e153782c4a4ddd277014be36ab7a1d76040d6a.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 680, + 504, + 703 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 680, + 504, + 703 + ], + "spans": [ + { + "bbox": [ + 104, + 680, + 504, + 703 + ], + "type": "text", + "content": "To systematically assess the influence of different model components, we perform a comprehensive ablation study on Qwen2.5-VL-3B and Qwen2.5-VL-7B with the following settings:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 132, + 711, + 186, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 711, + 186, + 723 + ], + "spans": [ + { + "bbox": [ + 132, + 711, + 186, + 723 + ], + "type": "text", + "content": "- MLP only" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "text", + "content": "24" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 23 + }, + { + "para_blocks": [ + { + "bbox": [ + 132, + 72, + 335, + 216 + ], + "type": "list", + "angle": 0, + "index": 6, + "blocks": [ + { + "bbox": [ + 132, + 72, + 223, + 83 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 72, + 223, + 83 + ], + "spans": [ + { + "bbox": [ + 132, + 72, + 223, + 83 + ], + "type": "text", + "content": "- Vision encoder only" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 132, + 99, + 270, + 110 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 99, + 270, + 110 + ], + "spans": [ + { + "bbox": [ + 132, + 99, + 270, + 110 + ], + "type": "text", + "content": "- MLP + Vision encoder (jointly)" + } + ] + } + ], + "index": 1 + }, 
+ { + "bbox": [ + 132, + 125, + 219, + 136 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 125, + 219, + 136 + ], + "spans": [ + { + "bbox": [ + 132, + 125, + 219, + 136 + ], + "type": "text", + "content": "- LLM (LoRA) only" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 151, + 230, + 162 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 151, + 230, + 162 + ], + "spans": [ + { + "bbox": [ + 132, + 151, + 230, + 162 + ], + "type": "text", + "content": "- LLM (LoRA) + MLP" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 132, + 177, + 270, + 188 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 177, + 270, + 188 + ], + "spans": [ + { + "bbox": [ + 132, + 177, + 270, + 188 + ], + "type": "text", + "content": "- LLM (LoRA) + Vision encoder" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 132, + 203, + 335, + 216 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 203, + 335, + 216 + ], + "spans": [ + { + "bbox": [ + 132, + 203, + 335, + 216 + ], + "type": "text", + "content": "- LLM (LoRA) + MLP + Vision encoder (jointly)" + } + ] + } + ], + "index": 5 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 104, + 233, + 504, + 256 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 233, + 504, + 256 + ], + "spans": [ + { + "bbox": [ + 104, + 233, + 504, + 256 + ], + "type": "text", + "content": "For configurations involving the LLM, we adopt the LoRA approach to update a subset of its parameters, while the remaining modules are fully fine-tuned." 
+ } + ] + } + ], + "index": 7 + }, + { + "type": "table", + "bbox": [ + 107, + 312, + 504, + 432 + ], + "blocks": [ + { + "bbox": [ + 104, + 285, + 504, + 308 + ], + "lines": [ + { + "bbox": [ + 104, + 285, + 504, + 308 + ], + "spans": [ + { + "bbox": [ + 104, + 285, + 504, + 308 + ], + "type": "text", + "content": "Table 11: Accuracy (%) of Qwen2.5-VL (3B and 7B) under different training strategies across ColorBench tasks. Bold numbers indicate the best results within each model group." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 107, + 312, + 504, + 432 + ], + "lines": [ + { + "bbox": [ + 107, + 312, + 504, + 432 + ], + "spans": [ + { + "bbox": [ + 107, + 312, + 504, + 432 + ], + "type": "table", + "html": "
ModelTrainable ModulesColor PerceptionColor ReasoningP&R
LLM (LoRA)MLPVisionC'RecogC'ExtractO'RecogC'PropC'CompC'CountO'CountC'IlluC'MimicC'BlindOverall
Qwen2.5-3B72.438.574.043.848.522.625.243.045.724.241.1
71.153.175.350.049.522.526.245.244.325.543.6
73.753.179.246.345.529.427.248.447.125.544.4
75.056.375.347.549.528.425.246.247.128.045.2
71.175.070.145.051.526.527.245.247.127.446.2
69.777.174.040.053.523.532.051.645.737.648.8
71.175.071.446.349.525.527.249.448.631.446.7
72.475.071.445.051.524.332.046.250.028.047.1
Qwen2.5-7B76.349.084.447.552.519.634.044.155.728.746.2
72.442.784.442.559.420.629.145.247.128.745.2
77.659.481.847.556.425.529.151.650.035.651.2
78.961.580.541.355.420.629.147.348.630.147.7
75.078.183.151.360.421.635.052.754.335.652.4
72.482.383.151.357.419.630.151.652.933.151.2
75.083.383.145.056.415.730.153.854.333.151.5
77.682.383.150.055.523.331.152.755.733.151.7
", + "image_path": "9b96657fefa1d52defb48a32a8eb92da5620c7813c002852c292ef28b297a613.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 455, + 504, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 455, + 504, + 521 + ], + "spans": [ + { + "bbox": [ + 104, + 455, + 504, + 521 + ], + "type": "text", + "content": "The evaluation results with finetuned VLMs are shown in Table 11. Overall, models that include LoRA fine-tuning on the LLM component consistently outperform those without it, exhibiting a substantial improvement in overall accuracy. Importantly, the improvements are not confined to the directly targeted tasks (Color Extraction, Color Illusion, Color Blindness). These experiments show that fine-tuning the model on part of tasks also produces notable gains on some ancillary reasoning tasks, including Color Proportion, and Color Comparison." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 526, + 506, + 583 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 526, + 506, + 583 + ], + "spans": [ + { + "bbox": [ + 104, + 526, + 506, + 583 + ], + "type": "text", + "content": "However, the transfer of knowledge is not universally positive. Certain tasks demonstrated limited or even negative performance transfer, indicating that fine-tuning exclusively on specialized color objectives does not guarantee generalization across the full spectrum of color perception and reasoning. This finding underscores that while targeted training enhances specialized abilities, a balanced and robust performance profile necessitates the inclusion of more diverse data and training objectives." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 609, + 230, + 621 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 609, + 230, + 621 + ], + "spans": [ + { + "bbox": [ + 104, + 609, + 230, + 621 + ], + "type": "text", + "content": "L More Visualizations" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 641, + 335, + 653 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 641, + 335, + 653 + ], + "spans": [ + { + "bbox": [ + 104, + 641, + 335, + 653 + ], + "type": "text", + "content": "L.1 VLM Size & Model Performance for Each Task" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 667, + 506, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 667, + 506, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 667, + 506, + 723 + ], + "type": "text", + "content": "Figure 26 to 35 present detailed correlations between the log-scaled sizes of VLM parameters and the performance metrics for each task of Perception and Reasoning Categories. Deeper color represents higher accuracy. Each line represents a model family with the sizes growing from small to large. This visualization clearly shows the correlation between performances and model sizes, larger model leads to higher performance." 
+ } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "type": "text", + "content": "25" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 24 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 106, + 69, + 298, + 186 + ], + "blocks": [ + { + "bbox": [ + 106, + 69, + 298, + 186 + ], + "lines": [ + { + "bbox": [ + 106, + 69, + 298, + 186 + ], + "spans": [ + { + "bbox": [ + 106, + 69, + 298, + 186 + ], + "type": "image", + "image_path": "3f83ce7e7e71f790f9e093962ea0933eb8a6757a7402ab480ba182d30d352441.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 110, + 190, + 294, + 204 + ], + "lines": [ + { + "bbox": [ + 110, + 190, + 294, + 204 + ], + "spans": [ + { + "bbox": [ + 110, + 190, + 294, + 204 + ], + "type": "text", + "content": "Figure 26: Heatmap for Color Recognition." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 312, + 69, + 504, + 186 + ], + "blocks": [ + { + "bbox": [ + 312, + 69, + 504, + 186 + ], + "lines": [ + { + "bbox": [ + 312, + 69, + 504, + 186 + ], + "spans": [ + { + "bbox": [ + 312, + 69, + 504, + 186 + ], + "type": "image", + "image_path": "6429a0ce7abb3003695d788a3416e20bd3119f6c0ebaf408e56e6793e79d84ce.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 318, + 190, + 497, + 204 + ], + "lines": [ + { + "bbox": [ + 318, + 190, + 497, + 204 + ], + "spans": [ + { + "bbox": [ + 318, + 190, + 497, + 204 + ], + "type": "text", + "content": "Figure 27: Heatmap for Color Extraction." 
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 106, + 241, + 298, + 357 + ], + "blocks": [ + { + "bbox": [ + 106, + 241, + 298, + 357 + ], + "lines": [ + { + "bbox": [ + 106, + 241, + 298, + 357 + ], + "spans": [ + { + "bbox": [ + 106, + 241, + 298, + 357 + ], + "type": "image", + "image_path": "84700c8bb9290b42ef38b3914ddeff9007792b24517af4aa1f668cec87cd67a6.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 107, + 363, + 296, + 376 + ], + "lines": [ + { + "bbox": [ + 107, + 363, + 296, + 376 + ], + "spans": [ + { + "bbox": [ + 107, + 363, + 296, + 376 + ], + "type": "text", + "content": "Figure 28: Heatmap for Object Recognition." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 312, + 241, + 504, + 357 + ], + "blocks": [ + { + "bbox": [ + 312, + 241, + 504, + 357 + ], + "lines": [ + { + "bbox": [ + 312, + 241, + 504, + 357 + ], + "spans": [ + { + "bbox": [ + 312, + 241, + 504, + 357 + ], + "type": "image", + "image_path": "7a9b92c734e7a87edf87a504d18d2aa342a3d80761632aa41c1e7ff012e61126.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 318, + 363, + 498, + 376 + ], + "lines": [ + { + "bbox": [ + 318, + 363, + 498, + 376 + ], + "spans": [ + { + "bbox": [ + 318, + 363, + 498, + 376 + ], + "type": "text", + "content": "Figure 29: Heatmap for Color Proportion." 
+ } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 106, + 413, + 298, + 530 + ], + "blocks": [ + { + "bbox": [ + 106, + 413, + 298, + 530 + ], + "lines": [ + { + "bbox": [ + 106, + 413, + 298, + 530 + ], + "spans": [ + { + "bbox": [ + 106, + 413, + 298, + 530 + ], + "type": "image", + "image_path": "53ab8e5968fb097f710c7eea5c3a96eeca54b112f621172f634379c04871c70f.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 109, + 535, + 294, + 548 + ], + "lines": [ + { + "bbox": [ + 109, + 535, + 294, + 548 + ], + "spans": [ + { + "bbox": [ + 109, + 535, + 294, + 548 + ], + "type": "text", + "content": "Figure 30: Heatmap for Color Comparison." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 312, + 413, + 504, + 530 + ], + "blocks": [ + { + "bbox": [ + 312, + 413, + 504, + 530 + ], + "lines": [ + { + "bbox": [ + 312, + 413, + 504, + 530 + ], + "spans": [ + { + "bbox": [ + 312, + 413, + 504, + 530 + ], + "type": "image", + "image_path": "7118bdafa8a32f23b2a2cdd87b2e0125f791fe1d4009abdb46d541f63544ac6b.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 321, + 535, + 494, + 548 + ], + "lines": [ + { + "bbox": [ + 321, + 535, + 494, + 548 + ], + "spans": [ + { + "bbox": [ + 321, + 535, + 494, + 548 + ], + "type": "text", + "content": "Figure 31: Heatmap for Color Counting." 
+ } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 106, + 586, + 297, + 701 + ], + "blocks": [ + { + "bbox": [ + 106, + 586, + 297, + 701 + ], + "lines": [ + { + "bbox": [ + 106, + 586, + 297, + 701 + ], + "spans": [ + { + "bbox": [ + 106, + 586, + 297, + 701 + ], + "type": "image", + "image_path": "f5414f6db50b112cf0f92e69eacd6f077ea8fc62a22e614a4eb4b1939837c066.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 113, + 707, + 291, + 719 + ], + "lines": [ + { + "bbox": [ + 113, + 707, + 291, + 719 + ], + "spans": [ + { + "bbox": [ + 113, + 707, + 291, + 719 + ], + "type": "text", + "content": "Figure 32: Heatmap for Object Counting." + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 312, + 586, + 504, + 701 + ], + "blocks": [ + { + "bbox": [ + 312, + 586, + 504, + 701 + ], + "lines": [ + { + "bbox": [ + 312, + 586, + 504, + 701 + ], + "spans": [ + { + "bbox": [ + 312, + 586, + 504, + 701 + ], + "type": "image", + "image_path": "e2e444cfa3527af494883e988cd0abd80b558f1d182bb536ebe8e991e6a0f6ad.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 324, + 707, + 490, + 719 + ], + "lines": [ + { + "bbox": [ + 324, + 707, + 490, + 719 + ], + "spans": [ + { + "bbox": [ + 324, + 707, + 490, + 719 + ], + "type": "text", + "content": "Figure 33: Heatmap for Color Illusion." 
+ } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_caption" + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "text", + "content": "26" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 25 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 106, + 69, + 298, + 186 + ], + "blocks": [ + { + "bbox": [ + 106, + 69, + 298, + 186 + ], + "lines": [ + { + "bbox": [ + 106, + 69, + 298, + 186 + ], + "spans": [ + { + "bbox": [ + 106, + 69, + 298, + 186 + ], + "type": "image", + "image_path": "ab15d66389f875f3cc3c3133c3751eee7abe2446e0446e125cbf82ed3d4036d8.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 116, + 191, + 287, + 204 + ], + "lines": [ + { + "bbox": [ + 116, + 191, + 287, + 204 + ], + "spans": [ + { + "bbox": [ + 116, + 191, + 287, + 204 + ], + "type": "text", + "content": "Figure 34: Heatmap for Color Mimicry." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 311, + 69, + 504, + 186 + ], + "blocks": [ + { + "bbox": [ + 311, + 69, + 504, + 186 + ], + "lines": [ + { + "bbox": [ + 311, + 69, + 504, + 186 + ], + "spans": [ + { + "bbox": [ + 311, + 69, + 504, + 186 + ], + "type": "image", + "image_path": "4ae0e07916db79850cc8634953680899bd58e1ba441b286aa0600a40cd4334a7.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 320, + 191, + 495, + 204 + ], + "lines": [ + { + "bbox": [ + 320, + 191, + 495, + 204 + ], + "spans": [ + { + "bbox": [ + 320, + 191, + 495, + 204 + ], + "type": "text", + "content": "Figure 35: Heatmap for Color Blindness." 
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 267, + 338, + 278 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 267, + 338, + 278 + ], + "spans": [ + { + "bbox": [ + 104, + 267, + 338, + 278 + ], + "type": "text", + "content": "L.2 Vision Size & Model Performance for Each Task" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 307, + 504, + 374 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 307, + 504, + 374 + ], + "spans": [ + { + "bbox": [ + 104, + 307, + 504, + 374 + ], + "type": "text", + "content": "Figure 36 to 40 show detailed correlations between the log-scaled sizes of vision encoders and the performance metrics for each task of Perception and Reasoning Categories. Colors represent different model families. Models that have the same vision encoder sizes but with different LLM sizes are plotted as different points. Given that the majority of Vision-Language Models (VLMs) utilize a singular type of vision encoder, and that the sizes of these encoders generally range between 300-400M, it becomes challenging to assess the scaling effects within vision encoders." + } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 105, + 428, + 298, + 544 + ], + "blocks": [ + { + "bbox": [ + 105, + 428, + 298, + 544 + ], + "lines": [ + { + "bbox": [ + 105, + 428, + 298, + 544 + ], + "spans": [ + { + "bbox": [ + 105, + 428, + 298, + 544 + ], + "type": "image", + "image_path": "25f940ec0eb0925581bae443b2c3aae4a1fb1ea2333c422de4b697e54d207c5b.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 549, + 299, + 571 + ], + "lines": [ + { + "bbox": [ + 105, + 549, + 299, + 571 + ], + "spans": [ + { + "bbox": [ + 105, + 549, + 299, + 571 + ], + "type": "text", + "content": "Figure 36: The scatter plot for Color Recognition and Color Extraction." 
+ } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 310, + 428, + 503, + 544 + ], + "blocks": [ + { + "bbox": [ + 310, + 428, + 503, + 544 + ], + "lines": [ + { + "bbox": [ + 310, + 428, + 503, + 544 + ], + "spans": [ + { + "bbox": [ + 310, + 428, + 503, + 544 + ], + "type": "image", + "image_path": "45a2901fcb11ba711d9bd570c3bbde21465db2de5ac780ff5d52b54ec7a41ff9.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 310, + 548, + 505, + 571 + ], + "lines": [ + { + "bbox": [ + 310, + 548, + 505, + 571 + ], + "spans": [ + { + "bbox": [ + 310, + 548, + 505, + 571 + ], + "type": "text", + "content": "Figure 37: The scatter plot for Object Recognition and Color Proportion." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 105, + 575, + 298, + 691 + ], + "blocks": [ + { + "bbox": [ + 105, + 575, + 298, + 691 + ], + "lines": [ + { + "bbox": [ + 105, + 575, + 298, + 691 + ], + "spans": [ + { + "bbox": [ + 105, + 575, + 298, + 691 + ], + "type": "image", + "image_path": "a1b41d1272bee26b3739b7e4f2f30fcda33192cafbc666b06df4ea1ddcab1b33.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 696, + 299, + 719 + ], + "lines": [ + { + "bbox": [ + 105, + 696, + 299, + 719 + ], + "spans": [ + { + "bbox": [ + 105, + 696, + 299, + 719 + ], + "type": "text", + "content": "Figure 38: The scatter plot for Color Comparison and Color Counting." 
+ } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 310, + 575, + 503, + 691 + ], + "blocks": [ + { + "bbox": [ + 310, + 575, + 503, + 691 + ], + "lines": [ + { + "bbox": [ + 310, + 575, + 503, + 691 + ], + "spans": [ + { + "bbox": [ + 310, + 575, + 503, + 691 + ], + "type": "image", + "image_path": "c05e6ccf8b74e62f9ce387d772203df9eef31941b4a941aeec61de9694a48bd6.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 310, + 696, + 505, + 719 + ], + "lines": [ + { + "bbox": [ + 310, + 696, + 505, + 719 + ], + "spans": [ + { + "bbox": [ + 310, + 696, + 505, + 719 + ], + "type": "text", + "content": "Figure 39: The scatter plot for Object Counting and Color Illusion." + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "type": "text", + "content": "27" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 26 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 105, + 68, + 299, + 186 + ], + "blocks": [ + { + "bbox": [ + 105, + 68, + 299, + 186 + ], + "lines": [ + { + "bbox": [ + 105, + 68, + 299, + 186 + ], + "spans": [ + { + "bbox": [ + 105, + 68, + 299, + 186 + ], + "type": "image", + "image_path": "b0e9755c8746794e00271b97f98ea952445567fabab20510299d4a93e0b7a407.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 190, + 299, + 212 + ], + "lines": [ + { + "bbox": [ + 105, + 190, + 299, + 212 + ], + "spans": [ + { + "bbox": [ + 105, + 190, + 299, + 212 + ], + "type": "text", + "content": "Figure 40: The scatter plot for Color Mimicry and Color Blindness." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 277, + 348, + 289 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 277, + 348, + 289 + ], + "spans": [ + { + "bbox": [ + 104, + 277, + 348, + 289 + ], + "type": "text", + "content": "L.3 Performance for Each Model Family on Each Task" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 316, + 506, + 340 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 316, + 506, + 340 + ], + "spans": [ + { + "bbox": [ + 104, + 316, + 506, + 340 + ], + "type": "text", + "content": "Figures 41 to 47 illustrate task performance across different models within the same model families. In general, models with more parameters tend to perform better on the majority of tasks." + } + ] + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 106, + 342, + 268, + 499 + ], + "blocks": [ + { + "bbox": [ + 106, + 342, + 268, + 499 + ], + "lines": [ + { + "bbox": [ + 106, + 342, + 268, + 499 + ], + "spans": [ + { + "bbox": [ + 106, + 342, + 268, + 499 + ], + "type": "image", + "image_path": "15517e3c9e23e1341c37406ca32c66703ceeb7ccc18b2d8cec1dde8a6540f1d9.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 503, + 268, + 525 + ], + "lines": [ + { + "bbox": [ + 105, + 503, + 268, + 525 + ], + "spans": [ + { + "bbox": [ + 105, + 503, + 268, + 525 + ], + "type": "text", + "content": "Figure 41: Performance of LLaVA-OV models." 
+ } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 106, + 540, + 267, + 696 + ], + "blocks": [ + { + "bbox": [ + 106, + 540, + 267, + 696 + ], + "lines": [ + { + "bbox": [ + 106, + 540, + 267, + 696 + ], + "spans": [ + { + "bbox": [ + 106, + 540, + 267, + 696 + ], + "type": "image", + "image_path": "55139a24a0398f1a50635bb011eea4dd2d4f541f80a6f0a5595eb6a8d1ed4fa4.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 700, + 267, + 723 + ], + "lines": [ + { + "bbox": [ + 105, + 700, + 267, + 723 + ], + "spans": [ + { + "bbox": [ + 105, + 700, + 267, + 723 + ], + "type": "text", + "content": "Figure 43: Performance of Cambrian models." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 343, + 348, + 504, + 499 + ], + "blocks": [ + { + "bbox": [ + 343, + 348, + 504, + 499 + ], + "lines": [ + { + "bbox": [ + 343, + 348, + 504, + 499 + ], + "spans": [ + { + "bbox": [ + 343, + 348, + 504, + 499 + ], + "type": "image", + "image_path": "aef346c945483778332310a8f57554bf20287e4e50626ad755cbc0fbd4d16ef1.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 342, + 503, + 506, + 525 + ], + "lines": [ + { + "bbox": [ + 342, + 503, + 506, + 525 + ], + "spans": [ + { + "bbox": [ + 342, + 503, + 506, + 525 + ], + "type": "text", + "content": "Figure 42: Performance of LLaVA-NEXT models." 
+ } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 343, + 540, + 504, + 696 + ], + "blocks": [ + { + "bbox": [ + 343, + 540, + 504, + 696 + ], + "lines": [ + { + "bbox": [ + 343, + 540, + 504, + 696 + ], + "spans": [ + { + "bbox": [ + 343, + 540, + 504, + 696 + ], + "type": "image", + "image_path": "d4f47a3cfea74dbcdba6be6cae5c3de1604c855186200d533b4feaf81cebecaa.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 342, + 700, + 506, + 723 + ], + "lines": [ + { + "bbox": [ + 342, + 700, + 506, + 723 + ], + "spans": [ + { + "bbox": [ + 342, + 700, + 506, + 723 + ], + "type": "text", + "content": "Figure 44: Performance of Eagle models." + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "text", + "content": "28" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 27 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 106, + 68, + 267, + 224 + ], + "blocks": [ + { + "bbox": [ + 106, + 68, + 267, + 224 + ], + "lines": [ + { + "bbox": [ + 106, + 68, + 267, + 224 + ], + "spans": [ + { + "bbox": [ + 106, + 68, + 267, + 224 + ], + "type": "image", + "image_path": "16670a54267741e9ab1d271281b1679ab4efd87b62f63112618b4dd4ea1d0cb4.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 229, + 267, + 251 + ], + "lines": [ + { + "bbox": [ + 105, + 229, + 267, + 251 + ], + "spans": [ + { + "bbox": [ + 105, + 229, + 267, + 251 + ], + "type": "text", + "content": "Figure 45: Performance of InternVL2 models." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 106, + 256, + 267, + 413 + ], + "blocks": [ + { + "bbox": [ + 106, + 256, + 267, + 413 + ], + "lines": [ + { + "bbox": [ + 106, + 256, + 267, + 413 + ], + "spans": [ + { + "bbox": [ + 106, + 256, + 267, + 413 + ], + "type": "image", + "image_path": "de976e631cf087e9b98fcbfebdd631aec38341bb046ccdaefd2e46c2c21360a0.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 417, + 266, + 439 + ], + "lines": [ + { + "bbox": [ + 105, + 417, + 266, + 439 + ], + "spans": [ + { + "bbox": [ + 105, + 417, + 266, + 439 + ], + "type": "text", + "content": "Figure 47: Performance of Qwen2.5 models." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 343, + 69, + 504, + 224 + ], + "blocks": [ + { + "bbox": [ + 343, + 69, + 504, + 224 + ], + "lines": [ + { + "bbox": [ + 343, + 69, + 504, + 224 + ], + "spans": [ + { + "bbox": [ + 343, + 69, + 504, + 224 + ], + "type": "image", + "image_path": "5a5024a6c0db75938d1896d978255bbae4667cfb4e6b4ed5c29aec27e99ba6f2.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 342, + 229, + 506, + 251 + ], + "lines": [ + { + "bbox": [ + 342, + 229, + 506, + 251 + ], + "spans": [ + { + "bbox": [ + 342, + 229, + 506, + 251 + ], + "type": "text", + "content": "Figure 46: Performance of InternVL2.5 models." 
+ } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "text", + "content": "29" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 28 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 71, + 208, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 71, + 208, + 85 + ], + "spans": [ + { + "bbox": [ + 105, + 71, + 208, + 85 + ], + "type": "text", + "content": "M Samples Cases" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 105, + 101, + 194, + 113 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 101, + 194, + 113 + ], + "spans": [ + { + "bbox": [ + 105, + 101, + 194, + 113 + ], + "type": "text", + "content": "M.1 Effect of CoT" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 125, + 506, + 173 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 125, + 506, + 173 + ], + "spans": [ + { + "bbox": [ + 104, + 125, + 506, + 173 + ], + "type": "text", + "content": "In this section, we present cases that the answers are influenced by adding reasoning steps for each task. For most of the tasks in COLORBENCH, adding reasoning steps can significantly improve the model performances. The samples cases of Perception and Reasoning categories are shown in Figure 48 to Figure 57. Case for Robustness category is shown in Figure 58." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 162, + 188, + 242, + 200 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 162, + 188, + 242, + 200 + ], + "spans": [ + { + "bbox": [ + 162, + 188, + 242, + 200 + ], + "type": "text", + "content": "Color Recognition" + } + ] + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 118, + 209, + 173, + 251 + ], + "blocks": [ + { + "bbox": [ + 118, + 209, + 173, + 251 + ], + "lines": [ + { + "bbox": [ + 118, + 209, + 173, + 251 + ], + "spans": [ + { + "bbox": [ + 118, + 209, + 173, + 251 + ], + "type": "image", + "image_path": "b0442098f58804ee226a7f7ba18702f450572f8c433ea41eb00f0a4f129914d1.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 370, + 301, + 392 + ], + "lines": [ + { + "bbox": [ + 105, + 370, + 301, + 392 + ], + "spans": [ + { + "bbox": [ + 105, + 370, + 301, + 392 + ], + "type": "text", + "content": "Figure 48: Case with CoT for Color Recognition task." + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 181, + 209, + 290, + 217 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 181, + 209, + 290, + 217 + ], + "spans": [ + { + "bbox": [ + 181, + 209, + 290, + 217 + ], + "type": "text", + "content": "What color does not exist in this image?" 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 182, + 219, + 256, + 225 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 182, + 219, + 256, + 225 + ], + "spans": [ + { + "bbox": [ + 182, + 219, + 256, + 225 + ], + "type": "text", + "content": "A:White B:Purple C:Yellow" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 182, + 229, + 240, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 182, + 229, + 240, + 236 + ], + "spans": [ + { + "bbox": [ + 182, + 229, + 240, + 236 + ], + "type": "text", + "content": "D:Red E:No answer" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 182, + 239, + 201, + 245 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 182, + 239, + 201, + 245 + ], + "spans": [ + { + "bbox": [ + 182, + 239, + 201, + 245 + ], + "type": "text", + "content": "Ans: E" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 175, + 255, + 229, + 262 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 175, + 255, + 229, + 262 + ], + "spans": [ + { + "bbox": [ + 175, + 255, + 229, + 262 + ], + "type": "text", + "content": "Answer Before CoT:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 187, + 270, + 216, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 187, + 270, + 216, + 277 + ], + "spans": [ + { + "bbox": [ + 187, + 270, + 216, + 277 + ], + "type": "text", + "content": "GPT-40:A" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 177, + 285, + 227, + 292 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 177, + 285, + 227, + 292 + ], + "spans": [ + { + "bbox": [ + 177, + 285, + 227, + 292 + ], + "type": "text", + "content": "Answer After CoT:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 111, + 300, + 131, + 306 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 300, + 131, + 306 + ], + "spans": [ + { + "bbox": [ + 111, + 300, + 131, + 306 + ], + "type": "text", + "content": "GPT40:" + 
} + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 111, + 307, + 285, + 319 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 307, + 285, + 319 + ], + "spans": [ + { + "bbox": [ + 111, + 307, + 285, + 319 + ], + "type": "text", + "content": "Let's analyze the image step by step to determine which color does not exist:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 111, + 319, + 290, + 359 + ], + "type": "list", + "angle": 0, + "index": 19, + "blocks": [ + { + "bbox": [ + 111, + 319, + 229, + 327 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 319, + 229, + 327 + ], + "spans": [ + { + "bbox": [ + 111, + 319, + 229, + 327 + ], + "type": "text", + "content": "1. White - The background of the image is white." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 111, + 327, + 223, + 333 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 327, + 223, + 333 + ], + "spans": [ + { + "bbox": [ + 111, + 327, + 223, + 333 + ], + "type": "text", + "content": "2. Purple - The first figure on the left is purple." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 111, + 333, + 238, + 339 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 333, + 238, + 339 + ], + "spans": [ + { + "bbox": [ + 111, + 333, + 238, + 339 + ], + "type": "text", + "content": "3. Yellow - The figure near the front is yellow." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 111, + 340, + 234, + 346 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 340, + 234, + 346 + ], + "spans": [ + { + "bbox": [ + 111, + 340, + 234, + 346 + ], + "type": "text", + "content": "4. Red - The figure in the front right position is red." 
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 111, + 346, + 290, + 359 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 346, + 290, + 359 + ], + "spans": [ + { + "bbox": [ + 111, + 346, + 290, + 359 + ], + "type": "text", + "content": "All the colors listed appear in the image. Therefore, the answer is: (E) No answer" + } + ] + } + ], + "index": 18 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 371, + 188, + 445, + 198 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 371, + 188, + 445, + 198 + ], + "spans": [ + { + "bbox": [ + 371, + 188, + 445, + 198 + ], + "type": "text", + "content": "Color Extraction" + } + ] + } + ], + "index": 21 + }, + { + "type": "image", + "bbox": [ + 326, + 203, + 373, + 253 + ], + "blocks": [ + { + "bbox": [ + 326, + 203, + 373, + 253 + ], + "lines": [ + { + "bbox": [ + 326, + 203, + 373, + 253 + ], + "spans": [ + { + "bbox": [ + 326, + 203, + 373, + 253 + ], + "type": "image", + "image_path": "8192af9e15181e04ba5197f2d80fe008b70cfb88034d5496af2db6433271d90d.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + } + ], + "index": 22 + }, + { + "bbox": [ + 388, + 209, + 495, + 227 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 388, + 209, + 495, + 227 + ], + "spans": [ + { + "bbox": [ + 388, + 209, + 495, + 227 + ], + "type": "text", + "content": "What is the HEX value of the given color in the image?" 
+ } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 388, + 228, + 452, + 246 + ], + "type": "list", + "angle": 0, + "index": 26, + "blocks": [ + { + "bbox": [ + 388, + 228, + 452, + 237 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 388, + 228, + 452, + 237 + ], + "spans": [ + { + "bbox": [ + 388, + 228, + 452, + 237 + ], + "type": "text", + "content": "A: #AD2B86 B: #AD3087" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 388, + 238, + 451, + 246 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 388, + 238, + 451, + 246 + ], + "spans": [ + { + "bbox": [ + 388, + 238, + 451, + 246 + ], + "type": "text", + "content": "C:#AD2D8B D:#050405" + } + ] + } + ], + "index": 25 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 476, + 238, + 495, + 245 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 476, + 238, + 495, + 245 + ], + "spans": [ + { + "bbox": [ + 476, + 238, + 495, + 245 + ], + "type": "text", + "content": "Ans: A" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 381, + 255, + 435, + 262 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 381, + 255, + 435, + 262 + ], + "spans": [ + { + "bbox": [ + 381, + 255, + 435, + 262 + ], + "type": "text", + "content": "Answer Before CoT:" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 394, + 270, + 422, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 394, + 270, + 422, + 277 + ], + "spans": [ + { + "bbox": [ + 394, + 270, + 422, + 277 + ], + "type": "text", + "content": "GPT-40: B" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 383, + 285, + 432, + 292 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 383, + 285, + 432, + 292 + ], + "spans": [ + { + "bbox": [ + 383, + 285, + 432, + 292 + ], + "type": "text", + "content": "Answer After CoT:" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 317, + 300, + 337, + 306 + ], + "type": "title", + "angle": 0, + "lines": [ + { + 
"bbox": [ + 317, + 300, + 337, + 306 + ], + "spans": [ + { + "bbox": [ + 317, + 300, + 337, + 306 + ], + "type": "text", + "content": "GPT40:" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 317, + 307, + 495, + 370 + ], + "type": "list", + "angle": 0, + "index": 41, + "blocks": [ + { + "bbox": [ + 317, + 307, + 495, + 320 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 307, + 495, + 320 + ], + "spans": [ + { + "bbox": [ + 317, + 307, + 495, + 320 + ], + "type": "text", + "content": "To determine the HEX value of the given color, I'll analyze the image and compare it to the provided options:" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 317, + 320, + 457, + 327 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 320, + 457, + 327 + ], + "spans": [ + { + "bbox": [ + 317, + 320, + 457, + 327 + ], + "type": "text", + "content": "1. The color appears to be a shade of magenta or purple." + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 317, + 327, + 415, + 333 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 327, + 415, + 333 + ], + "spans": [ + { + "bbox": [ + 317, + 327, + 415, + 333 + ], + "type": "text", + "content": "2. 
I will now compare it with the options:" + } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 317, + 333, + 353, + 338 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 333, + 353, + 338 + ], + "spans": [ + { + "bbox": [ + 317, + 333, + 353, + 338 + ], + "type": "text", + "content": "-A#AD2B86" + } + ] + } + ], + "index": 35 + }, + { + "bbox": [ + 317, + 339, + 354, + 345 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 339, + 354, + 345 + ], + "spans": [ + { + "bbox": [ + 317, + 339, + 354, + 345 + ], + "type": "text", + "content": "-(B)#AD3087" + } + ] + } + ], + "index": 36 + }, + { + "bbox": [ + 317, + 345, + 354, + 350 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 345, + 354, + 350 + ], + "spans": [ + { + "bbox": [ + 317, + 345, + 354, + 350 + ], + "type": "text", + "content": "(C)#AD2D8B " + } + ] + } + ], + "index": 37 + }, + { + "bbox": [ + 317, + 350, + 352, + 357 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 350, + 352, + 357 + ], + "spans": [ + { + "bbox": [ + 317, + 350, + 352, + 357 + ], + "type": "text", + "content": "(D) #050405" + } + ] + } + ], + "index": 38 + }, + { + "bbox": [ + 317, + 357, + 415, + 364 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 357, + 415, + 364 + ], + "spans": [ + { + "bbox": [ + 317, + 357, + 415, + 364 + ], + "type": "text", + "content": "The color matches option (A) #AD2B86" + } + ] + } + ], + "index": 39 + }, + { + "bbox": [ + 317, + 364, + 405, + 370 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 364, + 405, + 370 + ], + "spans": [ + { + "bbox": [ + 317, + 364, + 405, + 370 + ], + "type": "text", + "content": "Therefore, the correct answer is (A)." 
+ } + ] + } + ], + "index": 40 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 160, + 435, + 244, + 447 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 160, + 435, + 244, + 447 + ], + "spans": [ + { + "bbox": [ + 160, + 435, + 244, + 447 + ], + "type": "text", + "content": "Object Recognition" + } + ] + } + ], + "index": 43 + }, + { + "type": "image", + "bbox": [ + 119, + 451, + 167, + 499 + ], + "blocks": [ + { + "bbox": [ + 119, + 451, + 167, + 499 + ], + "lines": [ + { + "bbox": [ + 119, + 451, + 167, + 499 + ], + "spans": [ + { + "bbox": [ + 119, + 451, + 167, + 499 + ], + "type": "image", + "image_path": "598ea378274d0f35eee2414513c0a6c3c6ea1f6afb599e519166d9d44be6d90a.jpg" + } + ] + } + ], + "index": 44, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 609, + 299, + 632 + ], + "lines": [ + { + "bbox": [ + 105, + 609, + 299, + 632 + ], + "spans": [ + { + "bbox": [ + 105, + 609, + 299, + 632 + ], + "type": "text", + "content": "Figure 50: Case with CoT for Object Recognition task." + } + ] + } + ], + "index": 61, + "angle": 0, + "type": "image_caption" + } + ], + "index": 44 + }, + { + "bbox": [ + 182, + 456, + 280, + 463 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 182, + 456, + 280, + 463 + ], + "spans": [ + { + "bbox": [ + 182, + 456, + 280, + 463 + ], + "type": "text", + "content": "Which object has a color of yellow in" + } + ] + } + ], + "index": 45 + }, + { + "bbox": [ + 182, + 466, + 214, + 473 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 182, + 466, + 214, + 473 + ], + "spans": [ + { + "bbox": [ + 182, + 466, + 214, + 473 + ], + "type": "text", + "content": "this image?" 
+ } + ] + } + ], + "index": 46 + }, + { + "bbox": [ + 182, + 475, + 251, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 182, + 475, + 251, + 483 + ], + "spans": [ + { + "bbox": [ + 182, + 475, + 251, + 483 + ], + "type": "text", + "content": "A:Apple B:Snowman" + } + ] + } + ], + "index": 47 + }, + { + "bbox": [ + 182, + 485, + 250, + 492 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 182, + 485, + 250, + 492 + ], + "spans": [ + { + "bbox": [ + 182, + 485, + 250, + 492 + ], + "type": "text", + "content": "C: Cucumber D: Umbrella" + } + ] + } + ], + "index": 48 + }, + { + "bbox": [ + 269, + 486, + 289, + 492 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 269, + 486, + 289, + 492 + ], + "spans": [ + { + "bbox": [ + 269, + 486, + 289, + 492 + ], + "type": "text", + "content": "Ans: A" + } + ] + } + ], + "index": 49 + }, + { + "bbox": [ + 175, + 502, + 229, + 509 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 175, + 502, + 229, + 509 + ], + "spans": [ + { + "bbox": [ + 175, + 502, + 229, + 509 + ], + "type": "text", + "content": "Answer Before CoT:" + } + ] + } + ], + "index": 50 + }, + { + "bbox": [ + 188, + 517, + 216, + 524 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 188, + 517, + 216, + 524 + ], + "spans": [ + { + "bbox": [ + 188, + 517, + 216, + 524 + ], + "type": "text", + "content": "Gemini: D" + } + ] + } + ], + "index": 51 + }, + { + "bbox": [ + 177, + 532, + 227, + 538 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 177, + 532, + 227, + 538 + ], + "spans": [ + { + "bbox": [ + 177, + 532, + 227, + 538 + ], + "type": "text", + "content": "Answer After CoT:" + } + ] + } + ], + "index": 52 + }, + { + "bbox": [ + 111, + 547, + 132, + 552 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 547, + 132, + 552 + ], + "spans": [ + { + "bbox": [ + 111, + 547, + 132, + 552 + ], + "type": "text", + "content": 
"Gemini:" + } + ] + } + ], + "index": 53 + }, + { + "bbox": [ + 111, + 553, + 170, + 559 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 553, + 170, + 559 + ], + "spans": [ + { + "bbox": [ + 111, + 553, + 170, + 559 + ], + "type": "text", + "content": "Let's analyze the image." + } + ] + } + ], + "index": 54 + }, + { + "bbox": [ + 111, + 560, + 259, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 560, + 259, + 567 + ], + "spans": [ + { + "bbox": [ + 111, + 560, + 259, + 567 + ], + "type": "text", + "content": "(A) The apple has multiple colors like red, green, and yellow." + } + ] + } + ], + "index": 55 + }, + { + "bbox": [ + 111, + 567, + 176, + 573 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 567, + 176, + 573 + ], + "spans": [ + { + "bbox": [ + 111, + 567, + 176, + 573 + ], + "type": "text", + "content": "(B) The snowman is white." + } + ] + } + ], + "index": 56 + }, + { + "bbox": [ + 111, + 573, + 176, + 578 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 573, + 176, + 578 + ], + "spans": [ + { + "bbox": [ + 111, + 573, + 176, + 578 + ], + "type": "text", + "content": "(C) The cucumber is green." + } + ] + } + ], + "index": 57 + }, + { + "bbox": [ + 111, + 579, + 201, + 586 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 579, + 201, + 586 + ], + "spans": [ + { + "bbox": [ + 111, + 579, + 201, + 586 + ], + "type": "text", + "content": "(D) The umbrella has multiple colors." + } + ] + } + ], + "index": 58 + }, + { + "bbox": [ + 111, + 586, + 277, + 592 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 586, + 277, + 592 + ], + "spans": [ + { + "bbox": [ + 111, + 586, + 277, + 592 + ], + "type": "text", + "content": "Therefore, the correct option is apple, since it can have yellow color." 
+ } + ] + } + ], + "index": 59 + }, + { + "bbox": [ + 111, + 593, + 119, + 599 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 593, + 119, + 599 + ], + "spans": [ + { + "bbox": [ + 111, + 593, + 119, + 599 + ], + "type": "text", + "content": "(A)" + } + ] + } + ], + "index": 60 + }, + { + "bbox": [ + 370, + 436, + 445, + 447 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 370, + 436, + 445, + 447 + ], + "spans": [ + { + "bbox": [ + 370, + 436, + 445, + 447 + ], + "type": "text", + "content": "Color Proportion" + } + ] + } + ], + "index": 62 + }, + { + "type": "image", + "bbox": [ + 318, + 452, + 341, + 475 + ], + "blocks": [ + { + "bbox": [ + 310, + 382, + 505, + 415 + ], + "lines": [ + { + "bbox": [ + 310, + 382, + 505, + 415 + ], + "spans": [ + { + "bbox": [ + 310, + 382, + 505, + 415 + ], + "type": "text", + "content": "Figure 49: Case with CoT for Color Extraction task. Option backgrounds correspond to their color codes." + } + ] + } + ], + "index": 42, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 318, + 452, + 341, + 475 + ], + "lines": [ + { + "bbox": [ + 318, + 452, + 341, + 475 + ], + "spans": [ + { + "bbox": [ + 318, + 452, + 341, + 475 + ], + "type": "image", + "image_path": "2e375caac04c7901ff50997c42a0cd1dd1778986aa8a9a21e1b4d410923a35d9.jpg" + } + ] + } + ], + "index": 63, + "angle": 0, + "type": "image_body" + } + ], + "index": 63 + }, + { + "type": "image", + "bbox": [ + 345, + 452, + 383, + 476 + ], + "blocks": [ + { + "bbox": [ + 345, + 452, + 383, + 476 + ], + "lines": [ + { + "bbox": [ + 345, + 452, + 383, + 476 + ], + "spans": [ + { + "bbox": [ + 345, + 452, + 383, + 476 + ], + "type": "image", + "image_path": "f6adbdd4e43b49dcc7349a16ff5fe996e8ccd0d596878b5fd99f8e3e39b2175d.jpg" + } + ] + } + ], + "index": 64, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 310, + 701, + 506, + 723 + ], + "lines": [ + { + "bbox": [ + 310, + 701, + 506, + 723 + ], + "spans": [ + { + 
"bbox": [ + 310, + 701, + 506, + 723 + ], + "type": "text", + "content": "Figure 51: Case with CoT for Color Proportion task." + } + ] + } + ], + "index": 86, + "angle": 0, + "type": "image_caption" + } + ], + "index": 64 + }, + { + "bbox": [ + 388, + 456, + 492, + 473 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 388, + 456, + 492, + 473 + ], + "spans": [ + { + "bbox": [ + 388, + 456, + 492, + 473 + ], + "type": "text", + "content": "What is closest to the proportion of the color white in the flag?" + } + ] + } + ], + "index": 65 + }, + { + "bbox": [ + 388, + 475, + 437, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 388, + 475, + 437, + 483 + ], + "spans": [ + { + "bbox": [ + 388, + 475, + 437, + 483 + ], + "type": "text", + "content": "A:5% B:10%" + } + ] + } + ], + "index": 66 + }, + { + "bbox": [ + 388, + 485, + 438, + 492 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 388, + 485, + 438, + 492 + ], + "spans": [ + { + "bbox": [ + 388, + 485, + 438, + 492 + ], + "type": "text", + "content": "C:15% D:20%" + } + ] + } + ], + "index": 67 + }, + { + "bbox": [ + 476, + 486, + 495, + 492 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 476, + 486, + 495, + 492 + ], + "spans": [ + { + "bbox": [ + 476, + 486, + 495, + 492 + ], + "type": "text", + "content": "Ans: D" + } + ] + } + ], + "index": 68 + }, + { + "bbox": [ + 381, + 502, + 435, + 509 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 381, + 502, + 435, + 509 + ], + "spans": [ + { + "bbox": [ + 381, + 502, + 435, + 509 + ], + "type": "text", + "content": "Answer Before CoT:" + } + ] + } + ], + "index": 69 + }, + { + "bbox": [ + 394, + 517, + 421, + 523 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 394, + 517, + 421, + 523 + ], + "spans": [ + { + "bbox": [ + 394, + 517, + 421, + 523 + ], + "type": "text", + "content": "GPT40:C" + } + ] + } + ], + "index": 70 + }, + { + "bbox": [ + 383, 
+ 532, + 432, + 538 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 383, + 532, + 432, + 538 + ], + "spans": [ + { + "bbox": [ + 383, + 532, + 432, + 538 + ], + "type": "text", + "content": "Answer After CoT:" + } + ] + } + ], + "index": 71 + }, + { + "bbox": [ + 317, + 547, + 337, + 554 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 547, + 337, + 554 + ], + "spans": [ + { + "bbox": [ + 317, + 547, + 337, + 554 + ], + "type": "text", + "content": "GPT40:" + } + ] + } + ], + "index": 72 + }, + { + "bbox": [ + 317, + 554, + 498, + 693 + ], + "type": "list", + "angle": 0, + "index": 85, + "blocks": [ + { + "bbox": [ + 317, + 554, + 440, + 560 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 554, + 440, + 560 + ], + "spans": [ + { + "bbox": [ + 317, + 554, + 440, + 560 + ], + "type": "text", + "content": "To find the proportion of the color white in the flag:" + } + ] + } + ], + "index": 73 + }, + { + "bbox": [ + 317, + 560, + 490, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 560, + 490, + 567 + ], + "spans": [ + { + "bbox": [ + 317, + 560, + 490, + 567 + ], + "type": "text", + "content": "1. Analyze the Flag's Design: The flag has a white Scandinavian cross." + } + ] + } + ], + "index": 74 + }, + { + "bbox": [ + 317, + 567, + 440, + 573 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 567, + 440, + 573 + ], + "spans": [ + { + "bbox": [ + 317, + 567, + 440, + 573 + ], + "type": "text", + "content": "This cross consists of a vertical and horizontal bar." + } + ] + } + ], + "index": 75 + }, + { + "bbox": [ + 317, + 574, + 490, + 593 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 574, + 490, + 593 + ], + "spans": [ + { + "bbox": [ + 317, + 574, + 490, + 593 + ], + "type": "text", + "content": "2. 
Estimate the Total Area of the Flag: Assume the flag is divided into a grid of even squares, which makes it easier to estimate the area proportions visually." + } + ] + } + ], + "index": 76 + }, + { + "bbox": [ + 317, + 594, + 490, + 620 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 594, + 490, + 620 + ], + "spans": [ + { + "bbox": [ + 317, + 594, + 490, + 620 + ], + "type": "text", + "content": "3. Estimate the Area of the White Cross: The white cross occupies approximately one-fifth in width and one-third in height of the flag when considering both the horizontal and vertical bars. Since both bars overlap in the middle, the white area is slightly less than" + } + ] + } + ], + "index": 77 + }, + { + "bbox": [ + 317, + 620, + 360, + 626 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 620, + 360, + 626 + ], + "spans": [ + { + "bbox": [ + 317, + 620, + 360, + 626 + ], + "type": "text", + "content": "a simple addition." + } + ] + } + ], + "index": 78 + }, + { + "bbox": [ + 317, + 627, + 407, + 632 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 627, + 407, + 632 + ], + "spans": [ + { + "bbox": [ + 317, + 627, + 407, + 632 + ], + "type": "text", + "content": "4. Calculate Approximate Proportion:" + } + ] + } + ], + "index": 79 + }, + { + "bbox": [ + 317, + 633, + 498, + 646 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 633, + 498, + 646 + ], + "spans": [ + { + "bbox": [ + 317, + 633, + 498, + 646 + ], + "type": "text", + "content": "- The horizontal and vertical bars together approximately cover the central columns and rows. 
Since they overlap, a simple" + } + ] + } + ], + "index": 80 + }, + { + "bbox": [ + 317, + 647, + 485, + 658 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 647, + 485, + 658 + ], + "spans": [ + { + "bbox": [ + 317, + 647, + 485, + 658 + ], + "type": "text", + "content": "estimation shows the white area as less than a third of the flag's total area." + } + ] + } + ], + "index": 81 + }, + { + "bbox": [ + 317, + 659, + 488, + 673 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 659, + 488, + 673 + ], + "spans": [ + { + "bbox": [ + 317, + 659, + 488, + 673 + ], + "type": "text", + "content": "- A reasonable estimation of the white area compared to the red areas suggests a proportion closer to " + }, + { + "bbox": [ + 317, + 659, + 488, + 673 + ], + "type": "inline_equation", + "content": "20\\%" + }, + { + "bbox": [ + 317, + 659, + 488, + 673 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 82 + }, + { + "bbox": [ + 317, + 673, + 497, + 685 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 673, + 497, + 685 + ], + "spans": [ + { + "bbox": [ + 317, + 673, + 497, + 685 + ], + "type": "text", + "content": "5. Conclusion: Based on the above estimations, a plausible estimation for the proportion of white in the flag is closest to " + }, + { + "bbox": [ + 317, + 673, + 497, + 685 + ], + "type": "inline_equation", + "content": "20\\%" + }, + { + "bbox": [ + 317, + 673, + 497, + 685 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 83 + }, + { + "bbox": [ + 317, + 686, + 400, + 693 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 686, + 400, + 693 + ], + "spans": [ + { + "bbox": [ + 317, + 686, + 400, + 693 + ], + "type": "text", + "content": "Therefore, the answer is (D) " + }, + { + "bbox": [ + 317, + 686, + 400, + 693 + ], + "type": "inline_equation", + "content": "20\\%" + }, + { + "bbox": [ + 317, + 686, + 400, + 693 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 84 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 312, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 312, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 312, + 750 + ], + "type": "text", + "content": "30" + } + ] + } + ], + "index": 87 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 29 + }, + { + "para_blocks": [ + { + "bbox": [ + 161, + 86, + 242, + 98 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 161, + 86, + 242, + 98 + ], + "spans": [ + { + "bbox": [ + 161, + 86, + 242, + 98 + ], + "type": "text", + "content": "Color Comparison" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 111, + 122, + 264, + 129 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 122, + 264, + 129 + ], + "spans": [ + { + "bbox": [ + 111, + 122, + 264, + 129 + ], + "type": "text", + "content": "From left to right, which strip has the third reddest color?" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 111, + 132, + 152, + 139 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 132, + 152, + 139 + ], + "spans": [ + { + "bbox": [ + 111, + 132, + 152, + 139 + ], + "type": "text", + "content": "A: The first one." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 162, + 132, + 210, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 162, + 132, + 210, + 138 + ], + "spans": [ + { + "bbox": [ + 162, + 132, + 210, + 138 + ], + "type": "text", + "content": "B: The second one." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 111, + 142, + 154, + 148 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 142, + 154, + 148 + ], + "spans": [ + { + "bbox": [ + 111, + 142, + 154, + 148 + ], + "type": "text", + "content": "C: The third one." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 162, + 142, + 208, + 148 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 162, + 142, + 208, + 148 + ], + "spans": [ + { + "bbox": [ + 162, + 142, + 208, + 148 + ], + "type": "text", + "content": "D: The fourth one" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 271, + 142, + 290, + 148 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 271, + 142, + 290, + 148 + ], + "spans": [ + { + "bbox": [ + 271, + 142, + 290, + 148 + ], + "type": "text", + "content": "Ans: C" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 175, + 153, + 229, + 160 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 175, + 153, + 229, + 160 + ], + "spans": [ + { + "bbox": [ + 175, + 153, + 229, + 160 + ], + "type": "text", + "content": "Answer Before CoT:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 189, + 168, + 216, + 175 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 189, + 168, + 216, + 175 + ], + "spans": [ + { + "bbox": [ + 189, + 168, + 216, + 175 + ], + "type": "text", + "content": "GPT4o: A" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 177, + 182, + 227, + 190 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 177, + 182, + 227, + 190 + ], + "spans": [ + { + "bbox": [ + 177, + 182, + 227, + 190 + ], + "type": "text", + "content": "Answer After CoT:" 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 111, + 198, + 132, + 204 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 198, + 132, + 204 + ], + "spans": [ + { + "bbox": [ + 111, + 198, + 132, + 204 + ], + "type": "text", + "content": "GPT40:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 111, + 205, + 288, + 291 + ], + "type": "list", + "angle": 0, + "index": 21, + "blocks": [ + { + "bbox": [ + 111, + 205, + 283, + 217 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 205, + 283, + 217 + ], + "spans": [ + { + "bbox": [ + 111, + 205, + 283, + 217 + ], + "type": "text", + "content": "To determine which strip has the third reddest color, let's analyze each strip:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 111, + 218, + 235, + 224 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 218, + 235, + 224 + ], + "spans": [ + { + "bbox": [ + 111, + 218, + 235, + 224 + ], + "type": "text", + "content": "1. First strip: An orange color, which has some red." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 111, + 225, + 248, + 232 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 225, + 248, + 232 + ], + "spans": [ + { + "bbox": [ + 111, + 225, + 248, + 232 + ], + "type": "text", + "content": "2. Second strip: A cream color, which has little to no red." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 111, + 232, + 244, + 238 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 232, + 244, + 238 + ], + "spans": [ + { + "bbox": [ + 111, + 232, + 244, + 238 + ], + "type": "text", + "content": "3. Third strip: A light beige color, with a hint of red." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 111, + 239, + 235, + 244 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 239, + 235, + 244 + ], + "spans": [ + { + "bbox": [ + 111, + 239, + 235, + 244 + ], + "type": "text", + "content": "4. 
Fourth strip: A deep red color, with the most red." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 111, + 245, + 288, + 257 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 245, + 288, + 257 + ], + "spans": [ + { + "bbox": [ + 111, + 245, + 288, + 257 + ], + "type": "text", + "content": "Based on these observations, the order of redness from most to least is: 1. Fourth strip" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 111, + 258, + 141, + 264 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 258, + 141, + 264 + ], + "spans": [ + { + "bbox": [ + 111, + 258, + 141, + 264 + ], + "type": "text", + "content": "2. First strip" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 111, + 265, + 141, + 271 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 265, + 141, + 271 + ], + "spans": [ + { + "bbox": [ + 111, + 265, + 141, + 271 + ], + "type": "text", + "content": "3. Third strip" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 111, + 271, + 148, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 271, + 148, + 277 + ], + "spans": [ + { + "bbox": [ + 111, + 271, + 148, + 277 + ], + "type": "text", + "content": "4. Second strip" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 111, + 278, + 276, + 291 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 278, + 276, + 291 + ], + "spans": [ + { + "bbox": [ + 111, + 278, + 276, + 291 + ], + "type": "text", + "content": "Therefore, the strip with the third reddest color is the third strip. 
The correct option is (C)" + } + ] + } + ], + "index": 20 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 166, + 342, + 238, + 354 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 166, + 342, + 238, + 354 + ], + "spans": [ + { + "bbox": [ + 166, + 342, + 238, + 354 + ], + "type": "text", + "content": "Object Counting" + } + ] + } + ], + "index": 23 + }, + { + "type": "image", + "bbox": [ + 120, + 357, + 169, + 406 + ], + "blocks": [ + { + "bbox": [ + 105, + 300, + 299, + 320 + ], + "lines": [ + { + "bbox": [ + 105, + 300, + 299, + 320 + ], + "spans": [ + { + "bbox": [ + 105, + 300, + 299, + 320 + ], + "type": "text", + "content": "Figure 52: Case with CoT for Color Comparison task." + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 120, + 357, + 169, + 406 + ], + "lines": [ + { + "bbox": [ + 120, + 357, + 169, + 406 + ], + "spans": [ + { + "bbox": [ + 120, + 357, + 169, + 406 + ], + "type": "image", + "image_path": "cbd2930989e81297795f38a8d335c4f0e436114d40ecacf7ec8c73899c6d3fd2.jpg" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_body" + } + ], + "index": 24 + }, + { + "bbox": [ + 182, + 363, + 290, + 380 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 182, + 363, + 290, + 380 + ], + "spans": [ + { + "bbox": [ + 182, + 363, + 290, + 380 + ], + "type": "text", + "content": "How many green strawberries are on the tree in this image?" 
+ } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 182, + 382, + 250, + 389 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 182, + 382, + 250, + 389 + ], + "spans": [ + { + "bbox": [ + 182, + 382, + 250, + 389 + ], + "type": "text", + "content": "A:11 B:13 C:18" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 182, + 392, + 227, + 399 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 182, + 392, + 227, + 399 + ], + "spans": [ + { + "bbox": [ + 182, + 392, + 227, + 399 + ], + "type": "text", + "content": "D:12 E:17" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 271, + 392, + 290, + 399 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 271, + 392, + 290, + 399 + ], + "spans": [ + { + "bbox": [ + 271, + 392, + 290, + 399 + ], + "type": "text", + "content": "Ans: D" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 175, + 408, + 229, + 415 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 175, + 408, + 229, + 415 + ], + "spans": [ + { + "bbox": [ + 175, + 408, + 229, + 415 + ], + "type": "text", + "content": "Answer Before CoT:" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 188, + 423, + 215, + 430 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 188, + 423, + 215, + 430 + ], + "spans": [ + { + "bbox": [ + 188, + 423, + 215, + 430 + ], + "type": "text", + "content": "GPT4o: B" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 177, + 437, + 227, + 445 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 177, + 437, + 227, + 445 + ], + "spans": [ + { + "bbox": [ + 177, + 437, + 227, + 445 + ], + "type": "text", + "content": "Answer After CoT:" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 111, + 453, + 132, + 459 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 453, + 132, + 459 + ], + "spans": [ + { + "bbox": [ + 111, + 453, + 132, + 459 + ], + "type": "text", + "content": "GPT40:" + } + ] + } + ], 
+ "index": 32 + }, + { + "bbox": [ + 111, + 460, + 283, + 473 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 460, + 283, + 473 + ], + "spans": [ + { + "bbox": [ + 111, + 460, + 283, + 473 + ], + "type": "text", + "content": "To determine how many green strawberries are on the tree, let's count each visible green one." + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 111, + 474, + 284, + 486 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 474, + 284, + 486 + ], + "spans": [ + { + "bbox": [ + 111, + 474, + 284, + 486 + ], + "type": "text", + "content": "1. Start from the left side of the tree and move toward the right, looking from top to bottom." + } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 111, + 487, + 259, + 494 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 487, + 259, + 494 + ], + "spans": [ + { + "bbox": [ + 111, + 487, + 259, + 494 + ], + "type": "text", + "content": "2. Carefully look for strawberries that are a solid green color." + } + ] + } + ], + "index": 35 + }, + { + "bbox": [ + 111, + 494, + 259, + 500 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 494, + 259, + 500 + ], + "spans": [ + { + "bbox": [ + 111, + 494, + 259, + 500 + ], + "type": "text", + "content": "By doing this, I can count the green strawberries on the tree." + } + ] + } + ], + "index": 36 + }, + { + "bbox": [ + 111, + 501, + 145, + 506 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 501, + 145, + 506 + ], + "spans": [ + { + "bbox": [ + 111, + 501, + 145, + 506 + ], + "type": "text", + "content": "After counting" + } + ] + } + ], + "index": 37 + }, + { + "bbox": [ + 111, + 507, + 192, + 513 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 507, + 192, + 513 + ], + "spans": [ + { + "bbox": [ + 111, + 507, + 192, + 513 + ], + "type": "text", + "content": "There are 12 green strawberries." 
+ } + ] + } + ], + "index": 38 + }, + { + "bbox": [ + 111, + 514, + 181, + 520 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 514, + 181, + 520 + ], + "spans": [ + { + "bbox": [ + 111, + 514, + 181, + 520 + ], + "type": "text", + "content": "The correct choice is (D) 12." + } + ] + } + ], + "index": 39 + }, + { + "bbox": [ + 374, + 86, + 442, + 98 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 374, + 86, + 442, + 98 + ], + "spans": [ + { + "bbox": [ + 374, + 86, + 442, + 98 + ], + "type": "text", + "content": "Color Counting" + } + ] + } + ], + "index": 41 + }, + { + "type": "image", + "bbox": [ + 314, + 106, + 395, + 144 + ], + "blocks": [ + { + "bbox": [ + 314, + 106, + 395, + 144 + ], + "lines": [ + { + "bbox": [ + 314, + 106, + 395, + 144 + ], + "spans": [ + { + "bbox": [ + 314, + 106, + 395, + 144 + ], + "type": "image", + "image_path": "3f61af26b330eccf982ac1be09cc7376167c7cad9084f8470d0dbdb3fba59c06.jpg" + } + ] + } + ], + "index": 42, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 311, + 279, + 504, + 300 + ], + "lines": [ + { + "bbox": [ + 311, + 279, + 504, + 300 + ], + "spans": [ + { + "bbox": [ + 311, + 279, + 504, + 300 + ], + "type": "text", + "content": "Figure 53: Case with CoT for Color Counting task." + } + ] + } + ], + "index": 61, + "angle": 0, + "type": "image_caption" + } + ], + "index": 42 + }, + { + "bbox": [ + 399, + 107, + 489, + 125 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 399, + 107, + 489, + 125 + ], + "spans": [ + { + "bbox": [ + 399, + 107, + 489, + 125 + ], + "type": "text", + "content": "How many colors are there in this image?" 
+ } + ] + } + ], + "index": 43 + }, + { + "bbox": [ + 399, + 126, + 436, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 399, + 126, + 436, + 133 + ], + "spans": [ + { + "bbox": [ + 399, + 126, + 436, + 133 + ], + "type": "text", + "content": "A:5 B:6" + } + ] + } + ], + "index": 44 + }, + { + "bbox": [ + 399, + 137, + 436, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 399, + 137, + 436, + 144 + ], + "spans": [ + { + "bbox": [ + 399, + 137, + 436, + 144 + ], + "type": "text", + "content": "C:7 D:8" + } + ] + } + ], + "index": 45 + }, + { + "bbox": [ + 473, + 137, + 492, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 473, + 137, + 492, + 144 + ], + "spans": [ + { + "bbox": [ + 473, + 137, + 492, + 144 + ], + "type": "text", + "content": "Ans: B" + } + ] + } + ], + "index": 46 + }, + { + "bbox": [ + 381, + 153, + 435, + 160 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 381, + 153, + 435, + 160 + ], + "spans": [ + { + "bbox": [ + 381, + 153, + 435, + 160 + ], + "type": "text", + "content": "Answer Before CoT:" + } + ] + } + ], + "index": 47 + }, + { + "bbox": [ + 394, + 168, + 422, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 394, + 168, + 422, + 175 + ], + "spans": [ + { + "bbox": [ + 394, + 168, + 422, + 175 + ], + "type": "text", + "content": "GPT4o: C" + } + ] + } + ], + "index": 48 + }, + { + "bbox": [ + 383, + 182, + 432, + 190 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 383, + 182, + 432, + 190 + ], + "spans": [ + { + "bbox": [ + 383, + 182, + 432, + 190 + ], + "type": "text", + "content": "Answer After CoT:" + } + ] + } + ], + "index": 49 + }, + { + "bbox": [ + 317, + 198, + 337, + 204 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 198, + 337, + 204 + ], + "spans": [ + { + "bbox": [ + 317, + 198, + 337, + 204 + ], + "type": "text", + "content": "GPT40:" + } + ] + } + ], + 
"index": 50 + }, + { + "bbox": [ + 317, + 205, + 497, + 270 + ], + "type": "list", + "angle": 0, + "index": 60, + "blocks": [ + { + "bbox": [ + 317, + 205, + 497, + 217 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 205, + 497, + 217 + ], + "spans": [ + { + "bbox": [ + 317, + 205, + 497, + 217 + ], + "type": "text", + "content": "To determine the number of colors in the image, let's analyze the different areas:" + } + ] + } + ], + "index": 51 + }, + { + "bbox": [ + 317, + 217, + 416, + 223 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 217, + 416, + 223 + ], + "spans": [ + { + "bbox": [ + 317, + 217, + 416, + 223 + ], + "type": "text", + "content": "1. Red - used for \"Homo sapiens\" areas." + } + ] + } + ], + "index": 52 + }, + { + "bbox": [ + 317, + 224, + 444, + 231 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 224, + 444, + 231 + ], + "spans": [ + { + "bbox": [ + 317, + 224, + 444, + 231 + ], + "type": "text", + "content": "2. Yellow - used for \"Homo neanderthalensis\" areas" + } + ] + } + ], + "index": 53 + }, + { + "bbox": [ + 317, + 232, + 414, + 237 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 232, + 414, + 237 + ], + "spans": [ + { + "bbox": [ + 317, + 232, + 414, + 237 + ], + "type": "text", + "content": "3. Blue - used for the oceans and seas." + } + ] + } + ], + "index": 54 + }, + { + "bbox": [ + 317, + 238, + 466, + 244 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 238, + 466, + 244 + ], + "spans": [ + { + "bbox": [ + 317, + 238, + 466, + 244 + ], + "type": "text", + "content": "4. White - used for land areas not highlighted by other colors." + } + ] + } + ], + "index": 55 + }, + { + "bbox": [ + 317, + 245, + 407, + 251 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 245, + 407, + 251 + ], + "spans": [ + { + "bbox": [ + 317, + 245, + 407, + 251 + ], + "type": "text", + "content": "5. 
Black - used for text and numbers." + } + ] + } + ], + "index": 56 + }, + { + "bbox": [ + 317, + 251, + 447, + 258 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 251, + 447, + 258 + ], + "spans": [ + { + "bbox": [ + 317, + 251, + 447, + 258 + ], + "type": "text", + "content": "6. Brown - used for arrows indicating migration paths." + } + ] + } + ], + "index": 57 + }, + { + "bbox": [ + 317, + 258, + 413, + 264 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 258, + 413, + 264 + ], + "spans": [ + { + "bbox": [ + 317, + 258, + 413, + 264 + ], + "type": "text", + "content": "Thus, there are six colors in the image." + } + ] + } + ], + "index": 58 + }, + { + "bbox": [ + 317, + 265, + 407, + 270 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 265, + 407, + 270 + ], + "spans": [ + { + "bbox": [ + 317, + 265, + 407, + 270 + ], + "type": "text", + "content": "Therefore, the correct option is (B) 6." + } + ] + } + ], + "index": 59 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 378, + 342, + 438, + 352 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 378, + 342, + 438, + 352 + ], + "spans": [ + { + "bbox": [ + 378, + 342, + 438, + 352 + ], + "type": "text", + "content": "Color Illusion" + } + ] + } + ], + "index": 62 + }, + { + "type": "image", + "bbox": [ + 358, + 359, + 458, + 379 + ], + "blocks": [ + { + "bbox": [ + 105, + 529, + 298, + 550 + ], + "lines": [ + { + "bbox": [ + 105, + 529, + 298, + 550 + ], + "spans": [ + { + "bbox": [ + 105, + 529, + 298, + 550 + ], + "type": "text", + "content": "Figure 54: Case with CoT for Object Counting task." 
+ } + ] + } + ], + "index": 40, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 358, + 359, + 458, + 379 + ], + "lines": [ + { + "bbox": [ + 358, + 359, + 458, + 379 + ], + "spans": [ + { + "bbox": [ + 358, + 359, + 458, + 379 + ], + "type": "image", + "image_path": "963feca4a2fa06242d36731abbf680566ae08c2ccdeaf4f5f6860b37ec40d334.jpg" + } + ] + } + ], + "index": 63, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 311, + 533, + 504, + 555 + ], + "lines": [ + { + "bbox": [ + 311, + 533, + 504, + 555 + ], + "spans": [ + { + "bbox": [ + 311, + 533, + 504, + 555 + ], + "type": "text", + "content": "Figure 55: Case with CoT for Color Illusion task." + } + ] + } + ], + "index": 76, + "angle": 0, + "type": "image_caption" + } + ], + "index": 63 + }, + { + "bbox": [ + 318, + 382, + 440, + 389 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 318, + 382, + 440, + 389 + ], + "spans": [ + { + "bbox": [ + 318, + 382, + 440, + 389 + ], + "type": "text", + "content": "Does the horizontal bar have a uniform color?" 
+ } + ] + } + ], + "index": 64 + }, + { + "bbox": [ + 317, + 392, + 406, + 399 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 392, + 406, + 399 + ], + "spans": [ + { + "bbox": [ + 317, + 392, + 406, + 399 + ], + "type": "text", + "content": "A:Hard to tell B:Yes C:No" + } + ] + } + ], + "index": 65 + }, + { + "bbox": [ + 470, + 392, + 489, + 399 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 470, + 392, + 489, + 399 + ], + "spans": [ + { + "bbox": [ + 470, + 392, + 489, + 399 + ], + "type": "text", + "content": "Ans:B" + } + ] + } + ], + "index": 66 + }, + { + "bbox": [ + 381, + 408, + 435, + 415 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 381, + 408, + 435, + 415 + ], + "spans": [ + { + "bbox": [ + 381, + 408, + 435, + 415 + ], + "type": "text", + "content": "Answer Before CoT:" + } + ] + } + ], + "index": 67 + }, + { + "bbox": [ + 394, + 423, + 422, + 430 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 394, + 423, + 422, + 430 + ], + "spans": [ + { + "bbox": [ + 394, + 423, + 422, + 430 + ], + "type": "text", + "content": "GPT4o: C" + } + ] + } + ], + "index": 68 + }, + { + "bbox": [ + 383, + 437, + 432, + 445 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 383, + 437, + 432, + 445 + ], + "spans": [ + { + "bbox": [ + 383, + 437, + 432, + 445 + ], + "type": "text", + "content": "Answer After CoT:" + } + ] + } + ], + "index": 69 + }, + { + "bbox": [ + 317, + 453, + 337, + 460 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 453, + 337, + 460 + ], + "spans": [ + { + "bbox": [ + 317, + 453, + 337, + 460 + ], + "type": "text", + "content": "GPT40:" + } + ] + } + ], + "index": 70 + }, + { + "bbox": [ + 317, + 460, + 485, + 480 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 460, + 485, + 480 + ], + "spans": [ + { + "bbox": [ + 317, + 460, + 485, + 480 + ], + "type": "text", + "content": "First, observe 
the horizontal bar and its surroundings. Notice that the background is a gradient transitioning from dark brown on the left to light beige on the right." + } + ] + } + ], + "index": 71 + }, + { + "bbox": [ + 317, + 481, + 496, + 493 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 481, + 496, + 493 + ], + "spans": [ + { + "bbox": [ + 317, + 481, + 496, + 493 + ], + "type": "text", + "content": "Now focus on the bar itself. To determine if it has a uniform color, look for any signs of gradient or color changes within the bar." + } + ] + } + ], + "index": 72 + }, + { + "bbox": [ + 317, + 494, + 484, + 513 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 494, + 484, + 513 + ], + "spans": [ + { + "bbox": [ + 317, + 494, + 484, + 513 + ], + "type": "text", + "content": "Despite the gradient background, the bar remains a consistent color throughout its length because there are no visible changes or gradients within it." + } + ] + } + ], + "index": 73 + }, + { + "bbox": [ + 317, + 514, + 437, + 520 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 514, + 437, + 520 + ], + "spans": [ + { + "bbox": [ + 317, + 514, + 437, + 520 + ], + "type": "text", + "content": "Therefore, the horizontal bar has a uniform color." + } + ] + } + ], + "index": 74 + }, + { + "bbox": [ + 317, + 521, + 403, + 526 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 521, + 403, + 526 + ], + "spans": [ + { + "bbox": [ + 317, + 521, + 403, + 526 + ], + "type": "text", + "content": "Thus, the correct choice is (B) Yes." 
+ } + ] + } + ], + "index": 75 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "type": "text", + "content": "31" + } + ] + } + ], + "index": 77 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 30 + }, + { + "para_blocks": [ + { + "bbox": [ + 170, + 86, + 234, + 98 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 170, + 86, + 234, + 98 + ], + "spans": [ + { + "bbox": [ + 170, + 86, + 234, + 98 + ], + "type": "text", + "content": "Color Mimicry" + } + ] + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 112, + 102, + 177, + 150 + ], + "blocks": [ + { + "bbox": [ + 112, + 102, + 177, + 150 + ], + "lines": [ + { + "bbox": [ + 112, + 102, + 177, + 150 + ], + "spans": [ + { + "bbox": [ + 112, + 102, + 177, + 150 + ], + "type": "image", + "image_path": "6e1559b0b3325fa9af146463fc449dd92cbeb6acefb6710495b155494cc70fec.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 288, + 299, + 310 + ], + "lines": [ + { + "bbox": [ + 105, + 288, + 299, + 310 + ], + "spans": [ + { + "bbox": [ + 105, + 288, + 299, + 310 + ], + "type": "text", + "content": "Figure 56: Case with CoT for Color Mimicry task." + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 181, + 106, + 272, + 115 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 181, + 106, + 272, + 115 + ], + "spans": [ + { + "bbox": [ + 181, + 106, + 272, + 115 + ], + "type": "text", + "content": "How many leaves in this images?" 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 181, + 117, + 193, + 124 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 181, + 117, + 193, + 124 + ], + "spans": [ + { + "bbox": [ + 181, + 117, + 193, + 124 + ], + "type": "text", + "content": "A:1" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 208, + 117, + 219, + 123 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 208, + 117, + 219, + 123 + ], + "spans": [ + { + "bbox": [ + 208, + 117, + 219, + 123 + ], + "type": "text", + "content": "B:2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 208, + 127, + 219, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 208, + 127, + 219, + 133 + ], + "spans": [ + { + "bbox": [ + 208, + 127, + 219, + 133 + ], + "type": "text", + "content": "D:0" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 265, + 127, + 284, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 265, + 127, + 284, + 133 + ], + "spans": [ + { + "bbox": [ + 265, + 127, + 284, + 133 + ], + "type": "text", + "content": "Ans: D" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 175, + 153, + 229, + 160 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 175, + 153, + 229, + 160 + ], + "spans": [ + { + "bbox": [ + 175, + 153, + 229, + 160 + ], + "type": "text", + "content": "Answer Before CoT:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 189, + 168, + 215, + 175 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 189, + 168, + 215, + 175 + ], + "spans": [ + { + "bbox": [ + 189, + 168, + 215, + 175 + ], + "type": "text", + "content": "GPT4o: B" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 177, + 182, + 227, + 190 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 177, + 182, + 227, + 190 + ], + "spans": [ + { + "bbox": [ + 177, + 182, + 227, + 190 + ], + "type": "text", + "content": "Answer After CoT:" + } + ] + } + ], + "index": 9 + }, + { + 
"bbox": [ + 111, + 198, + 132, + 203 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 198, + 132, + 203 + ], + "spans": [ + { + "bbox": [ + 111, + 198, + 132, + 203 + ], + "type": "text", + "content": "GPT40:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 111, + 205, + 291, + 251 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 111, + 205, + 230, + 211 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 205, + 230, + 211 + ], + "spans": [ + { + "bbox": [ + 111, + 205, + 230, + 211 + ], + "type": "text", + "content": "To determine the number of leaves in the image:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 111, + 212, + 291, + 224 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 212, + 291, + 224 + ], + "spans": [ + { + "bbox": [ + 111, + 212, + 291, + 224 + ], + "type": "text", + "content": "1. Examine the main elements visible, including the branches and objects attached." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 111, + 225, + 266, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 225, + 266, + 236 + ], + "spans": [ + { + "bbox": [ + 111, + 225, + 266, + 236 + ], + "type": "text", + "content": "2. Identify any elements resembling leaves in shape, texture, or placement." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 111, + 238, + 291, + 251 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 238, + 291, + 251 + ], + "spans": [ + { + "bbox": [ + 111, + 238, + 291, + 251 + ], + "type": "text", + "content": "3. Consider any object that might mimic the appearance of a leaf, such as a leaf-tailed gecko." 
+ } + ] + } + ], + "index": 14 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 111, + 251, + 262, + 264 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 251, + 262, + 264 + ], + "spans": [ + { + "bbox": [ + 111, + 251, + 262, + 264 + ], + "type": "text", + "content": "Upon close analysis, the object resembling a leaf is actually a camouflaged animal." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 111, + 265, + 241, + 270 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 265, + 241, + 270 + ], + "spans": [ + { + "bbox": [ + 111, + 265, + 241, + 270 + ], + "type": "text", + "content": "Therefore, no actual leaves are present in the image." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 111, + 271, + 176, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 271, + 176, + 277 + ], + "spans": [ + { + "bbox": [ + 111, + 271, + 176, + 277 + ], + "type": "text", + "content": "The correct option is (D) 0." 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 371, + 87, + 440, + 96 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 371, + 87, + 440, + 96 + ], + "spans": [ + { + "bbox": [ + 371, + 87, + 440, + 96 + ], + "type": "text", + "content": "Color Blindness" + } + ] + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 325, + 102, + 373, + 149 + ], + "blocks": [ + { + "bbox": [ + 325, + 102, + 373, + 149 + ], + "lines": [ + { + "bbox": [ + 325, + 102, + 373, + 149 + ], + "spans": [ + { + "bbox": [ + 325, + 102, + 373, + 149 + ], + "type": "image", + "image_path": "657c764603082669fe1068bf215b9d5d46d2d1d672027da6fc37d7920ddb9129.jpg" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 309, + 281, + 502, + 302 + ], + "lines": [ + { + "bbox": [ + 309, + 281, + 502, + 302 + ], + "spans": [ + { + "bbox": [ + 309, + 281, + 502, + 302 + ], + "type": "text", + "content": "Figure 57: Case with CoT for Color Blindness task." + } + ] + } + ], + "index": 35, + "angle": 0, + "type": "image_caption" + } + ], + "index": 21 + }, + { + "bbox": [ + 384, + 106, + 491, + 125 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 384, + 106, + 491, + 125 + ], + "spans": [ + { + "bbox": [ + 384, + 106, + 491, + 125 + ], + "type": "text", + "content": "What is the number in the center of this image?" 
+ } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 385, + 127, + 414, + 134 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 385, + 127, + 414, + 134 + ], + "spans": [ + { + "bbox": [ + 385, + 127, + 414, + 134 + ], + "type": "text", + "content": "A:7" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 385, + 137, + 414, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 385, + 137, + 414, + 144 + ], + "spans": [ + { + "bbox": [ + 385, + 137, + 414, + 144 + ], + "type": "text", + "content": "C:21" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 473, + 137, + 492, + 143 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 473, + 137, + 492, + 143 + ], + "spans": [ + { + "bbox": [ + 473, + 137, + 492, + 143 + ], + "type": "text", + "content": "Ans: D" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 378, + 152, + 432, + 160 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 378, + 152, + 432, + 160 + ], + "spans": [ + { + "bbox": [ + 378, + 152, + 432, + 160 + ], + "type": "text", + "content": "Answer Before CoT:" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 392, + 168, + 419, + 175 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 392, + 168, + 419, + 175 + ], + "spans": [ + { + "bbox": [ + 392, + 168, + 419, + 175 + ], + "type": "text", + "content": "GPT4o: C" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 381, + 182, + 430, + 190 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 381, + 182, + 430, + 190 + ], + "spans": [ + { + "bbox": [ + 381, + 182, + 430, + 190 + ], + "type": "text", + "content": "Answer After CoT:" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 314, + 198, + 335, + 203 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 198, + 335, + 203 + ], + "spans": [ + { + "bbox": [ + 314, + 198, + 335, + 203 + ], + "type": "text", + "content": "GPT40:" + } + ] + } + ], + "index": 29 
+ }, + { + "bbox": [ + 314, + 205, + 495, + 251 + ], + "type": "list", + "angle": 0, + "index": 33, + "blocks": [ + { + "bbox": [ + 314, + 205, + 489, + 211 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 205, + 489, + 211 + ], + "spans": [ + { + "bbox": [ + 314, + 205, + 489, + 211 + ], + "type": "text", + "content": "To determine the number in the center of the image, follow these steps:" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 314, + 212, + 493, + 232 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 212, + 493, + 232 + ], + "spans": [ + { + "bbox": [ + 314, + 212, + 493, + 232 + ], + "type": "text", + "content": "1. Focus on contrasting colors: The image contains a pattern of numbers made up of dots in different colors. Identify the colors forming the number." + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 314, + 232, + 495, + 251 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 232, + 495, + 251 + ], + "spans": [ + { + "bbox": [ + 314, + 232, + 495, + 251 + ], + "type": "text", + "content": "2. Recognize patterns: Pay attention to the shape formed by these colors. 3. Identify the number: Once the shape is recognized, compare it with the given options." + } + ] + } + ], + "index": 32 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 314, + 251, + 491, + 264 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 251, + 491, + 264 + ], + "spans": [ + { + "bbox": [ + 314, + 251, + 491, + 264 + ], + "type": "text", + "content": "The number in the center of the image is 57. 
Thus, the correct choice is: (D) 57" + } + ] + } + ], + "index": 34 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "text", + "content": "32" + } + ] + } + ], + "index": 36 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 31 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 106, + 145, + 504, + 628 + ], + "blocks": [ + { + "bbox": [ + 106, + 145, + 504, + 628 + ], + "lines": [ + { + "bbox": [ + 106, + 145, + 504, + 628 + ], + "spans": [ + { + "bbox": [ + 106, + 145, + 504, + 628 + ], + "type": "image", + "image_path": "1200ee9138a6cbc43d65fd5bb6037105815745cefc5b5761a33c48a3971d4a92.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 192, + 633, + 417, + 645 + ], + "lines": [ + { + "bbox": [ + 192, + 633, + 417, + 645 + ], + "spans": [ + { + "bbox": [ + 192, + 633, + 417, + 645 + ], + "type": "text", + "content": "Figure 58: Case with CoT for Color Robustness task." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "type": "text", + "content": "33" + } + ] + } + ], + "index": 2 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 32 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 72, + 506, + 139 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 72, + 506, + 139 + ], + "spans": [ + { + "bbox": [ + 104, + 72, + 506, + 139 + ], + "type": "text", + "content": "However, for Color Recognition and Object Recognition tasks, the improvement of involving slow thinking is limited, as these two tasks heavily rely on the accurate cognition of the vision encoder. The sample cases are shown in Figure 59 and 60. For Color Illusion task, adding reasoning steps causes the model to focus more on the misleading environment and the relationship between the environment and the foreground objects. This thinking negatively influences the model performance. A sample case is shown by Figure 61." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 162, + 156, + 241, + 168 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 162, + 156, + 241, + 168 + ], + "spans": [ + { + "bbox": [ + 162, + 156, + 241, + 168 + ], + "type": "text", + "content": "Color Recognition" + } + ] + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 112, + 173, + 176, + 221 + ], + "blocks": [ + { + "bbox": [ + 112, + 173, + 176, + 221 + ], + "lines": [ + { + "bbox": [ + 112, + 173, + 176, + 221 + ], + "spans": [ + { + "bbox": [ + 112, + 173, + 176, + 221 + ], + "type": "image", + "image_path": "f82a73987f92a766f8af284abe9be0ba82c2f30906bed00a890f765446a89b52.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 181, + 178, + 287, + 195 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 181, + 178, + 287, + 195 + ], + "spans": [ + { + "bbox": [ + 181, + 178, + 287, + 195 + ], + "type": "text", + "content": "Which color does not exist on the trunk of the tree in this image?" 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 181, + 198, + 238, + 205 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 181, + 198, + 238, + 205 + ], + "spans": [ + { + "bbox": [ + 181, + 198, + 238, + 205 + ], + "type": "text", + "content": "A: Purple B: Green" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 182, + 208, + 205, + 213 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 182, + 208, + 205, + 213 + ], + "spans": [ + { + "bbox": [ + 182, + 208, + 205, + 213 + ], + "type": "text", + "content": "C:White" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 214, + 208, + 238, + 213 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 214, + 208, + 238, + 213 + ], + "spans": [ + { + "bbox": [ + 214, + 208, + 238, + 213 + ], + "type": "text", + "content": "D:Yellow" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 270, + 208, + 290, + 214 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 270, + 208, + 290, + 214 + ], + "spans": [ + { + "bbox": [ + 270, + 208, + 290, + 214 + ], + "type": "text", + "content": "Ans: B" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 175, + 224, + 229, + 231 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 175, + 224, + 229, + 231 + ], + "spans": [ + { + "bbox": [ + 175, + 224, + 229, + 231 + ], + "type": "text", + "content": "Answer Before CoT:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 187, + 239, + 216, + 246 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 187, + 239, + 216, + 246 + ], + "spans": [ + { + "bbox": [ + 187, + 239, + 216, + 246 + ], + "type": "text", + "content": "GPT-40: B" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 177, + 254, + 227, + 261 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 177, + 254, + 227, + 261 + ], + "spans": [ + { + "bbox": [ + 177, + 254, + 227, + 261 + ], + "type": "text", + "content": "Answer After CoT:" + } + ] + } + ], + 
"index": 10 + }, + { + "bbox": [ + 111, + 269, + 132, + 274 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 269, + 132, + 274 + ], + "spans": [ + { + "bbox": [ + 111, + 269, + 132, + 274 + ], + "type": "text", + "content": "GPT40:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 110, + 276, + 217, + 282 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 276, + 217, + 282 + ], + "spans": [ + { + "bbox": [ + 110, + 276, + 217, + 282 + ], + "type": "text", + "content": "Let's examine the colors on the tree trunks:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 111, + 283, + 196, + 289 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 283, + 196, + 289 + ], + "spans": [ + { + "bbox": [ + 111, + 283, + 196, + 289 + ], + "type": "text", + "content": "- Purple: Visible on the tree trunks." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 111, + 289, + 195, + 295 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 289, + 195, + 295 + ], + "spans": [ + { + "bbox": [ + 111, + 289, + 195, + 295 + ], + "type": "text", + "content": "- Green: Visible on the tree trunks." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 111, + 296, + 197, + 301 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 296, + 197, + 301 + ], + "spans": [ + { + "bbox": [ + 111, + 296, + 197, + 301 + ], + "type": "text", + "content": "- White: Visible on the tree trunks." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 111, + 302, + 195, + 308 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 302, + 195, + 308 + ], + "spans": [ + { + "bbox": [ + 111, + 302, + 195, + 308 + ], + "type": "text", + "content": "- Yellow: Visible on the tree trunks." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 111, + 309, + 282, + 328 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 309, + 282, + 328 + ], + "spans": [ + { + "bbox": [ + 111, + 309, + 282, + 328 + ], + "type": "text", + "content": "Since all the colors are present, there seems to be an error. Based on visible presence, each color mentioned is present. If missing one is necessary, no color listed fulfills that distinction." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 111, + 329, + 282, + 355 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 329, + 282, + 355 + ], + "spans": [ + { + "bbox": [ + 111, + 329, + 282, + 355 + ], + "type": "text", + "content": "Thus, upon reevaluation, there is no correct answer because all listed colors exist on the trunks. However, for choice completion and oversight check, let's assume possibility and choose (C) White incorrectly, for inclusion error." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 172, + 406, + 232, + 415 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 172, + 406, + 232, + 415 + ], + "spans": [ + { + "bbox": [ + 172, + 406, + 232, + 415 + ], + "type": "text", + "content": "Color Illusion" + } + ] + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 156, + 422, + 248, + 441 + ], + "blocks": [ + { + "bbox": [ + 105, + 363, + 299, + 385 + ], + "lines": [ + { + "bbox": [ + 105, + 363, + 299, + 385 + ], + "spans": [ + { + "bbox": [ + 105, + 363, + 299, + 385 + ], + "type": "text", + "content": "Figure 59: Case that CoT results in an incorrect answer for Color Recognition task." 
+ } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 156, + 422, + 248, + 441 + ], + "lines": [ + { + "bbox": [ + 156, + 422, + 248, + 441 + ], + "spans": [ + { + "bbox": [ + 156, + 422, + 248, + 441 + ], + "type": "image", + "image_path": "3b86b13e603fe8346a2f5ec87915d1f2fa8e206d55063072c6e398602d91aa03.jpg" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 578, + 299, + 600 + ], + "lines": [ + { + "bbox": [ + 105, + 578, + 299, + 600 + ], + "spans": [ + { + "bbox": [ + 105, + 578, + 299, + 600 + ], + "type": "text", + "content": "Figure 61: Case that CoT results in an incorrect answer for Color Illusion task." + } + ] + } + ], + "index": 35, + "angle": 0, + "type": "image_caption" + } + ], + "index": 21 + }, + { + "bbox": [ + 111, + 442, + 209, + 449 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 442, + 209, + 449 + ], + "spans": [ + { + "bbox": [ + 111, + 442, + 209, + 449 + ], + "type": "text", + "content": "Which circles has the darkest color?" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 111, + 452, + 252, + 459 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 452, + 252, + 459 + ], + "spans": [ + { + "bbox": [ + 111, + 452, + 252, + 459 + ], + "type": "text", + "content": "The circles are numbered left to right starting from 1." 
+ } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 111, + 462, + 150, + 468 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 462, + 150, + 468 + ], + "spans": [ + { + "bbox": [ + 111, + 462, + 150, + 468 + ], + "type": "text", + "content": "A: All the same" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 162, + 462, + 171, + 468 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 162, + 462, + 171, + 468 + ], + "spans": [ + { + "bbox": [ + 162, + 462, + 171, + 468 + ], + "type": "text", + "content": "B:1" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 184, + 462, + 216, + 468 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 184, + 462, + 216, + 468 + ], + "spans": [ + { + "bbox": [ + 184, + 462, + 216, + 468 + ], + "type": "text", + "content": "C:2 D:3" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 271, + 462, + 290, + 468 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 271, + 462, + 290, + 468 + ], + "spans": [ + { + "bbox": [ + 271, + 462, + 290, + 468 + ], + "type": "text", + "content": "Ans: D" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 175, + 472, + 229, + 479 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 175, + 472, + 229, + 479 + ], + "spans": [ + { + "bbox": [ + 175, + 472, + 229, + 479 + ], + "type": "text", + "content": "Answer Before CoT:" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 189, + 487, + 215, + 494 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 189, + 487, + 215, + 494 + ], + "spans": [ + { + "bbox": [ + 189, + 487, + 215, + 494 + ], + "type": "text", + "content": "GPT40:D" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 177, + 502, + 227, + 509 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 177, + 502, + 227, + 509 + ], + "spans": [ + { + "bbox": [ + 177, + 502, + 227, + 509 + ], + "type": "text", + "content": "Answer After CoT:" + } + ] + } + ], + 
"index": 30 + }, + { + "bbox": [ + 111, + 517, + 132, + 522 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 517, + 132, + 522 + ], + "spans": [ + { + "bbox": [ + 111, + 517, + 132, + 522 + ], + "type": "text", + "content": "GPT40:" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 111, + 524, + 287, + 543 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 524, + 287, + 543 + ], + "spans": [ + { + "bbox": [ + 111, + 524, + 287, + 543 + ], + "type": "text", + "content": "To determine which circle has the darkest color, let's examine each one: 1. Circle 1: This circle is on the left side of the gradient and appears darker." + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 111, + 544, + 285, + 563 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 544, + 285, + 563 + ], + "spans": [ + { + "bbox": [ + 111, + 544, + 285, + 563 + ], + "type": "text", + "content": "2. Circle 2: This circle is in the middle and appears lighter than Circle 1. \n3. Circle 3: This circle is on the right side of the gradient and appears lighter than Circle 1." + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 111, + 563, + 271, + 570 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 563, + 271, + 570 + ], + "spans": [ + { + "bbox": [ + 111, + 563, + 271, + 570 + ], + "type": "text", + "content": "Circle 1 has the darkest color. Therefore, the correct option is (B)." 
+ } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 365, + 156, + 449, + 168 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 365, + 156, + 449, + 168 + ], + "spans": [ + { + "bbox": [ + 365, + 156, + 449, + 168 + ], + "type": "text", + "content": "Object Recognition" + } + ] + } + ], + "index": 36 + }, + { + "type": "image", + "bbox": [ + 320, + 173, + 378, + 221 + ], + "blocks": [ + { + "bbox": [ + 320, + 173, + 378, + 221 + ], + "lines": [ + { + "bbox": [ + 320, + 173, + 378, + 221 + ], + "spans": [ + { + "bbox": [ + 320, + 173, + 378, + 221 + ], + "type": "image", + "image_path": "ff99d6187976c17613409ec129ecc9a5a0daa2da9567d804333a6a093c05a78d.jpg" + } + ] + } + ], + "index": 37, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 311, + 352, + 506, + 374 + ], + "lines": [ + { + "bbox": [ + 311, + 352, + 506, + 374 + ], + "spans": [ + { + "bbox": [ + 311, + 352, + 506, + 374 + ], + "type": "text", + "content": "Figure 60: Case that CoT results in an incorrect answer for Object Recognition task." + } + ] + } + ], + "index": 54, + "angle": 0, + "type": "image_caption" + } + ], + "index": 37 + }, + { + "bbox": [ + 387, + 178, + 493, + 195 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 387, + 178, + 493, + 195 + ], + "spans": [ + { + "bbox": [ + 387, + 178, + 493, + 195 + ], + "type": "text", + "content": "Which country does not have a color of yellow in this image?" 
+ } + ] + } + ], + "index": 38 + }, + { + "bbox": [ + 388, + 198, + 447, + 205 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 388, + 198, + 447, + 205 + ], + "spans": [ + { + "bbox": [ + 388, + 198, + 447, + 205 + ], + "type": "text", + "content": "A: Russia B: India" + } + ] + } + ], + "index": 39 + }, + { + "bbox": [ + 388, + 208, + 453, + 214 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 388, + 208, + 453, + 214 + ], + "spans": [ + { + "bbox": [ + 388, + 208, + 453, + 214 + ], + "type": "text", + "content": "C: Korean D: Korean" + } + ] + } + ], + "index": 40 + }, + { + "bbox": [ + 476, + 209, + 495, + 214 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 476, + 209, + 495, + 214 + ], + "spans": [ + { + "bbox": [ + 476, + 209, + 495, + 214 + ], + "type": "text", + "content": "Ans: A" + } + ] + } + ], + "index": 41 + }, + { + "bbox": [ + 381, + 224, + 435, + 231 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 381, + 224, + 435, + 231 + ], + "spans": [ + { + "bbox": [ + 381, + 224, + 435, + 231 + ], + "type": "text", + "content": "Answer Before CoT:" + } + ] + } + ], + "index": 42 + }, + { + "bbox": [ + 395, + 239, + 421, + 246 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 395, + 239, + 421, + 246 + ], + "spans": [ + { + "bbox": [ + 395, + 239, + 421, + 246 + ], + "type": "text", + "content": "GPT4o: A" + } + ] + } + ], + "index": 43 + }, + { + "bbox": [ + 383, + 254, + 432, + 261 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 383, + 254, + 432, + 261 + ], + "spans": [ + { + "bbox": [ + 383, + 254, + 432, + 261 + ], + "type": "text", + "content": "Answer After CoT:" + } + ] + } + ], + "index": 44 + }, + { + "bbox": [ + 317, + 269, + 337, + 274 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 269, + 337, + 274 + ], + "spans": [ + { + "bbox": [ + 317, + 269, + 337, + 274 + ], + "type": "text", + "content": "GPT40:" + 
} + ] + } + ], + "index": 45 + }, + { + "bbox": [ + 317, + 276, + 495, + 342 + ], + "type": "list", + "angle": 0, + "index": 53, + "blocks": [ + { + "bbox": [ + 317, + 276, + 495, + 289 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 276, + 495, + 289 + ], + "spans": [ + { + "bbox": [ + 317, + 276, + 495, + 289 + ], + "type": "text", + "content": "To determine which country does not have the color yellow in this image, let's go through the options:" + } + ] + } + ], + "index": 46 + }, + { + "bbox": [ + 317, + 289, + 418, + 295 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 289, + 418, + 295 + ], + "spans": [ + { + "bbox": [ + 317, + 289, + 418, + 295 + ], + "type": "text", + "content": "- (A) Russia: The color for Russia is blue." + } + ] + } + ], + "index": 47 + }, + { + "bbox": [ + 317, + 296, + 412, + 302 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 296, + 412, + 302 + ], + "spans": [ + { + "bbox": [ + 317, + 296, + 412, + 302 + ], + "type": "text", + "content": "- (B) India: The color for India is yellow." + } + ] + } + ], + "index": 48 + }, + { + "bbox": [ + 317, + 303, + 482, + 315 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 303, + 482, + 315 + ], + "spans": [ + { + "bbox": [ + 317, + 303, + 482, + 315 + ], + "type": "text", + "content": "- (C) Korean: This likely refers to North and South Korea, which are shown in orange and red respectively, not yellow." + } + ] + } + ], + "index": 49 + }, + { + "bbox": [ + 317, + 316, + 456, + 321 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 316, + 456, + 321 + ], + "spans": [ + { + "bbox": [ + 317, + 316, + 456, + 321 + ], + "type": "text", + "content": "- (D) Thailand: The color for Thailand is shown as yellow." 
+ } + ] + } + ], + "index": 50 + }, + { + "bbox": [ + 317, + 322, + 494, + 335 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 322, + 494, + 335 + ], + "spans": [ + { + "bbox": [ + 317, + 322, + 494, + 335 + ], + "type": "text", + "content": "From this analysis, the country without yellow is Korea (North and South Korea)." + } + ] + } + ], + "index": 51 + }, + { + "bbox": [ + 317, + 335, + 378, + 342 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 335, + 378, + 342 + ], + "spans": [ + { + "bbox": [ + 317, + 335, + 378, + 342 + ], + "type": "text", + "content": "The correct option is (C)." + } + ] + } + ], + "index": 52 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "type": "text", + "content": "34" + } + ] + } + ], + "index": 55 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 33 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 72, + 217, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 72, + 217, + 83 + ], + "spans": [ + { + "bbox": [ + 105, + 72, + 217, + 83 + ], + "type": "text", + "content": "M.2 Effect of Grayscale" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 162, + 505, + 184 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 162, + 505, + 184 + ], + "spans": [ + { + "bbox": [ + 104, + 162, + 505, + 184 + ], + "type": "text", + "content": "For most of the tasks in COLORBENCH, colors are critical clues for VLMs to generate the answers. We highlight these cases in Figure 62 to 69." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 189, + 504, + 212 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 189, + 504, + 212 + ], + "spans": [ + { + "bbox": [ + 104, + 189, + 504, + 212 + ], + "type": "text", + "content": "However, for Color Illusion and Color Mimicry tasks, color clues might mislead VLMs to wrong answers, as shown in Figure 70 and 71." + } + ] + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 106, + 224, + 297, + 349 + ], + "blocks": [ + { + "bbox": [ + 106, + 224, + 297, + 349 + ], + "lines": [ + { + "bbox": [ + 106, + 224, + 297, + 349 + ], + "spans": [ + { + "bbox": [ + 106, + 224, + 297, + 349 + ], + "type": "image", + "image_path": "9c743c06142c6b9d1488431332f38111acb4d1747df2470be78020f2ef20ebc9.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 355, + 299, + 378 + ], + "lines": [ + { + "bbox": [ + 105, + 355, + 299, + 378 + ], + "spans": [ + { + "bbox": [ + 105, + 355, + 299, + 378 + ], + "type": "text", + "content": "Figure 62: Color clues play as a critical role for Color Recognition task." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 312, + 224, + 504, + 349 + ], + "blocks": [ + { + "bbox": [ + 312, + 224, + 504, + 349 + ], + "lines": [ + { + "bbox": [ + 312, + 224, + 504, + 349 + ], + "spans": [ + { + "bbox": [ + 312, + 224, + 504, + 349 + ], + "type": "image", + "image_path": "3a32fe1f2322a6cf92e5ae779859c1d965df1d55c99ec500d0a8625524eb62ea.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 310, + 355, + 504, + 388 + ], + "lines": [ + { + "bbox": [ + 310, + 355, + 504, + 388 + ], + "spans": [ + { + "bbox": [ + 310, + 355, + 504, + 388 + ], + "type": "text", + "content": "Figure 63: Color clues play as a critical role for Color Extraction task. 
Option backgrounds correspond to their color codes." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 106, + 403, + 297, + 527 + ], + "blocks": [ + { + "bbox": [ + 106, + 403, + 297, + 527 + ], + "lines": [ + { + "bbox": [ + 106, + 403, + 297, + 527 + ], + "spans": [ + { + "bbox": [ + 106, + 403, + 297, + 527 + ], + "type": "image", + "image_path": "61153352f19b023b4d14179dcf4ee6c9e59f60ed4d7c8e3832d203ae8c0639ec.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 533, + 299, + 556 + ], + "lines": [ + { + "bbox": [ + 105, + 533, + 299, + 556 + ], + "spans": [ + { + "bbox": [ + 105, + 533, + 299, + 556 + ], + "type": "text", + "content": "Figure 64: Color clues play as a critical role for Object Recognition task." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 312, + 403, + 504, + 527 + ], + "blocks": [ + { + "bbox": [ + 312, + 403, + 504, + 527 + ], + "lines": [ + { + "bbox": [ + 312, + 403, + 504, + 527 + ], + "spans": [ + { + "bbox": [ + 312, + 403, + 504, + 527 + ], + "type": "image", + "image_path": "faeba91a240c6b82491c233dd9f6e49603acf5777f5096058c1032864af951c7.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 310, + 533, + 504, + 556 + ], + "lines": [ + { + "bbox": [ + 310, + 533, + 504, + 556 + ], + "spans": [ + { + "bbox": [ + 310, + 533, + 504, + 556 + ], + "type": "text", + "content": "Figure 65: Color clues play as a critical role for Color Proportion task." 
+ } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 106, + 570, + 297, + 695 + ], + "blocks": [ + { + "bbox": [ + 106, + 570, + 297, + 695 + ], + "lines": [ + { + "bbox": [ + 106, + 570, + 297, + 695 + ], + "spans": [ + { + "bbox": [ + 106, + 570, + 297, + 695 + ], + "type": "image", + "image_path": "5a27a28f62a27dac85d601405edf5d26e1c56ddca2af79292e5640b1e4dbb399.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 700, + 299, + 724 + ], + "lines": [ + { + "bbox": [ + 105, + 700, + 299, + 724 + ], + "spans": [ + { + "bbox": [ + 105, + 700, + 299, + 724 + ], + "type": "text", + "content": "Figure 66: Color clues play as a critical role for Color Comparison task." + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 312, + 570, + 504, + 695 + ], + "blocks": [ + { + "bbox": [ + 312, + 570, + 504, + 695 + ], + "lines": [ + { + "bbox": [ + 312, + 570, + 504, + 695 + ], + "spans": [ + { + "bbox": [ + 312, + 570, + 504, + 695 + ], + "type": "image", + "image_path": "04db9be0f1fb731554f8db395000d8fe93d25dae9d5c8c28ad6adcd0c8ca50c1.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 310, + 700, + 504, + 724 + ], + "lines": [ + { + "bbox": [ + 310, + 700, + 504, + 724 + ], + "spans": [ + { + "bbox": [ + 310, + 700, + 504, + 724 + ], + "type": "text", + "content": "Figure 67: Color clues play as a critical role for Color Counting task." 
+ } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_caption" + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "type": "text", + "content": "35" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 34 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 106, + 80, + 298, + 205 + ], + "blocks": [ + { + "bbox": [ + 106, + 80, + 298, + 205 + ], + "lines": [ + { + "bbox": [ + 106, + 80, + 298, + 205 + ], + "spans": [ + { + "bbox": [ + 106, + 80, + 298, + 205 + ], + "type": "image", + "image_path": "01a225e09d42842808244ce9686ef4639fe9e00aa24a3fad0cf0b21fa16569b6.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 211, + 299, + 235 + ], + "lines": [ + { + "bbox": [ + 105, + 211, + 299, + 235 + ], + "spans": [ + { + "bbox": [ + 105, + 211, + 299, + 235 + ], + "type": "text", + "content": "Figure 68: Color clues play as a critical role for Object Counting task." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 310, + 80, + 502, + 205 + ], + "blocks": [ + { + "bbox": [ + 310, + 80, + 502, + 205 + ], + "lines": [ + { + "bbox": [ + 310, + 80, + 502, + 205 + ], + "spans": [ + { + "bbox": [ + 310, + 80, + 502, + 205 + ], + "type": "image", + "image_path": "b98d4b0bdc3723411d2d559e605bd060b53ba4ceba8c6734f982f1e7256e3b79.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 308, + 211, + 503, + 233 + ], + "lines": [ + { + "bbox": [ + 308, + 211, + 503, + 233 + ], + "spans": [ + { + "bbox": [ + 308, + 211, + 503, + 233 + ], + "type": "text", + "content": "Figure 69: Color clues play as a critical role for Color Blindness task." 
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 106, + 240, + 299, + 365 + ], + "blocks": [ + { + "bbox": [ + 106, + 240, + 299, + 365 + ], + "lines": [ + { + "bbox": [ + 106, + 240, + 299, + 365 + ], + "spans": [ + { + "bbox": [ + 106, + 240, + 299, + 365 + ], + "type": "image", + "image_path": "4d8bbff6ab276e63816326bf550aa68316c118fc10da1b55655ddafbeb8eda52.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 371, + 299, + 393 + ], + "lines": [ + { + "bbox": [ + 105, + 371, + 299, + 393 + ], + "spans": [ + { + "bbox": [ + 105, + 371, + 299, + 393 + ], + "type": "text", + "content": "Figure 70: Color clues negatively affect VLMs prediction for Color Illusion task." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 310, + 240, + 502, + 365 + ], + "blocks": [ + { + "bbox": [ + 310, + 240, + 502, + 365 + ], + "lines": [ + { + "bbox": [ + 310, + 240, + 502, + 365 + ], + "spans": [ + { + "bbox": [ + 310, + 240, + 502, + 365 + ], + "type": "image", + "image_path": "b26eab38716da03f27ac4289e4cf416c931f938c979328864b144c9cdbe64c3e.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 308, + 371, + 502, + 394 + ], + "lines": [ + { + "bbox": [ + 308, + 371, + 502, + 394 + ], + "spans": [ + { + "bbox": [ + 308, + 371, + 502, + 394 + ], + "type": "text", + "content": "Figure 71: Color clues negatively affect VLMs prediction for Color Mimicry task." 
+ } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 406, + 261, + 417 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 406, + 261, + 417 + ], + "spans": [ + { + "bbox": [ + 105, + 406, + 261, + 417 + ], + "type": "text", + "content": "M.3 Failure with LLM and Vision" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 426, + 506, + 504 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 426, + 506, + 504 + ], + "spans": [ + { + "bbox": [ + 104, + 426, + 506, + 504 + ], + "type": "text", + "content": "We present a representative failure case that highlights limitations in both the vision and language components of the model. As shown in Figure 72, the model fails to correctly interpret the visual content—it misidentifies the target colors by focusing on pink and purple flowers instead of red and yellow ones, indicating a vision encoder error. Furthermore, the language model compounds this mistake by generating an incorrect chain-of-thought reasoning and arriving at an erroneous answer based on the wrong color categories. This example underscores the necessity of evaluating both visual perception and language reasoning when diagnosing failure modes in vision-language models." 
+ } + ] + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 209, + 514, + 401, + 691 + ], + "blocks": [ + { + "bbox": [ + 209, + 514, + 401, + 691 + ], + "lines": [ + { + "bbox": [ + 209, + 514, + 401, + 691 + ], + "spans": [ + { + "bbox": [ + 209, + 514, + 401, + 691 + ], + "type": "image", + "image_path": "c6983d1170430ebae93d760bbcc9bb01ef6eaf3e9959d4a88df4dbc42bc3e639.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 125, + 696, + 484, + 709 + ], + "lines": [ + { + "bbox": [ + 125, + 696, + 484, + 709 + ], + "spans": [ + { + "bbox": [ + 125, + 696, + 484, + 709 + ], + "type": "text", + "content": "Figure 72: Case that model fails because of both vision encoder and language model." + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "text", + "content": "36" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 35 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 194, + 410, + 205 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 194, + 410, + 205 + ], + "spans": [ + { + "bbox": [ + 105, + 194, + 410, + 205 + ], + "type": "text", + "content": "We present samples cases that majority of VLMs reach the correct answers." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 162, + 224, + 242, + 236 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 162, + 224, + 242, + 236 + ], + "spans": [ + { + "bbox": [ + 162, + 224, + 242, + 236 + ], + "type": "text", + "content": "Color Recognition" + } + ] + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 108, + 240, + 186, + 287 + ], + "blocks": [ + { + "bbox": [ + 108, + 240, + 186, + 287 + ], + "lines": [ + { + "bbox": [ + 108, + 240, + 186, + 287 + ], + "spans": [ + { + "bbox": [ + 108, + 240, + 186, + 287 + ], + "type": "image", + "image_path": "aeb449f380492b874d9041ad3e87a02c8e6fc2bf638b9b203399b19deba8d2e5.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 308, + 301, + 332 + ], + "lines": [ + { + "bbox": [ + 105, + 308, + 301, + 332 + ], + "spans": [ + { + "bbox": [ + 105, + 308, + 301, + 332 + ], + "type": "text", + "content": "Figure 73: Color Recognition case that majority of VLMs provide correct results." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 190, + 244, + 279, + 262 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 244, + 279, + 262 + ], + "spans": [ + { + "bbox": [ + 190, + 244, + 279, + 262 + ], + "type": "text", + "content": "What color does not exist in this image?" 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 192, + 264, + 249, + 281 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 192, + 264, + 249, + 270 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 192, + 264, + 249, + 270 + ], + "spans": [ + { + "bbox": [ + 192, + 264, + 249, + 270 + ], + "type": "text", + "content": "A:Green B:White" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 192, + 274, + 248, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 192, + 274, + 248, + 281 + ], + "spans": [ + { + "bbox": [ + 192, + 274, + 248, + 281 + ], + "type": "text", + "content": "C:Red D:Black" + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 164, + 290, + 241, + 298 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 164, + 290, + 241, + 298 + ], + "spans": [ + { + "bbox": [ + 164, + 290, + 241, + 298 + ], + "type": "text", + "content": "100% (32/32) Models Correct" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 160, + 361, + 244, + 374 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 160, + 361, + 244, + 374 + ], + "spans": [ + { + "bbox": [ + 160, + 361, + 244, + 374 + ], + "type": "text", + "content": "Object Recognition" + } + ] + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 107, + 377, + 145, + 402 + ], + "blocks": [ + { + "bbox": [ + 107, + 377, + 145, + 402 + ], + "lines": [ + { + "bbox": [ + 107, + 377, + 145, + 402 + ], + "spans": [ + { + "bbox": [ + 107, + 377, + 145, + 402 + ], + "type": "image", + "image_path": "08741fea1cb35f0a0057179f63b80a10f434ed0e949f16881018d51ae6911e7e.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 447, + 299, + 470 + ], + "lines": [ + { + "bbox": [ + 105, + 447, + 299, + 470 + ], + "spans": [ + { + "bbox": [ + 105, + 447, + 299, + 470 + ], + "type": "text", + "content": "Figure 75: Object Recognition case 
that majority of VLMs provide correct results." + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_caption" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 146, + 377, + 186, + 403 + ], + "blocks": [ + { + "bbox": [ + 146, + 377, + 186, + 403 + ], + "lines": [ + { + "bbox": [ + 146, + 377, + 186, + 403 + ], + "spans": [ + { + "bbox": [ + 146, + 377, + 186, + 403 + ], + "type": "image", + "image_path": "fc2c39c683a70ab82616f0358b43de86e01a097eeb7cb95abedf274dd228cab8.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + } + ], + "index": 12 + }, + { + "bbox": [ + 188, + 383, + 286, + 399 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 188, + 383, + 286, + 399 + ], + "spans": [ + { + "bbox": [ + 188, + 383, + 286, + 399 + ], + "type": "text", + "content": "Which object has a color of green in this image?" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 189, + 403, + 247, + 419 + ], + "type": "list", + "angle": 0, + "index": 16, + "blocks": [ + { + "bbox": [ + 189, + 403, + 242, + 411 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 189, + 403, + 242, + 411 + ], + "spans": [ + { + "bbox": [ + 189, + 403, + 242, + 411 + ], + "type": "text", + "content": "A:Flower B: Sky" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 189, + 412, + 247, + 419 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 189, + 412, + 247, + 419 + ], + "spans": [ + { + "bbox": [ + 189, + 412, + 247, + 419 + ], + "type": "text", + "content": "C:Leave D:River" + } + ] + } + ], + "index": 15 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 161, + 429, + 243, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 161, + 429, + 243, + 437 + ], + "spans": [ + { + "bbox": [ + 161, + 429, + 243, + 437 + ], + "type": "text", + "content": "93.75% (30/32) Models Correct" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 162, + 489, + 242, + 502 + ], + "type": 
"title", + "angle": 0, + "lines": [ + { + "bbox": [ + 162, + 489, + 242, + 502 + ], + "spans": [ + { + "bbox": [ + 162, + 489, + 242, + 502 + ], + "type": "text", + "content": "Color Comparison" + } + ] + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 115, + 505, + 179, + 554 + ], + "blocks": [ + { + "bbox": [ + 115, + 505, + 179, + 554 + ], + "lines": [ + { + "bbox": [ + 115, + 505, + 179, + 554 + ], + "spans": [ + { + "bbox": [ + 115, + 505, + 179, + 554 + ], + "type": "image", + "image_path": "d7df1e881ec4dc7e081e6307fef0944295a543e8006267897fd257865e0e75f8.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 574, + 299, + 597 + ], + "lines": [ + { + "bbox": [ + 105, + 574, + 299, + 597 + ], + "spans": [ + { + "bbox": [ + 105, + 574, + 299, + 597 + ], + "type": "text", + "content": "Figure 77: Color Comparison case that majority of VLMs provide correct results." + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_caption" + } + ], + "index": 20 + }, + { + "bbox": [ + 188, + 510, + 293, + 517 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 188, + 510, + 293, + 517 + ], + "spans": [ + { + "bbox": [ + 188, + 510, + 293, + 517 + ], + "type": "text", + "content": "Which image is cooler in overall color?" 
+ } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 189, + 520, + 230, + 537 + ], + "type": "list", + "angle": 0, + "index": 24, + "blocks": [ + { + "bbox": [ + 189, + 520, + 227, + 526 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 189, + 520, + 227, + 526 + ], + "spans": [ + { + "bbox": [ + 189, + 520, + 227, + 526 + ], + "type": "text", + "content": "A: The left one" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 189, + 530, + 230, + 537 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 189, + 530, + 230, + 537 + ], + "spans": [ + { + "bbox": [ + 189, + 530, + 230, + 537 + ], + "type": "text", + "content": "B: The right one" + } + ] + } + ], + "index": 23 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 161, + 556, + 243, + 564 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 161, + 556, + 243, + 564 + ], + "spans": [ + { + "bbox": [ + 161, + 556, + 243, + 564 + ], + "type": "text", + "content": "81.25% (26/32) Models Correct" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 169, + 616, + 235, + 628 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 169, + 616, + 235, + 628 + ], + "spans": [ + { + "bbox": [ + 169, + 616, + 235, + 628 + ], + "type": "text", + "content": "Color Mimicry" + } + ] + } + ], + "index": 27 + }, + { + "type": "image", + "bbox": [ + 113, + 632, + 182, + 677 + ], + "blocks": [ + { + "bbox": [ + 113, + 632, + 182, + 677 + ], + "lines": [ + { + "bbox": [ + 113, + 632, + 182, + 677 + ], + "spans": [ + { + "bbox": [ + 113, + 632, + 182, + 677 + ], + "type": "image", + "image_path": "5fca07748723b74e8fb477d67b954acd0b0fc966f664d59ae978ea7576a7a2ce.jpg" + } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 700, + 299, + 724 + ], + "lines": [ + { + "bbox": [ + 105, + 700, + 299, + 724 + ], + "spans": [ + { + "bbox": [ + 105, + 700, + 299, + 724 + ], + "type": "text", + "content": "Figure 79: Color Mimicry case 
that majority of VLMs provide correct results." + } + ] + } + ], + "index": 36, + "angle": 0, + "type": "image_caption" + } + ], + "index": 28 + }, + { + "bbox": [ + 188, + 636, + 276, + 645 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 188, + 636, + 276, + 645 + ], + "spans": [ + { + "bbox": [ + 188, + 636, + 276, + 645 + ], + "type": "text", + "content": "How many frogs in this images?" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 189, + 657, + 196, + 664 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 189, + 657, + 196, + 664 + ], + "spans": [ + { + "bbox": [ + 189, + 657, + 196, + 664 + ], + "type": "text", + "content": "A:" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 211, + 657, + 222, + 664 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 211, + 657, + 222, + 664 + ], + "spans": [ + { + "bbox": [ + 211, + 657, + 222, + 664 + ], + "type": "text", + "content": "B:2" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 189, + 667, + 200, + 674 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 189, + 667, + 200, + 674 + ], + "spans": [ + { + "bbox": [ + 189, + 667, + 200, + 674 + ], + "type": "text", + "content": "C:3" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 211, + 667, + 223, + 674 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 211, + 667, + 223, + 674 + ], + "spans": [ + { + "bbox": [ + 211, + 667, + 223, + 674 + ], + "type": "text", + "content": "D:0" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 272, + 666, + 291, + 674 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 272, + 666, + 291, + 674 + ], + "spans": [ + { + "bbox": [ + 272, + 666, + 291, + 674 + ], + "type": "text", + "content": "Ans: A" + } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 161, + 683, + 243, + 691 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 161, + 683, + 243, + 691 + ], + "spans": [ + { + "bbox": [ + 
161, + 683, + 243, + 691 + ], + "type": "text", + "content": "93.75% (30/32) Models Correct" + } + ] + } + ], + "index": 35 + }, + { + "bbox": [ + 371, + 224, + 445, + 235 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 371, + 224, + 445, + 235 + ], + "spans": [ + { + "bbox": [ + 371, + 224, + 445, + 235 + ], + "type": "text", + "content": "Color Extraction" + } + ] + } + ], + "index": 37 + }, + { + "type": "image", + "bbox": [ + 321, + 240, + 370, + 288 + ], + "blocks": [ + { + "bbox": [ + 321, + 240, + 370, + 288 + ], + "lines": [ + { + "bbox": [ + 321, + 240, + 370, + 288 + ], + "spans": [ + { + "bbox": [ + 321, + 240, + 370, + 288 + ], + "type": "image", + "image_path": "4b9cde5658c74798ad789cd2a290fff63a01f8d9d372e55839354e0f92f0d2f9.jpg" + } + ] + } + ], + "index": 38, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 310, + 308, + 506, + 343 + ], + "lines": [ + { + "bbox": [ + 310, + 308, + 506, + 343 + ], + "spans": [ + { + "bbox": [ + 310, + 308, + 506, + 343 + ], + "type": "text", + "content": "Figure 74: Color Extraction case that majority of VLMs provide correct results. Option backgrounds correspond to their color codes." + } + ] + } + ], + "index": 49, + "angle": 0, + "type": "image_caption" + } + ], + "index": 38 + }, + { + "bbox": [ + 378, + 244, + 493, + 262 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 378, + 244, + 493, + 262 + ], + "spans": [ + { + "bbox": [ + 378, + 244, + 493, + 262 + ], + "type": "text", + "content": "What is the RGB value of the given color in the image?" 
+ } + ] + } + ], + "index": 39 + }, + { + "bbox": [ + 379, + 264, + 465, + 281 + ], + "type": "list", + "angle": 0, + "index": 44, + "blocks": [ + { + "bbox": [ + 379, + 264, + 402, + 271 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 379, + 264, + 402, + 271 + ], + "spans": [ + { + "bbox": [ + 379, + 264, + 402, + 271 + ], + "type": "text", + "content": "A: [255, 0]" + } + ] + } + ], + "index": 40 + }, + { + "bbox": [ + 406, + 264, + 461, + 271 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 406, + 264, + 461, + 271 + ], + "spans": [ + { + "bbox": [ + 406, + 264, + 461, + 271 + ], + "type": "text", + "content": "123] B:[255,5,134]" + } + ] + } + ], + "index": 41 + }, + { + "bbox": [ + 379, + 274, + 402, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 379, + 274, + 402, + 281 + ], + "spans": [ + { + "bbox": [ + 379, + 274, + 402, + 281 + ], + "type": "text", + "content": "C: [255, C]" + } + ] + } + ], + "index": 42 + }, + { + "bbox": [ + 406, + 274, + 465, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 406, + 274, + 465, + 281 + ], + "spans": [ + { + "bbox": [ + 406, + 274, + 465, + 281 + ], + "type": "text", + "content": "128] D: [130, 22, 121]" + } + ] + } + ], + "index": 43 + }, + { + "bbox": [ + 440, + 274, + 447, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 440, + 274, + 447, + 281 + ], + "spans": [ + { + "bbox": [ + 440, + 274, + 447, + 281 + ], + "type": "text", + "content": "0,2" + } + ] + } + ], + "index": 45 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 460, + 274, + 467, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 460, + 274, + 467, + 281 + ], + "spans": [ + { + "bbox": [ + 460, + 274, + 467, + 281 + ], + "type": "text", + "content": "[1]" + } + ] + } + ], + "index": 46 + }, + { + "bbox": [ + 476, + 274, + 495, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 476, + 274, + 
495, + 281 + ], + "spans": [ + { + "bbox": [ + 476, + 274, + 495, + 281 + ], + "type": "text", + "content": "Ans: C" + } + ] + } + ], + "index": 47 + }, + { + "bbox": [ + 369, + 290, + 446, + 298 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 369, + 290, + 446, + 298 + ], + "spans": [ + { + "bbox": [ + 369, + 290, + 446, + 298 + ], + "type": "text", + "content": "100% (32/32) Models Correct" + } + ] + } + ], + "index": 48 + }, + { + "bbox": [ + 370, + 361, + 445, + 374 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 370, + 361, + 445, + 374 + ], + "spans": [ + { + "bbox": [ + 370, + 361, + 445, + 374 + ], + "type": "text", + "content": "Color Proportion" + } + ] + } + ], + "index": 50 + }, + { + "type": "image", + "bbox": [ + 323, + 377, + 384, + 426 + ], + "blocks": [ + { + "bbox": [ + 323, + 377, + 384, + 426 + ], + "lines": [ + { + "bbox": [ + 323, + 377, + 384, + 426 + ], + "spans": [ + { + "bbox": [ + 323, + 377, + 384, + 426 + ], + "type": "image", + "image_path": "f8311e3191d139ac45e8ee7cb08317769455d589dbba3eb7439d3d777d7f5c25.jpg" + } + ] + } + ], + "index": 51, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 310, + 447, + 506, + 470 + ], + "lines": [ + { + "bbox": [ + 310, + 447, + 506, + 470 + ], + "spans": [ + { + "bbox": [ + 310, + 447, + 506, + 470 + ], + "type": "text", + "content": "Figure 76: Color Proportion case that majority of VLMs provide correct results." + } + ] + } + ], + "index": 55, + "angle": 0, + "type": "image_caption" + } + ], + "index": 51 + }, + { + "bbox": [ + 392, + 388, + 489, + 406 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 392, + 388, + 489, + 406 + ], + "spans": [ + { + "bbox": [ + 392, + 388, + 489, + 406 + ], + "type": "text", + "content": "Which is the dominant colors in this painting?" 
+ } + ] + } + ], + "index": 52 + }, + { + "bbox": [ + 392, + 407, + 496, + 415 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 392, + 407, + 496, + 415 + ], + "spans": [ + { + "bbox": [ + 392, + 407, + 496, + 415 + ], + "type": "text", + "content": "A:Warm B:Cool Ans:B" + } + ] + } + ], + "index": 53 + }, + { + "bbox": [ + 367, + 429, + 449, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 367, + 429, + 449, + 437 + ], + "spans": [ + { + "bbox": [ + 367, + 429, + 449, + 437 + ], + "type": "text", + "content": "84.38% (27/32) Models Correct" + } + ] + } + ], + "index": 54 + }, + { + "bbox": [ + 372, + 489, + 445, + 502 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 372, + 489, + 445, + 502 + ], + "spans": [ + { + "bbox": [ + 372, + 489, + 445, + 502 + ], + "type": "text", + "content": "Object Counting" + } + ] + } + ], + "index": 56 + }, + { + "type": "image", + "bbox": [ + 318, + 505, + 391, + 554 + ], + "blocks": [ + { + "bbox": [ + 318, + 505, + 391, + 554 + ], + "lines": [ + { + "bbox": [ + 318, + 505, + 391, + 554 + ], + "spans": [ + { + "bbox": [ + 318, + 505, + 391, + 554 + ], + "type": "image", + "image_path": "8ddf130654105ff421c74eaa6bc175d1f7e1f67fa5d4a49338fda957ed70da93.jpg" + } + ] + } + ], + "index": 57, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 310, + 574, + 504, + 597 + ], + "lines": [ + { + "bbox": [ + 310, + 574, + 504, + 597 + ], + "spans": [ + { + "bbox": [ + 310, + 574, + 504, + 597 + ], + "type": "text", + "content": "Figure 78: Object Counting case that majority of VLMs provide correct results." 
+ } + ] + } + ], + "index": 63, + "angle": 0, + "type": "image_caption" + } + ], + "index": 57 + }, + { + "bbox": [ + 394, + 510, + 492, + 527 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 394, + 510, + 492, + 527 + ], + "spans": [ + { + "bbox": [ + 394, + 510, + 492, + 527 + ], + "type": "text", + "content": "How many cows have white faces in this image?" + } + ] + } + ], + "index": 58 + }, + { + "bbox": [ + 394, + 529, + 429, + 547 + ], + "type": "list", + "angle": 0, + "index": 61, + "blocks": [ + { + "bbox": [ + 394, + 529, + 429, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 394, + 529, + 429, + 536 + ], + "spans": [ + { + "bbox": [ + 394, + 529, + 429, + 536 + ], + "type": "text", + "content": "A:3 B:5" + } + ] + } + ], + "index": 59 + }, + { + "bbox": [ + 394, + 540, + 429, + 547 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 394, + 540, + 429, + 547 + ], + "spans": [ + { + "bbox": [ + 394, + 540, + 429, + 547 + ], + "type": "text", + "content": "C:2 D:4" + } + ] + } + ], + "index": 60 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 367, + 555, + 449, + 563 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 367, + 555, + 449, + 563 + ], + "spans": [ + { + "bbox": [ + 367, + 555, + 449, + 563 + ], + "type": "text", + "content": "93.75% (30/32) Models Correct" + } + ] + } + ], + "index": 62 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 72, + 183, + 84 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 72, + 183, + 84 + ], + "spans": [ + { + "bbox": [ + 105, + 72, + 183, + 84 + ], + "type": "text", + "content": "M.4 Easy Cases" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "text", + "content": "37" + } + ] + } + ], + "index": 64 
+ } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 36 + }, + { + "para_blocks": [ + { + "bbox": [ + 164, + 86, + 240, + 96 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 164, + 86, + 240, + 96 + ], + "spans": [ + { + "bbox": [ + 164, + 86, + 240, + 96 + ], + "type": "text", + "content": "Color Robustness" + } + ] + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 115, + 102, + 179, + 150 + ], + "blocks": [ + { + "bbox": [ + 115, + 102, + 179, + 150 + ], + "lines": [ + { + "bbox": [ + 115, + 102, + 179, + 150 + ], + "spans": [ + { + "bbox": [ + 115, + 102, + 179, + 150 + ], + "type": "image", + "image_path": "5ae941e58227d111affb45babe2997419cc487c90c60451ef3c8a66ea499df26.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 188, + 106, + 274, + 124 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 188, + 106, + 274, + 124 + ], + "spans": [ + { + "bbox": [ + 188, + 106, + 274, + 124 + ], + "type": "text", + "content": "How many surfboards are in the image?" 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 188, + 126, + 222, + 134 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 188, + 126, + 222, + 134 + ], + "spans": [ + { + "bbox": [ + 188, + 126, + 222, + 134 + ], + "type": "text", + "content": "A:0 B:1" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 188, + 136, + 223, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 188, + 136, + 223, + 144 + ], + "spans": [ + { + "bbox": [ + 188, + 136, + 223, + 144 + ], + "type": "text", + "content": "C:3 D:2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 272, + 137, + 291, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 272, + 137, + 291, + 144 + ], + "spans": [ + { + "bbox": [ + 272, + 137, + 291, + 144 + ], + "type": "text", + "content": "Ans: B" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 141, + 152, + 262, + 161 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 141, + 152, + 262, + 161 + ], + "spans": [ + { + "bbox": [ + 141, + 152, + 262, + 161 + ], + "type": "text", + "content": "96.88% (31/32) Model Predictions Unchanged" + } + ] + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 107, + 163, + 297, + 307 + ], + "blocks": [ + { + "bbox": [ + 107, + 163, + 297, + 307 + ], + "lines": [ + { + "bbox": [ + 107, + 163, + 297, + 307 + ], + "spans": [ + { + "bbox": [ + 107, + 163, + 297, + 307 + ], + "type": "image", + "image_path": "3572e92515871d9d01bdcccb23a43ae61d4e1f37446f28eca90df9ff3e009fd0.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 312, + 299, + 346 + ], + "lines": [ + { + "bbox": [ + 105, + 312, + 299, + 346 + ], + "spans": [ + { + "bbox": [ + 105, + 312, + 299, + 346 + ], + "type": "text", + "content": "Figure 80: Color Robustness case that majority of VLMs provide unchanged results over color variations in images." 
+ } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "text", + "content": "38" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 37 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 194, + 416, + 205 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 194, + 416, + 205 + ], + "spans": [ + { + "bbox": [ + 105, + 194, + 416, + 205 + ], + "type": "text", + "content": "We present samples cases that majority of VLMs reach the incorrect answers." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 162, + 224, + 242, + 236 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 162, + 224, + 242, + 236 + ], + "spans": [ + { + "bbox": [ + 162, + 224, + 242, + 236 + ], + "type": "text", + "content": "Color Recognition" + } + ] + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 113, + 239, + 174, + 289 + ], + "blocks": [ + { + "bbox": [ + 113, + 239, + 174, + 289 + ], + "lines": [ + { + "bbox": [ + 113, + 239, + 174, + 289 + ], + "spans": [ + { + "bbox": [ + 113, + 239, + 174, + 289 + ], + "type": "image", + "image_path": "f4dea86aed5a3b69495e73a8418f4187c7d69c35973c70930d7fbeb813bebd7c.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 308, + 299, + 331 + ], + "lines": [ + { + "bbox": [ + 105, + 308, + 299, + 331 + ], + "spans": [ + { + "bbox": [ + 105, + 308, + 299, + 331 + ], + "type": "text", + "content": "Figure 81: Color Recognition case that majority of VLMs provide incorrect results." 
+ } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 191, + 244, + 288, + 262 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 191, + 244, + 288, + 262 + ], + "spans": [ + { + "bbox": [ + 191, + 244, + 288, + 262 + ], + "type": "text", + "content": "What color of balloon is not present in this image?" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 192, + 264, + 253, + 281 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 192, + 264, + 246, + 271 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 192, + 264, + 246, + 271 + ], + "spans": [ + { + "bbox": [ + 192, + 264, + 246, + 271 + ], + "type": "text", + "content": "A:Yellow B:Red" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 192, + 274, + 253, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 192, + 274, + 253, + 281 + ], + "spans": [ + { + "bbox": [ + 192, + 274, + 253, + 281 + ], + "type": "text", + "content": "C:Green D:Orange" + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 270, + 275, + 290, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 270, + 275, + 290, + 281 + ], + "spans": [ + { + "bbox": [ + 270, + 275, + 290, + 281 + ], + "type": "text", + "content": "Ans: B" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 159, + 290, + 245, + 298 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 159, + 290, + 245, + 298 + ], + "spans": [ + { + "bbox": [ + 159, + 290, + 245, + 298 + ], + "type": "text", + "content": "81.25% (26/32) Models Incorrect" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 160, + 361, + 244, + 373 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 160, + 361, + 244, + 373 + ], + "spans": [ + { + "bbox": [ + 160, + 361, + 244, + 373 + ], + "type": "text", + "content": "Object Recognition" + } + ] + } + ], + "index": 11 + 
}, + { + "type": "image", + "bbox": [ + 112, + 377, + 187, + 426 + ], + "blocks": [ + { + "bbox": [ + 112, + 377, + 187, + 426 + ], + "lines": [ + { + "bbox": [ + 112, + 377, + 187, + 426 + ], + "spans": [ + { + "bbox": [ + 112, + 377, + 187, + 426 + ], + "type": "image", + "image_path": "c604546f1c6949ae3fda85b42ead50c4fdc739f20769821f286d365e3be8501c.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 447, + 299, + 470 + ], + "lines": [ + { + "bbox": [ + 105, + 447, + 299, + 470 + ], + "spans": [ + { + "bbox": [ + 105, + 447, + 299, + 470 + ], + "type": "text", + "content": "Figure 83: Object Recognition case that majority of VLMs provide incorrect results." + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + }, + { + "bbox": [ + 189, + 383, + 282, + 400 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 189, + 383, + 282, + 400 + ], + "spans": [ + { + "bbox": [ + 189, + 383, + 282, + 400 + ], + "type": "text", + "content": "Which state is not light pink in this image?" 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 189, + 403, + 230, + 410 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 189, + 403, + 230, + 410 + ], + "spans": [ + { + "bbox": [ + 189, + 403, + 230, + 410 + ], + "type": "text", + "content": "A:ID B:OK" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 189, + 413, + 233, + 419 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 189, + 413, + 233, + 419 + ], + "spans": [ + { + "bbox": [ + 189, + 413, + 233, + 419 + ], + "type": "text", + "content": "C:TX D:MO" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 269, + 413, + 289, + 419 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 269, + 413, + 289, + 419 + ], + "spans": [ + { + "bbox": [ + 269, + 413, + 289, + 419 + ], + "type": "text", + "content": "Ans: B" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 159, + 429, + 244, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 159, + 429, + 244, + 437 + ], + "spans": [ + { + "bbox": [ + 159, + 429, + 244, + 437 + ], + "type": "text", + "content": "93.75% (30/32) Models Incorrect" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 162, + 489, + 242, + 501 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 162, + 489, + 242, + 501 + ], + "spans": [ + { + "bbox": [ + 162, + 489, + 242, + 501 + ], + "type": "text", + "content": "Color Comparison" + } + ] + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 115, + 506, + 178, + 553 + ], + "blocks": [ + { + "bbox": [ + 115, + 506, + 178, + 553 + ], + "lines": [ + { + "bbox": [ + 115, + 506, + 178, + 553 + ], + "spans": [ + { + "bbox": [ + 115, + 506, + 178, + 553 + ], + "type": "image", + "image_path": "4490c2cd9c9e459ac009d48805da6dfe09196934a2f40d905b23b6a4a8734720.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 574, + 299, + 597 + ], + "lines": [ + { + "bbox": [ + 105, + 574, + 299, + 
597 + ], + "spans": [ + { + "bbox": [ + 105, + 574, + 299, + 597 + ], + "type": "text", + "content": "Figure 85: Color Comparison case that majority of VLMs provide incorrect results." + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_caption" + } + ], + "index": 20 + }, + { + "bbox": [ + 183, + 510, + 288, + 518 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 183, + 510, + 288, + 518 + ], + "spans": [ + { + "bbox": [ + 183, + 510, + 288, + 518 + ], + "type": "text", + "content": "Which species of wood has the darkest" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 184, + 520, + 255, + 526 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 184, + 520, + 255, + 526 + ], + "spans": [ + { + "bbox": [ + 184, + 520, + 255, + 526 + ], + "type": "text", + "content": "color overall in the image?" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 184, + 529, + 246, + 537 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 184, + 529, + 246, + 537 + ], + "spans": [ + { + "bbox": [ + 184, + 529, + 246, + 537 + ], + "type": "text", + "content": "A: Mohogany B: Maple" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 184, + 540, + 292, + 548 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 184, + 540, + 292, + 548 + ], + "spans": [ + { + "bbox": [ + 184, + 540, + 292, + 548 + ], + "type": "text", + "content": "C: Cherry D: Black Walnut Ans:A" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 159, + 556, + 245, + 564 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 159, + 556, + 245, + 564 + ], + "spans": [ + { + "bbox": [ + 159, + 556, + 245, + 564 + ], + "type": "text", + "content": "93.75% (30/32) Models Incorrect" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 165, + 616, + 239, + 628 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 165, + 616, + 239, + 628 + ], + "spans": [ + { + "bbox": [ + 165, + 616, + 239, + 628 + ], + 
"type": "text", + "content": "Object Counting" + } + ] + } + ], + "index": 27 + }, + { + "type": "image", + "bbox": [ + 107, + 632, + 197, + 677 + ], + "blocks": [ + { + "bbox": [ + 107, + 632, + 197, + 677 + ], + "lines": [ + { + "bbox": [ + 107, + 632, + 197, + 677 + ], + "spans": [ + { + "bbox": [ + 107, + 632, + 197, + 677 + ], + "type": "image", + "image_path": "68369a8c851cd837e725607be10b511eb165a17d79753d2e8fc937aa32ff033e.jpg" + } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 700, + 299, + 724 + ], + "lines": [ + { + "bbox": [ + 105, + 700, + 299, + 724 + ], + "spans": [ + { + "bbox": [ + 105, + 700, + 299, + 724 + ], + "type": "text", + "content": "Figure 87: Object Counting case that majority of VLMs provide incorrect results." + } + ] + } + ], + "index": 34, + "angle": 0, + "type": "image_caption" + } + ], + "index": 28 + }, + { + "bbox": [ + 201, + 637, + 283, + 645 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 201, + 637, + 283, + 645 + ], + "spans": [ + { + "bbox": [ + 201, + 637, + 283, + 645 + ], + "type": "text", + "content": "How many people are wearing" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 202, + 647, + 288, + 654 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 202, + 647, + 288, + 654 + ], + "spans": [ + { + "bbox": [ + 202, + 647, + 288, + 654 + ], + "type": "text", + "content": "red striped shirts in this image?" 
+ } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 202, + 656, + 268, + 664 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 202, + 656, + 268, + 664 + ], + "spans": [ + { + "bbox": [ + 202, + 656, + 268, + 664 + ], + "type": "text", + "content": "A:10 B:15 C:12" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 203, + 667, + 292, + 674 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 667, + 292, + 674 + ], + "spans": [ + { + "bbox": [ + 203, + 667, + 292, + 674 + ], + "type": "text", + "content": "D:14 E:13 Ans:B" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 159, + 683, + 244, + 691 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 159, + 683, + 244, + 691 + ], + "spans": [ + { + "bbox": [ + 159, + 683, + 244, + 691 + ], + "type": "text", + "content": "84.38% (27/32) Models Incorrect" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 371, + 224, + 445, + 235 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 371, + 224, + 445, + 235 + ], + "spans": [ + { + "bbox": [ + 371, + 224, + 445, + 235 + ], + "type": "text", + "content": "Color Extraction" + } + ] + } + ], + "index": 35 + }, + { + "type": "image", + "bbox": [ + 319, + 240, + 367, + 288 + ], + "blocks": [ + { + "bbox": [ + 319, + 240, + 367, + 288 + ], + "lines": [ + { + "bbox": [ + 319, + 240, + 367, + 288 + ], + "spans": [ + { + "bbox": [ + 319, + 240, + 367, + 288 + ], + "type": "image", + "image_path": "6ac99a22232582c2709764426a74b3929527f2c67331b182d48cc11147f98a7d.jpg" + } + ] + } + ], + "index": 36, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 310, + 308, + 504, + 342 + ], + "lines": [ + { + "bbox": [ + 310, + 308, + 504, + 342 + ], + "spans": [ + { + "bbox": [ + 310, + 308, + 504, + 342 + ], + "type": "text", + "content": "Figure 82: Color Extraction case that majority of VLMs provide incorrect results. Option backgrounds correspond to their color codes." 
+ } + ] + } + ], + "index": 44, + "angle": 0, + "type": "image_caption" + } + ], + "index": 36 + }, + { + "bbox": [ + 371, + 244, + 496, + 262 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 371, + 244, + 496, + 262 + ], + "spans": [ + { + "bbox": [ + 371, + 244, + 496, + 262 + ], + "type": "text", + "content": "What is the RGB value of the given color in the image?" + } + ] + } + ], + "index": 37 + }, + { + "bbox": [ + 372, + 264, + 417, + 272 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 372, + 264, + 417, + 272 + ], + "spans": [ + { + "bbox": [ + 372, + 264, + 417, + 272 + ], + "type": "text", + "content": "A: [121, 151, 181]" + } + ] + } + ], + "index": 38 + }, + { + "bbox": [ + 372, + 274, + 416, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 372, + 274, + 416, + 281 + ], + "spans": [ + { + "bbox": [ + 372, + 274, + 416, + 281 + ], + "type": "text", + "content": "C: [123, 150, 181]" + } + ] + } + ], + "index": 39 + }, + { + "bbox": [ + 421, + 264, + 460, + 272 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 421, + 264, + 460, + 272 + ], + "spans": [ + { + "bbox": [ + 421, + 264, + 460, + 272 + ], + "type": "text", + "content": "B: [55, 32, 102]" + } + ] + } + ], + "index": 40 + }, + { + "bbox": [ + 422, + 274, + 466, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 422, + 274, + 466, + 281 + ], + "spans": [ + { + "bbox": [ + 422, + 274, + 466, + 281 + ], + "type": "text", + "content": "D: [119, 150, 181]" + } + ] + } + ], + "index": 41 + }, + { + "bbox": [ + 476, + 275, + 495, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 476, + 275, + 495, + 281 + ], + "spans": [ + { + "bbox": [ + 476, + 275, + 495, + 281 + ], + "type": "text", + "content": "Ans: C" + } + ] + } + ], + "index": 42 + }, + { + "bbox": [ + 365, + 290, + 451, + 298 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 365, + 290, + 451, + 298 + 
], + "spans": [ + { + "bbox": [ + 365, + 290, + 451, + 298 + ], + "type": "text", + "content": "84.38% (27/32) Models Incorrect" + } + ] + } + ], + "index": 43 + }, + { + "bbox": [ + 370, + 361, + 445, + 373 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 370, + 361, + 445, + 373 + ], + "spans": [ + { + "bbox": [ + 370, + 361, + 445, + 373 + ], + "type": "text", + "content": "Color Proportion" + } + ] + } + ], + "index": 45 + }, + { + "type": "image", + "bbox": [ + 329, + 380, + 373, + 424 + ], + "blocks": [ + { + "bbox": [ + 329, + 380, + 373, + 424 + ], + "lines": [ + { + "bbox": [ + 329, + 380, + 373, + 424 + ], + "spans": [ + { + "bbox": [ + 329, + 380, + 373, + 424 + ], + "type": "image", + "image_path": "a2c419157f2bc41f0f9c9eaf839dda398140045d21b4e420b187173691dc537b.jpg" + } + ] + } + ], + "index": 46, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 310, + 447, + 506, + 470 + ], + "lines": [ + { + "bbox": [ + 310, + 447, + 506, + 470 + ], + "spans": [ + { + "bbox": [ + 310, + 447, + 506, + 470 + ], + "type": "text", + "content": "Figure 84: Color Proportion case that majority of VLMs provide incorrect results." + } + ] + } + ], + "index": 52, + "angle": 0, + "type": "image_caption" + } + ], + "index": 46 + }, + { + "bbox": [ + 392, + 384, + 484, + 402 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 392, + 384, + 484, + 402 + ], + "spans": [ + { + "bbox": [ + 392, + 384, + 484, + 402 + ], + "type": "text", + "content": "What color in the pie chart has the proportion closest to " + }, + { + "bbox": [ + 392, + 384, + 484, + 402 + ], + "type": "inline_equation", + "content": "20\\%" + }, + { + "bbox": [ + 392, + 384, + 484, + 402 + ], + "type": "text", + "content": "?" 
+ } + ] + } + ], + "index": 47 + }, + { + "bbox": [ + 392, + 404, + 456, + 411 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 392, + 404, + 456, + 411 + ], + "spans": [ + { + "bbox": [ + 392, + 404, + 456, + 411 + ], + "type": "text", + "content": "A: dark green B: purple" + } + ] + } + ], + "index": 48 + }, + { + "bbox": [ + 393, + 414, + 434, + 421 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 393, + 414, + 434, + 421 + ], + "spans": [ + { + "bbox": [ + 393, + 414, + 434, + 421 + ], + "type": "text", + "content": "C:orange" + } + ] + } + ], + "index": 49 + }, + { + "bbox": [ + 432, + 414, + 496, + 421 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 432, + 414, + 496, + 421 + ], + "spans": [ + { + "bbox": [ + 432, + 414, + 496, + 421 + ], + "type": "text", + "content": "D:light pink Ans:A" + } + ] + } + ], + "index": 50 + }, + { + "bbox": [ + 365, + 429, + 451, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 365, + 429, + 451, + 437 + ], + "spans": [ + { + "bbox": [ + 365, + 429, + 451, + 437 + ], + "type": "text", + "content": "87.50% (28/32) Models Incorrect" + } + ] + } + ], + "index": 51 + }, + { + "bbox": [ + 374, + 489, + 443, + 502 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 374, + 489, + 443, + 502 + ], + "spans": [ + { + "bbox": [ + 374, + 489, + 443, + 502 + ], + "type": "text", + "content": "Color Counting" + } + ] + } + ], + "index": 53 + }, + { + "type": "image", + "bbox": [ + 329, + 505, + 378, + 554 + ], + "blocks": [ + { + "bbox": [ + 329, + 505, + 378, + 554 + ], + "lines": [ + { + "bbox": [ + 329, + 505, + 378, + 554 + ], + "spans": [ + { + "bbox": [ + 329, + 505, + 378, + 554 + ], + "type": "image", + "image_path": "044a3e9390bc271d50f8b94636d4aed59065241b215be8b3b8301c6e10433923.jpg" + } + ] + } + ], + "index": 54, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 310, + 574, + 504, + 597 + ], + "lines": [ + { + "bbox": [ 
+ 310, + 574, + 504, + 597 + ], + "spans": [ + { + "bbox": [ + 310, + 574, + 504, + 597 + ], + "type": "text", + "content": "Figure 86: Color Counting case that majority of VLMs provide incorrect results." + } + ] + } + ], + "index": 60, + "angle": 0, + "type": "image_caption" + } + ], + "index": 54 + }, + { + "bbox": [ + 394, + 510, + 485, + 527 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 394, + 510, + 485, + 527 + ], + "spans": [ + { + "bbox": [ + 394, + 510, + 485, + 527 + ], + "type": "text", + "content": "How many colors are there in this image?" + } + ] + } + ], + "index": 55 + }, + { + "bbox": [ + 394, + 529, + 434, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 394, + 529, + 434, + 536 + ], + "spans": [ + { + "bbox": [ + 394, + 529, + 434, + 536 + ], + "type": "text", + "content": "A:10 B:11" + } + ] + } + ], + "index": 56 + }, + { + "bbox": [ + 394, + 540, + 435, + 547 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 394, + 540, + 435, + 547 + ], + "spans": [ + { + "bbox": [ + 394, + 540, + 435, + 547 + ], + "type": "text", + "content": "C:12 D:13" + } + ] + } + ], + "index": 57 + }, + { + "bbox": [ + 477, + 540, + 498, + 547 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 477, + 540, + 498, + 547 + ], + "spans": [ + { + "bbox": [ + 477, + 540, + 498, + 547 + ], + "type": "text", + "content": "Ans: A" + } + ] + } + ], + "index": 58 + }, + { + "bbox": [ + 365, + 556, + 451, + 563 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 365, + 556, + 451, + 563 + ], + "spans": [ + { + "bbox": [ + 365, + 556, + 451, + 563 + ], + "type": "text", + "content": "81.25% (26/32) Models Incorrect" + } + ] + } + ], + "index": 59 + }, + { + "bbox": [ + 378, + 616, + 438, + 627 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 378, + 616, + 438, + 627 + ], + "spans": [ + { + "bbox": [ + 378, + 616, + 438, + 627 + ], + "type": "text", + "content": 
"Color Illusion" + } + ] + } + ], + "index": 61 + }, + { + "type": "image", + "bbox": [ + 364, + 635, + 391, + 651 + ], + "blocks": [ + { + "bbox": [ + 364, + 635, + 391, + 651 + ], + "lines": [ + { + "bbox": [ + 364, + 635, + 391, + 651 + ], + "spans": [ + { + "bbox": [ + 364, + 635, + 391, + 651 + ], + "type": "image", + "image_path": "1fbc43e9ddcc3682c48ad4d4bda6b0089d535e6580050c40ed07dfb19a03244f.jpg" + } + ] + } + ], + "index": 62, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 310, + 700, + 505, + 724 + ], + "lines": [ + { + "bbox": [ + 310, + 700, + 505, + 724 + ], + "spans": [ + { + "bbox": [ + 310, + 700, + 505, + 724 + ], + "type": "text", + "content": "Figure 88: Color Illusion case that majority of VLMs provide incorrect results." + } + ] + } + ], + "index": 67, + "angle": 0, + "type": "image_caption" + } + ], + "index": 62 + }, + { + "bbox": [ + 315, + 651, + 499, + 670 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 651, + 499, + 670 + ], + "spans": [ + { + "bbox": [ + 315, + 651, + 499, + 670 + ], + "type": "text", + "content": "Which circles has the darkest color? The circles are numbered left to right starting from 1." 
+ } + ] + } + ], + "index": 63 + }, + { + "bbox": [ + 316, + 670, + 403, + 678 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 670, + 403, + 678 + ], + "spans": [ + { + "bbox": [ + 316, + 670, + 403, + 678 + ], + "type": "text", + "content": "A: All the same B: 1 C: 2 D: 3" + } + ] + } + ], + "index": 64 + }, + { + "bbox": [ + 478, + 672, + 498, + 678 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 478, + 672, + 498, + 678 + ], + "spans": [ + { + "bbox": [ + 478, + 672, + 498, + 678 + ], + "type": "text", + "content": "Ans: A" + } + ] + } + ], + "index": 65 + }, + { + "bbox": [ + 365, + 683, + 451, + 691 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 365, + 683, + 451, + 691 + ], + "spans": [ + { + "bbox": [ + 365, + 683, + 451, + 691 + ], + "type": "text", + "content": "84.38% (27/32) Models Incorrect" + } + ] + } + ], + "index": 66 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 72, + 197, + 83 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 72, + 197, + 83 + ], + "spans": [ + { + "bbox": [ + 105, + 72, + 197, + 83 + ], + "type": "text", + "content": "M.5 Difficult Cases" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "text", + "content": "39" + } + ] + } + ], + "index": 68 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 38 + }, + { + "para_blocks": [ + { + "bbox": [ + 169, + 86, + 234, + 98 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 169, + 86, + 234, + 98 + ], + "spans": [ + { + "bbox": [ + 169, + 86, + 234, + 98 + ], + "type": "text", + "content": "Color Mimicry" + } + ] + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 115, + 102, + 181, + 151 + ], + "blocks": [ + { + "bbox": [ + 115, + 102, + 181, + 
151 + ], + "lines": [ + { + "bbox": [ + 115, + 102, + 181, + 151 + ], + "spans": [ + { + "bbox": [ + 115, + 102, + 181, + 151 + ], + "type": "image", + "image_path": "98144762f3decf4a41b12421a071fae0f2efb49798648fc249f128248a04379b.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 171, + 299, + 194 + ], + "lines": [ + { + "bbox": [ + 105, + 171, + 299, + 194 + ], + "spans": [ + { + "bbox": [ + 105, + 171, + 299, + 194 + ], + "type": "text", + "content": "Figure 89: Color Mimicry case that majority of VLMs provide incorrect results." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 188, + 106, + 279, + 115 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 188, + 106, + 279, + 115 + ], + "spans": [ + { + "bbox": [ + 188, + 106, + 279, + 115 + ], + "type": "text", + "content": "How many leaves in this images?" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 188, + 126, + 223, + 134 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 188, + 126, + 223, + 134 + ], + "spans": [ + { + "bbox": [ + 188, + 126, + 223, + 134 + ], + "type": "text", + "content": "A:1 B:2" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 188, + 136, + 223, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 188, + 136, + 223, + 144 + ], + "spans": [ + { + "bbox": [ + 188, + 136, + 223, + 144 + ], + "type": "text", + "content": "C:3 D:0" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 272, + 137, + 292, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 272, + 137, + 292, + 144 + ], + "spans": [ + { + "bbox": [ + 272, + 137, + 292, + 144 + ], + "type": "text", + "content": "Ans: D" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 159, + 153, + 246, + 160 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 159, + 153, + 246, + 160 + ], + "spans": [ + { + "bbox": [ + 
159, + 153, + 246, + 160 + ], + "type": "text", + "content": "93.75% (30/32) Models Incorrect" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 164, + 213, + 240, + 224 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 164, + 213, + 240, + 224 + ], + "spans": [ + { + "bbox": [ + 164, + 213, + 240, + 224 + ], + "type": "text", + "content": "Color Robustness" + } + ] + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 113, + 228, + 178, + 277 + ], + "blocks": [ + { + "bbox": [ + 113, + 228, + 178, + 277 + ], + "lines": [ + { + "bbox": [ + 113, + 228, + 178, + 277 + ], + "spans": [ + { + "bbox": [ + 113, + 228, + 178, + 277 + ], + "type": "image", + "image_path": "77c50998e72c23283fffdda7e005402e9f20f449948f2a3e900b1576dd0a4670.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 188, + 234, + 288, + 242 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 188, + 234, + 288, + 242 + ], + "spans": [ + { + "bbox": [ + 188, + 234, + 288, + 242 + ], + "type": "text", + "content": "How many oranges are in the image?" 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 188, + 254, + 223, + 261 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 188, + 254, + 223, + 261 + ], + "spans": [ + { + "bbox": [ + 188, + 254, + 223, + 261 + ], + "type": "text", + "content": "A:3 B:2" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 188, + 263, + 223, + 270 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 188, + 263, + 223, + 270 + ], + "spans": [ + { + "bbox": [ + 188, + 263, + 223, + 270 + ], + "type": "text", + "content": "C:0 D:1" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 270, + 264, + 290, + 271 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 270, + 264, + 290, + 271 + ], + "spans": [ + { + "bbox": [ + 270, + 264, + 290, + 271 + ], + "type": "text", + "content": "Ans: D" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 146, + 280, + 257, + 288 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 146, + 280, + 257, + 288 + ], + "spans": [ + { + "bbox": [ + 146, + 280, + 257, + 288 + ], + "type": "text", + "content": "87.5% (28/32) Model Predictions Changed" + } + ] + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 107, + 291, + 171, + 341 + ], + "blocks": [ + { + "bbox": [ + 107, + 291, + 171, + 341 + ], + "lines": [ + { + "bbox": [ + 107, + 291, + 171, + 341 + ], + "spans": [ + { + "bbox": [ + 107, + 291, + 171, + 341 + ], + "type": "image", + "image_path": "47755c50e216e38cba801eee7b315dcd85721a9a1c2d99185a32993ea1e1cd99.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 171, + 291, + 233, + 341 + ], + "blocks": [ + { + "bbox": [ + 171, + 291, + 233, + 341 + ], + "lines": [ + { + "bbox": [ + 171, + 291, + 233, + 341 + ], + "spans": [ + { + "bbox": [ + 171, + 291, + 233, + 341 + ], + "type": "image", + "image_path": "84c2db9d5a80d263845b18c2ee3ce2e1b09547836b3991b115293dfde12d4802.jpg" + } + 
] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 233, + 291, + 296, + 341 + ], + "blocks": [ + { + "bbox": [ + 233, + 291, + 296, + 341 + ], + "lines": [ + { + "bbox": [ + 233, + 291, + 296, + 341 + ], + "spans": [ + { + "bbox": [ + 233, + 291, + 296, + 341 + ], + "type": "image", + "image_path": "72cc33c5424e4708aed7e08b3feb5e2efc2bd986d12dd679390a04c8a34eee34.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 107, + 341, + 171, + 387 + ], + "blocks": [ + { + "bbox": [ + 107, + 341, + 171, + 387 + ], + "lines": [ + { + "bbox": [ + 107, + 341, + 171, + 387 + ], + "spans": [ + { + "bbox": [ + 107, + 341, + 171, + 387 + ], + "type": "image", + "image_path": "404382bed045c853b6acbb325ddab0c9b4b919d9a1394ebeb299c44ae8243b68.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 171, + 341, + 233, + 387 + ], + "blocks": [ + { + "bbox": [ + 171, + 341, + 233, + 387 + ], + "lines": [ + { + "bbox": [ + 171, + 341, + 233, + 387 + ], + "spans": [ + { + "bbox": [ + 171, + 341, + 233, + 387 + ], + "type": "image", + "image_path": "842554f848f7ed3aa48a1a5f8d02ec7235d43967ba88c0a851be5a3e459001ce.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 233, + 341, + 296, + 387 + ], + "blocks": [ + { + "bbox": [ + 233, + 341, + 296, + 387 + ], + "lines": [ + { + "bbox": [ + 233, + 341, + 296, + 387 + ], + "spans": [ + { + "bbox": [ + 233, + 341, + 296, + 387 + ], + "type": "image", + "image_path": "9214e9d649999303fdb7b50dea46807402e5029545857d29a7aa3dd11583cc07.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 107, + 387, + 171, + 434 + ], + "blocks": [ + { + 
"bbox": [ + 107, + 387, + 171, + 434 + ], + "lines": [ + { + "bbox": [ + 107, + 387, + 171, + 434 + ], + "spans": [ + { + "bbox": [ + 107, + 387, + 171, + 434 + ], + "type": "image", + "image_path": "d3ebda281ef87ad9b63c21a331d2dc3fdec78569cd48c9b24e7942452278e4c8.jpg" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 440, + 299, + 474 + ], + "lines": [ + { + "bbox": [ + 105, + 440, + 299, + 474 + ], + "spans": [ + { + "bbox": [ + 105, + 440, + 299, + 474 + ], + "type": "text", + "content": "Figure 91: Color Robustness case that majority of VLMs change the answers over color variations in images." + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_caption" + } + ], + "index": 21 + }, + { + "type": "image", + "bbox": [ + 171, + 387, + 233, + 434 + ], + "blocks": [ + { + "bbox": [ + 171, + 387, + 233, + 434 + ], + "lines": [ + { + "bbox": [ + 171, + 387, + 233, + 434 + ], + "spans": [ + { + "bbox": [ + 171, + 387, + 233, + 434 + ], + "type": "image", + "image_path": "75580cbd46f4eb6223dad32405191521ace1a32d6bd2a48373612828dc35e03d.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + } + ], + "index": 22 + }, + { + "type": "image", + "bbox": [ + 233, + 387, + 296, + 434 + ], + "blocks": [ + { + "bbox": [ + 233, + 387, + 296, + 434 + ], + "lines": [ + { + "bbox": [ + 233, + 387, + 296, + 434 + ], + "spans": [ + { + "bbox": [ + 233, + 387, + 296, + 434 + ], + "type": "image", + "image_path": "0c86a9d883b612687f8ff4b291891c2f0c0d2c22661e8d1c674bee668f20a4af.jpg" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_body" + } + ], + "index": 23 + }, + { + "bbox": [ + 373, + 87, + 443, + 97 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 373, + 87, + 443, + 97 + ], + "spans": [ + { + "bbox": [ + 373, + 87, + 443, + 97 + ], + "type": "text", + "content": "Color Blindness" + } + ] + } + ], + "index": 25 + }, + { + "type": "image", + "bbox": [ + 327, + 102, + 373, 
+ 148 + ], + "blocks": [ + { + "bbox": [ + 327, + 102, + 373, + 148 + ], + "lines": [ + { + "bbox": [ + 327, + 102, + 373, + 148 + ], + "spans": [ + { + "bbox": [ + 327, + 102, + 373, + 148 + ], + "type": "image", + "image_path": "8222b662278c709963b95dccbd5a7c7773900405a26a0a11bdf9501133024074.jpg" + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 310, + 171, + 504, + 194 + ], + "lines": [ + { + "bbox": [ + 310, + 171, + 504, + 194 + ], + "spans": [ + { + "bbox": [ + 310, + 171, + 504, + 194 + ], + "type": "text", + "content": "Figure 90: Color Blindness case that majority of VLMs provide incorrect results." + } + ] + } + ], + "index": 34, + "angle": 0, + "type": "image_caption" + } + ], + "index": 26 + }, + { + "bbox": [ + 394, + 106, + 488, + 114 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 394, + 106, + 488, + 114 + ], + "spans": [ + { + "bbox": [ + 394, + 106, + 488, + 114 + ], + "type": "text", + "content": "What is the number in the center of" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 394, + 116, + 427, + 124 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 394, + 116, + 427, + 124 + ], + "spans": [ + { + "bbox": [ + 394, + 116, + 427, + 124 + ], + "type": "text", + "content": "this image?" 
+ } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 394, + 126, + 407, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 394, + 126, + 407, + 133 + ], + "spans": [ + { + "bbox": [ + 394, + 126, + 407, + 133 + ], + "type": "text", + "content": "A:2" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 394, + 136, + 413, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 394, + 136, + 413, + 144 + ], + "spans": [ + { + "bbox": [ + 394, + 136, + 413, + 144 + ], + "type": "text", + "content": "C:22" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 420, + 137, + 435, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 420, + 137, + 435, + 144 + ], + "spans": [ + { + "bbox": [ + 420, + 137, + 435, + 144 + ], + "type": "text", + "content": "D:26" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 477, + 137, + 497, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 477, + 137, + 497, + 144 + ], + "spans": [ + { + "bbox": [ + 477, + 137, + 497, + 144 + ], + "type": "text", + "content": "Ans: C" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 365, + 153, + 451, + 160 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 365, + 153, + 451, + 160 + ], + "spans": [ + { + "bbox": [ + 365, + 153, + 451, + 160 + ], + "type": "text", + "content": "87.50% (28/32) Models Incorrect" + } + ] + } + ], + "index": 33 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "text", + "content": "40" + } + ] + } + ], + "index": 35 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 39 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/data/2025/2504_11xxx/2504.11468/29f6f006-1646-44f2-b6fb-f930a57c3738_content_list.json 
b/data/2025/2504_11xxx/2504.11468/29f6f006-1646-44f2-b6fb-f930a57c3738_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..bf570ba9b83ba693fb9393388cfe29ac11c97416 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/29f6f006-1646-44f2-b6fb-f930a57c3738_content_list.json @@ -0,0 +1,4211 @@ +[ + { + "type": "text", + "text": "SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models", + "text_level": 1, + "bbox": [ + 171, + 98, + 782, + 142 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Hardy Chen $^{2*}$ , Haoqin Tu $^{1*}$ , Fali Wang $^{3}$ , Hui Liu $^{4}$ , Xianfeng Tang $^{4}$ , Xinya Du $^{2}$ , Yuyin Zhou $^{1}$ , Cihang Xie $^{1}$", + "bbox": [ + 179, + 165, + 789, + 202 + ], + "page_idx": 0 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1 University of California, Santa Cruz 2 University of Texas at Dallas", + "3 The Pennsylvania State University 4 Amazon Research" + ], + "bbox": [ + 181, + 203, + 694, + 236 + ], + "page_idx": 0 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "Project Page: https://ucsc-vlaa.github.io/VLAA-Thinking/", + "7B Model: https://huggingface.co/UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-7B", + "3B Model: https://huggingface.co/UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B", + "Dataset: https://huggingface.co/datasets/UCSC-VLAA/VLAA-Thinkin" + ], + "bbox": [ + 227, + 248, + 753, + 311 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 459, + 345, + 539, + 363 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "This work revisits the dominant supervised fine-tuning (SFT) then reinforcement learning (RL) paradigm for training Large Vision-Language Models (LVLMs), and reveals a key finding: SFT can significantly undermine subsequent RL by inducing \"pseudo reasoning paths\" imitated from expert models. 
While these paths may resemble the native reasoning paths of RL models, they often involve prolonged, hesitant, less informative steps, and incorrect reasoning. To systematically study this effect, we introduce VLAA-Thinking, a new multimodal dataset designed to support reasoning in LVLMs. Constructed via a six-step pipeline involving captioning, reasoning distillation, answer rewrite and verification, VLAA-Thinkings comprises high-quality, step-by-step visual reasoning traces for SFT, along with a more challenging RL split from the same data source. Using this dataset, we conduct extensive experiments comparing SFT, RL and their combinations. Results show that while SFT helps models learn reasoning formats, it often locks aligned models into imitative, rigid reasoning modes that impede further learning. In contrast, building on the Group Relative Policy Optimization (GRPO) with a novel mixed reward module integrating both perception and cognition signals, our RL approach fosters more genuine, adaptive reasoning behavior. Notably, our model VLAA-Thinker, based on Qwen2.5VL 3B, achieves top-1 performance on Open LMM Reasoning Leaderboard1 among 4B scale LVLMs, surpassing the previous state-of-the-art by $1.8\\%$ . We hope our findings provide valuable insights in developing reasoning-capable LVLMs and can inform future research in this area.", + "bbox": [ + 228, + 380, + 769, + 704 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 171, + 744, + 346, + 762 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Large Language Models (LLMs) with strong reasoning capability have recently gained wide attention with the emergence of OpenAI's o1/o3 and Deepseek-R1 (Guo et al., 2025; Jaech et al., 2024). A common practice to empower models with reasoning abilities comprises two steps: supervised fine-tuning (SFT) on reasoning data, followed by reinforcement learning (RL) to further boost performance. 
This successful paradigm has inspired efforts to extend these strengths beyond textual domains to Large Vision-Language Models (LVLMs) (Peng et al., 2025; Chen et al., 2025a; Deng et al., 2025b; Shen et al., 2025; Yang et al., 2025b).", + "bbox": [ + 169, + 782, + 826, + 883 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "Preprint. Under review.", + "bbox": [ + 171, + 32, + 346, + 47 + ], + "page_idx": 0 + }, + { + "type": "aside_text", + "text": "arXiv:2504.11468v1 [cs.CL] 10 Apr 2025", + "bbox": [ + 22, + 265, + 60, + 707 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*Equal contribution.", + "bbox": [ + 194, + 896, + 331, + 910 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "1https://huggingface.co/spaces/opencompass/Open_LMM_Reasoning_Leaderboard", + "bbox": [ + 194, + 910, + 728, + 922 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "1", + "bbox": [ + 493, + 948, + 503, + 959 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/26a82980fa36ddddb7bc9fae55f5e05aa8597f3be6cb682495bfb84ebe497bc9.jpg", + "image_caption": [ + "Figure 1: Examples from LVLMs trained with different strategies for reasoning Left: response from a model trained with SFT, showing pseudo reasoning traces and a number of pseudo self-reflective cues (i.e., aha-moments) imitated from R1. Right: response from a model trained with RL, showing native reasoning ability and authentic aha-moments emerged from RL training. Wrong reasoning steps are colored red and aha-moments are highlighted." + ], + "image_footnote": [], + "bbox": [ + 173, + 77, + 823, + 262 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this work, we take a step further and examine whether the widely adopted \"SFT then RL\" paradigm similarly benefits the development of reasoning-capable LVLMs. Specifically, we ask: 1) What are the distinct effects of SFT and RL in multimodal reasoning? 
and 2) Is this two-stage paradigm truly necessary for reasoning in LVLMs? To systematically explore these questions, we curate VLAA-Thinkinq, the first comprehensive and high-quality image-text reasoning dataset explicitly designed to support both SFT and RL. Unlike prior datasets, VLAA-Thinkinq includes detailed, step-by-step reasoning traces derived from the R1-style \"think-then-speak\" intermediate reasoning. We construct a dedicated SFT split featuring multimodal chain-of-thought (CoT) examples suitable for visual instruction tuning, alongside a more challenging RL split curated from the same source encourage deeper and more adaptive reasoning behaviors. To effectively transfer reasoning capabilities from text-only models to the multimodal domain, we construct our dataset through a six-stage pipeline: metadata collection, image captioning, R1-based distillation, answer rewriting, verification, and split curation. Specifically, we input image captions and visual questions into DeepSeek-R1 to generate initial reasoning traces. These outputs are then rewritten for improved fluency and verified for correctness using a GPT-based verifier, resulting in high-quality multimodal reasoning dataset for SFT and RL.", + "bbox": [ + 169, + 381, + 826, + 619 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Next, we carefully ablate the role of SFT, RL and their combinations in multimodal reasoning using our VLAA-Thinking dataset. To better understand the role of SFT, we perform a detailed analysis, systematically examining the impact of SFT data type (e.g., with and without the self-reflective \"aha moments\"), dataset scale, and model capacity. To explore the potential of RL in the vision-language context, we design a novel mixed reward function within the Group Relative Policy Optimization (GRPO) (Shao et al., 2024) framework that involves both perception and cognition rewards to incentivize the model to produce well-reasoned answers. 
Specifically, our mixed reward signal blends 2 types of reward with 5 types of functions. For rule-based questions, there are functions for digit, multiple-choice, math and bounding box outputs. For open-ended questions, we adopt a competent reward model, XComposer-2.5-RM (Zang et al., 2025), along with a reference-based reward method to score an answer. We then closely investigate the effects of different reward functions, base models, and the interaction between SFT and GRPO to further optimize reasoning capabilities.", + "bbox": [ + 169, + 625, + 826, + 806 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Our extensive experiments comparing SFT and RL reveal several noteworthy insights. First, we probe the contribution of SFT and RL in multimodal reasoning: while SFT improves performance on standard tasks over the base model, it falls short in enhancing complex reasoning. Merely imitating an expert's thinking through SFT often induces \"pseudo reasoning paths\", a superficial reasoning pattern which may contain \"pseudo aha moments\" (superficial self-reflective cues), as illustrated in Figure 1. We show that these imitated reasoning patterns can hinder genuine reasoning advancement, i.e., $47\\%$ relative performance drop on 7B models. This observation is also in line with recent studies highlighting the need for", + "bbox": [ + 169, + 811, + 826, + 925 + ], + "page_idx": 1 + }, + { + "type": "header", + "text": "Preprint. Under review.", + "bbox": [ + 173, + 32, + 346, + 47 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 493, + 946, + 504, + 959 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/cbb72006da68d5320a45591753be2b14afba17e28ba4dac974806df48c4a4cbc.jpg", + "image_caption": [ + "Figure 2: Data generation pipeline. We first generate initial reasoning traces by feeding detailed captions and visual questions into DeepSeek-R1. 
These outputs are then rewritten for improved fluency and verified for correctness using a GPT-based verifier. The resulting data is split into VLAA-Thinking-SFT and VLAA-Thinking-RL." + ], + "image_footnote": [], + "bbox": [ + 174, + 75, + 823, + 247 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "feedback and exploration signals to drive advanced reasoning behaviors (Peng et al., 2025). Additionally, our ablations show that for rule-based rewards, math and multiple-choice are more beneficial than others, and that a combination of both rule-based and open-ended rewards yields the best performance.", + "bbox": [ + 169, + 342, + 823, + 400 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "While prior work suggests that SFT followed by RL in LVLMs offers the best of both worlds (Guo et al., 2025; Yang et al., 2025b; Deng et al., 2025b)—first mimicking good reasoning format, then refining via RL feedback, we find that applying SFT before GRPO hurts performance on aligned models, with an average $12.7\\%$ drop, and even a smaller scale SFT leads to a similar decline. Regarding model size, larger models cannot immune from the degeneration brought by SFT, as 7B models share almost the same performance drop with their smaller counterparts. Finally, examining the training procedure, we observe little correlation between response length, reward, and performance—SFT-ed models get higher initial rewards and longer response yet underperform RL-trained ones, contrasting with the previous observation that better models usually produce longer answers with higher RL reward (Guo et al., 2025; Peng et al., 2025).", + "bbox": [ + 169, + 404, + 826, + 559 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To summarize, while SFT helps unaligned models follow instructions, it limits exploration during RL by promoting imitative reasoning. In contrast, learning directly from reward signals yields more effective and adaptable thinking behavior. 
Empirically, direct RL proves superior. Our model, VLAA-Thinker-Qwen2.5VL-3B, achieves the top-1 performance on the Open LMM Reasoning Leaderboard among 4B-scale LVLMs, surpassing the previous state-of-the-art by $1.8\\%$ . Our case study further emphasizes these gains with more concise, effective reasoning traces presented in model answers.", + "bbox": [ + 169, + 564, + 826, + 662 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2 The VLAA-Thinking Dataset", + "text_level": 1, + "bbox": [ + 171, + 694, + 500, + 713 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To systematically evaluate the \"SFT then RL\" paradigm for developing reasoning capabilities in LVLMs, we construct VLAA-Thinking, a dataset that consists of two parts: 1) VLAA-Thinking-SFT which captures step-by-step reasoning grounded in visual inputs for SFT, and 2) VLAA-Thinking-RL which contains challenging samples designed specifically for RL. Our data generation pipeline is designed to transfer reasoning capabilities from a powerful text-only model to the multimodal domain through a structured, multi-stage process. The entire pipeline, as illustrated in Figure 2, consists of six key components:", + "bbox": [ + 169, + 733, + 826, + 833 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "#1: Metadata Collection We collect metadata from 9 vision-language datasets featuring either closed- or open-ended questions. Specifically, we sample data containing unique images from CLEVR-Math (Lindström & Abraham, 2022), Math PUMA (Zhuang et al., 2024), ArxivQA (Li et al., 2024a), DocVQA (Mathew et al., 2021), VizWiz (Gurari et al., 2018), and ALLaVA (Chen et al., 2024a), and process them through our complete data pipeline. In addition, we directly adopt COCO and VisualGenome data from LLaVA-CoT (Xu et al.,", + "bbox": [ + 169, + 839, + 826, + 925 + ], + "page_idx": 2 + }, + { + "type": "header", + "text": "Preprint. 
Under review.", + "bbox": [ + 173, + 32, + 346, + 47 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 493, + 946, + 503, + 959 + ], + "page_idx": 2 + }, + { + "type": "table", + "img_path": "images/b3620181fc95c9bc02765935a760c35c491f17445af297296fb814b062cdd344.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
NameData Type#Ori.#Pipeline#Final SFT#Final RL
Collected from Distilling R1
CLEVR-MathClosed-end35,00028,0185,9232,000
GeoQA170KClosed-end---6,499
Math PUMAClosed-end30,00026,67219,2586,696
ArxivQAClosed-end54,39951,34834,6041,000
DocVQAClosed-end10,1948,2064,8971,000
VizWizClosed-end20,5236,5284,2661,000
ALLaVA-LAIONOpen-end47,06618,12310,4963,000
Collected from LLaVA-CoT
COCOClosed-end3,0003,0008,7272,000
VisualGenomeClosed-end3,0003,00038,2422,000
TotalClosed- & Open-end203,182144,895126,41325,195
", + "bbox": [ + 176, + 75, + 820, + 282 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Table 1: Data statistics of VLAA-Thinking. We present the original volume of metadata (#Ori.), the data size after the distillation pipeline (#Pipeline), the size of sampled examples for SFT (#Final SFT) and RL (#Final RL), respectively. Note that we only use GeoQA170K with verifiable answers for the RL split.", + "bbox": [ + 169, + 292, + 823, + 349 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "2024). An exception is GeoQA170K (Gao et al., 2023), which we include only in the RL split due to persistent hallucination issues during captioning. Detailed statistics are in Table 1.", + "bbox": [ + 169, + 373, + 823, + 402 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "#2: Visual Input and Additional Information Each sample begins with an image, question, and its corresponding answer. To bridge the gap between the visual modality and language reasoning, we resort to GPT-4o to generate a detailed image caption describing the content in structured and semantically rich language (detailed prompts in Appendix A.1). During this process, we take full advantage of the provided knowledge in the data beyond just the GPT captions. In detail, we provide these dataset-specific information: (1) CLEVR-Math: Instructions for synthesizing the image from CLEVR (Johnson et al., 2017); (2) Math PUMA: Textual description of math problems in the image from the dataset itself. (3) ALLaVA-LAION: Fine-grained and verified GPT-4V captions from the original dataset.", + "bbox": [ + 169, + 409, + 826, + 536 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "#3: Reasoning Answer Distillation We utilize a strong text-only reasoning model: DeepSeek-R1 to generate thinking rationale and final answers. The model is provided with the image caption, the visual question, and additional information from certain datasets. 
It responds using a structured reasoning format that is between and tags and contains a sequence of logical steps leading to the final answer.", + "bbox": [ + 169, + 540, + 826, + 612 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "#4: Answer and Rewriting To enhance consistency and eliminate modality-specific artifacts, the raw reasoning answers generated by R1 are passed through a rewriting module (i.e., GPT-3.5-turbo (Brown et al., 2020) in our experiment). This module removes unnecessary phrases (e.g., references to \"caption\"), and ensures the answer adheres to a clean, instruction-following format based on the image. We further filter out samples with the sentence length gap larger than 15 words to ensure minimum modifications in this process.", + "bbox": [ + 169, + 617, + 826, + 702 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "#5: Automated Verification To assess whether the generated reasoning answers is correct regarding the groundtruth answer, we implement an automated verifier. This verifier compares the rewritten reasoning answer to the groundtruth of the visual question, determining whether the outputs are correct or incorrect. Only the examples that are verified as correct are retained as the final training data.", + "bbox": [ + 169, + 708, + 826, + 779 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "#6: Curating Splits for SFT and RL The last step of our data generation pipeline is to curate two non-overlapped training sets for SFT and RL, respectively. Inspired by Chu et al. (2025) which finds that RL is particularly effective in encouraging deeper reasoning on challenging cases, we aim to select more challenging samples for the RL split. To achieve this, we propose using the presence of self-reflective cues (i.e., the \"aha moments\") in the distilled answers as an indicator of a sample's difficulty level (details are in Appendix A.2). 
For the SFT split, we exclude samples with \"aha moments\", as such samples may be too complex to fully imitate through finetuning. On the other hand, the harder examples with \"aha moments\" form the RL split, on which reward-driven learning may be better suited to elicit meaningful reflection.", + "bbox": [ + 169, + 784, + 826, + 925 + ], + "page_idx": 3 + }, + { + "type": "header", + "text": "Preprint. Under review.", + "bbox": [ + 173, + 32, + 346, + 47 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 493, + 948, + 504, + 959 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Following these steps, our dataset adheres to the format {image, question, reasoning, answer}, with reasoning and answer generated by DeepSeek-R1. We construct a high-quality multimodal reasoning dataset with 126,413 samples for SFT and 25,195 samples for RL.", + "bbox": [ + 169, + 103, + 826, + 148 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3 Investigating The Role of SFT for Multimodal Reasoning", + "text_level": 1, + "bbox": [ + 169, + 176, + 825, + 199 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "SFT has become the de-facto approach for training LLMs. Recent studies aim to extend the strengths of SFT to empower LVLMs with reasoning abilities by training on specially formatted data. Unlike prior methods that incorporate standalone textual descriptions of images (Xu et al., 2024), this direct strategy enables the model to develop grammatically coherent reasoning abilities, allowing it to \"think before speak.\" In recent vision-language reasoning systems, there is a notable trend of complementing or even replacing SFT with RL to enhance complex reasoning abilities (Peng et al., 2025; Deng et al., 2025b). We follow this line and take it further by probing the underlying cause of this shift. 
Our finding suggests that self-reflection thinking (\"aha moments\") from the SFT process is overloaded with excessive and irrelevant reasoning, becomes what we call \"pseudo aha moments\" and ultimately hurts performance. In this section, we explore 1) the model perform when SFT-ed on data with aha-moments and 2) the effect of SFT data size to model performance.", + "bbox": [ + 169, + 205, + 826, + 375 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.1 Experiment Setup", + "text_level": 1, + "bbox": [ + 171, + 388, + 382, + 407 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To investigate the effect of SFT training with aha-moments, we collect the distilled VQA pairs whose distilled answers contain aha-moments, totaling 55K samples. To study the effect of SFT with different sizes of training sets, we use perplexity (PPL) filtering to obtain a smaller SFT dataset. Specifically, we compute the PPL score of each answer in VLAA-Thinking-SFT-126K using Qwen2-VL-2B and Qwen2.5-VL-3B, and sort all samples by their average PPL scores over the two models. We keep the samples with high PPLs to obtain a total of 25K SFT samples, as these harder examples push models to learn more effectively and efficiently (Ankner et al., 2024; Li et al., 2024b).", + "bbox": [ + 169, + 421, + 826, + 532 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We select four models for training: Qwen2VL (2B and 7B)2, Qwen2.5VL (3B and 7B). Each model is trained with a batch size of 128 and their vision encoder frozen. We evaluate model performance with VLMEvalKit (Duan et al., 2024) on 6 math reasoning benchmarks hosted in Open LMM Reasoning Leaderboard, which contains 6 challenging math reasoning benchmarks including MathVista (Lu et al., 2024), MathVision (Wang et al., 2024b), MathVerse (Zhang et al., 2024), DynaMath (Zou et al., 2024), WeMath (Qiao et al., 2024), LogicVista (Xiao et al., 2024). 
We present the percentage of relative performance drop of different models in Figure 3. Detailed training and evaluation setup are in Appendix B.", + "bbox": [ + 169, + 541, + 828, + 656 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.2 Findings", + "text_level": 1, + "bbox": [ + 171, + 670, + 302, + 690 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "SFT with Aha Moments Degrades Performance. We present results for the Qwen-2.5-VL-3B model trained under three different settings using our SFT data in Table 2. Somewhat unexpectedly, the model fine-tuned on 55K examples containing the aha moment performs significantly worse than the base model, with an average drop of $10.5\\%$ . This suggests that chasing the aha moment through SFT is unreliable, as SFT merely teaches the model to mimic rather than to generalize genuine self-reflective reasoning. Additionally, the table shows evidence that straightforward SFT using multimodal reasoning data also degrades performance, e.g., we observe an average drop of $10.2\\%$ and $19.1\\%$ when fine-tuning on 25K and 126K samples, respectively.", + "bbox": [ + 169, + 702, + 580, + 898 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/ee231119f8f93c50666fd2cd9ed1b81e1491b70da62821b8ecbdfbada8a0ed75.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
ModelAvg.
Qwen2.5-VL-3B31.8
w/ aha-55K21.3
w/ 25K21.6
w/ 126K12.7
", + "bbox": [ + 611, + 708, + 805, + 797 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Table 2: Average performance over 6 reasoning benchmarks of Qwen-2.5-VL-3B SFT-ed on different sizes of SFT data and on data containing only examples with aha moment (aha-55K).", + "bbox": [ + 586, + 806, + 826, + 883 + ], + "page_idx": 4 + }, + { + "type": "header", + "text": "Preprint. Under review.", + "bbox": [ + 171, + 32, + 346, + 47 + ], + "page_idx": 4 + }, + { + "type": "page_footnote", + "text": "2In this work, Qwen2VL-2B and Qwen2VL-7B refer to the instruction-tuned versions.", + "bbox": [ + 192, + 907, + 750, + 924 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 493, + 946, + 504, + 959 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/50a178852959df63a79d3208b17d5f7213c71da1a6b922abea9170a8f72718f7.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 176, + 66, + 504, + 204 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/bc8a97c604c33c58650f3b94e86f5862f0e2c5be2e00bca00f7fcedc71d6029f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 504, + 66, + 818, + 204 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/08f5d920a8030eb842974167174a31e4f6c23bca39edd9ec34586765b0d24251.jpg", + "image_caption": [ + "Figure 3: Delta percentage performance change of different models trained with supervised fine-tuning (SFT) only." + ], + "image_footnote": [], + "bbox": [ + 176, + 208, + 504, + 345 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/468ddfa3bda7713c6d236d74f4bd98d6d706433967e3328a69e6dbdfd153a29e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 506, + 208, + 818, + 345 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "More SFT Data, Worse Performance. 
Counterintuitively, even a five-fold increase in the supervised dataset (from 25K to 126K instances) often fails to improve performance and in most cases actually harms it. Models trained with 126K SFT samples suffer an average relative performance drop of over $14\\%$ compared to their 25K-trained counterparts across all model and task settings (e.g., 25K: $32.2\\%$ vs. 126K: $47.0\\%$). This degradation is particularly evident on complex datasets such as WeMath and DynaMath, where the relative decrease reaches as high as $97.9\\%$ for Qwen2.5-VL models on average. Even on mid-difficulty benchmarks like MathVision and MathVerse (i.e., where model performance is relatively higher), the 126K SFT models underperform, with an average drop of $28.6\\%$ compared to the untrained models, averaged over 4 models. These results suggest that simply scaling up SFT data does not boost the generalizable reasoning skills of LLMs, and may instead suppress the model's capacity on various reasoning tasks.", + "bbox": [ + 169, + 417, + 826, + 587 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Larger Models Are Not Immune to SFT Degeneration. Contrary to expectations, scaling up model size does not mitigate the adverse effects of excessive SFT; under heavier SFT, larger models also exhibit pronounced drops on the most challenging evaluations. Larger 7B models fine-tuned on 126K examples experience drops nearly identical in magnitude to their smaller 2B or 3B counterparts: $47.2\\%$ for smaller models vs. $45.4\\%$ for larger models compared with base models. Notably, despite the strong performance of the Qwen2.5-VL-7B model (e.g., $68.1\\%$ on MathVista), it also suffers an average decline of $52.5\\%$ on these reasoning tasks when SFT-ed on the 126K data.", + "bbox": [ + 169, + 616, + 826, + 728 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "These findings highlight the limitations of SFT as a tool for enhancing multimodal reasoning. 
While it may be suitable for learning reasoning formats, it falls short of fostering inherent self-reflection. Rather than simply scaling supervision data, our results argue for a shift toward more advanced training methods like RL.", + "bbox": [ + 169, + 733, + 826, + 792 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4 Improving Multimodal Reasoning with Mixed Rewards", + "text_level": 1, + "bbox": [ + 169, + 811, + 810, + 834 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The previous section shows that SFT is insufficient to transfer R1's ability to LVLMs on vision-language tasks. Therefore, it is crucial to seek other post-training methods to elicit the reasoning ability of LVLMs. Since reinforcement learning (RL) is effective in enhancing reasoning ability (Yang et al., 2025a; Kirk et al., 2023), and GRPO has recently been proven more effective and efficient on textual math reasoning tasks (Shao et al., 2024; Jahn et al.,
+ ], + "image_footnote": [], + "bbox": [ + 176, + 104, + 816, + 262 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "2025) than other methods like PPO (Schulman et al., 2017), this motivates us to apply GRPO training to vision-language reasoning tasks.", + "bbox": [ + 169, + 348, + 823, + 378 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Mathematically, given a query $q$ and a group of $G$ outputs $\\{o_i\\}_{i=1}^G$ sampled from the old policy model $\\pi_{\\theta_{\\mathrm{old}}}$, GRPO maximizes the following objective:", + "bbox": [ + 169, + 385, + 823, + 417 + ], + "page_idx": 6 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal{J}_{\\mathrm{GRPO}}(\\theta) = \\mathbb{E}_{q, \\{o_{i}\\} \\sim \\pi_{\\theta_{\\mathrm{old}}}} \\left[ \\frac{1}{G} \\sum_{i=1}^{G} \\frac{1}{|o_{i}|} \\sum_{t=1}^{|o_{i}|} \\min \\left( r_{t}(\\theta) \\hat{A}_{i,t}, \\operatorname{clip}(r_{t}(\\theta), 1 - \\epsilon, 1 + \\epsilon) \\hat{A}_{i,t} \\right) \\right] - \\beta D_{\\mathrm{KL}} \\left( \\pi_{\\theta} \\| \\pi_{\\mathrm{ref}} \\right)\n$$\n", + "text_format": "latex", + "bbox": [ + 169, + 422, + 818, + 458 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "where $\\hat{A}_{i,t}$ is the estimated advantage, $\\beta$ is the KL penalty coefficient, and $\\pi_{\\theta}, \\pi_{\\theta_{\\mathrm{old}}}, \\pi_{\\mathrm{ref}}$ are the current, old, and reference policies, respectively.", + "bbox": [ + 169, + 458, + 823, + 489 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.1 GRPO with Mixed Reward", + "text_level": 1, + "bbox": [ + 169, + 523, + 460, + 539 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "To better adapt GRPO to multimodal reasoning, in addition to adopting the rule-based reward similar to the textual GRPO training, it is necessary to consider additional characteristics introduced by the vision modality. 
Inspired by Fu et al. (2024), which benchmarks LVLMs by perception and cognition (reasoning), we propose a mixed reward framework for GRPO training, as illustrated in Figure 4. The reward system comprises five types of verifiable rewards with two formats, encompassing both visual perception and visual reasoning tasks.", + "bbox": [ + 169, + 556, + 825, + 655 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Rule-Based Reward There are 4 types of rule-based rewards: digit matching, option letter matching, math expression matching, and Intersection over Union (IoU) for bounding boxes. For digit matching, the model is asked to answer counting questions from CLEVR-Math whose ground truths are single digits. For option letter matching, the model is required to answer an MCQ. For math expression matching, the model is asked to solve a math question, such as finding a function expression or the volume of a cone, and output its answer in LaTeX format. We use the Math Verify3 package to check for correctness. For bounding boxes, the model is prompted to output the bounding box coordinates of an object in the image, and an IoU score (ranging from 0 to 1) is computed as the reward.", + "bbox": [ + 169, + 660, + 825, + 790 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Open-ended Reward We leverage InternLM-XComposer2.5-Reward (Zang et al., 2025) as the scorer, denoted as $S_{\\theta}(\\cdot)$, which takes an image and a QA pair as input, and outputs a reward score. Following Muhtar et al. (2025), the reward for a sampled response $\\hat{y}$ is computed as $R_{open} = 1 - \\exp(-\\left(S_{\\theta}(\\hat{y}) - S_{\\theta}(y)\\right) \\times \\beta)$ if $S_{\\theta}(\\hat{y}) > S_{\\theta}(y)$ else 0, where $S_{\\theta}(y)$ is the score of the reference answer, and $\\beta$ is a smoothing hyperparameter. 
Note that the open-ended reward is normalized into [0,1], which is consistent with the scale of rule-based reward, partially avoiding reward hacking during training.", + "bbox": [ + 169, + 795, + 825, + 893 + ], + "page_idx": 6 + }, + { + "type": "header", + "text": "Preprint. Under review.", + "bbox": [ + 173, + 32, + 346, + 47 + ], + "page_idx": 6 + }, + { + "type": "page_footnote", + "text": "3https://github.com/huggingface/Math-Verify", + "bbox": [ + 192, + 907, + 514, + 922 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 493, + 946, + 503, + 958 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Implicit Format Reward Unlike Guo et al. (2025) and its subsequent works which use a separate reward term for format correctness, we discard this format reward term and make the format reward supersede all other rewards. Namely, whenever we are unable to extract a valid response from the raw answer, the reward would be 0. We empirically find that by specifying the output format in system prompt, the model is able to generate answers with correct formats through trials and errors. The implicit format reward design simplifies the reward computation. Further, it may yield better performance since less restriction is imposed on the exploration process (Zeng et al., 2025).", + "bbox": [ + 169, + 102, + 826, + 217 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.2 Effect of SFT on GRPO Training", + "text_level": 1, + "bbox": [ + 169, + 242, + 509, + 261 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/494e2a32b86da5c55a1e0d1d8d99d6176e929684cf3b7d5cc327bbe413e37432.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
GRPO BackboneMathVistaMathVisionMathVerse (vision-only)DynaMath (worst)WeMathLogicVistaAvg.
Qwen2VL-7B-Inst59.619.833.915.230.536.032.5
Qwen2VL-7B-Inst+SFT43.714.719.03.211.127.319.8(-39%)
Qwen2VL-7B-Base59.318.233.511.423.236.230.7
Qwen2VL-7B-Base+SFT49.516.425.06.420.432.725.7(-16%)
", + "bbox": [ + 179, + 281, + 834, + 371 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 3: Benchmark results of models trained with GRPO on different backbones. SFT+GRPO yields performance degradation, indicating that SFT is NOT compatible with GRPO in multimodal reasoning.", + "bbox": [ + 169, + 381, + 823, + 422 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "SFT is NOT Compatible with GRPO in Multimodal Reasoning. Although we reveal in Section 3 that SFT alone leads to a performance drop in multimodal reasoning, it is still unclear whether SFT plays a crucial role in aiding GRPO, like the golden key in DeepSeek-R1. We experiment with different backbones for GRPO training. Specifically, we adopt Qwen2VL-7B-Base and Qwen2VL-7B-Inst, and perform SFT on them with 25K samples, followed by GRPO training.", + "bbox": [ + 169, + 452, + 826, + 539 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "From Table 3, we observe that models undergoing SFT before GRPO training perform worse than those trained with GRPO alone, presenting an average drop of $8.9\\%$ across Qwen2VL-Base and Qwen2VL-Inst compared to their non-SFT counterparts. We also find that SFT introduces more degradation to instruction models than to base models without instruction-following capabilities. For instance, Qwen2VL-Inst suffers a $7.7\\%$ more drop in performance than Qwen2VL-Base post-SFT, suggesting that SFT can compromise the instruction-following ability crucial for effective GRPO training. Taken together, these results suggest that SFT is currently incompatible with GRPO in the context of multimodal reasoning, impairing both base and instruction-tuned LVLMs.", + "bbox": [ + 169, + 542, + 826, + 671 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/0a2902b9c361de315e237c783ff063178db359c0868a18abba7b7e6f8b5d3c04.jpg", + "image_caption": [ + "Figure 5: Impact of SFT with 5K and 10K samples before GRPO. 
Smaller-sized SFT datasets still jeopardize GRPO performance." + ], + "image_footnote": [], + "bbox": [ + 240, + 684, + 754, + 821 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Smaller SFT Dataset Still Jeopardizes GRPO Performance. Since we reveal in Section 3.2 that more SFT data yields lower performance, we investigate the effect of downsizing", + "bbox": [ + 169, + 893, + 823, + 926 + ], + "page_idx": 7 + }, + { + "type": "header", + "text": "Preprint. Under review.", + "bbox": [ + 171, + 32, + 346, + 47 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 493, + 948, + 503, + 959 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "the SFT training set. Following the PPL filtering method in Section 3, we select top-10K and top-5K samples from VLAA-Thinking-SFT-126K to finetune Qwen2.5-VL-3B, followed by GRPO training. For comparison, we also conduct GRPO training without SFT.", + "bbox": [ + 169, + 103, + 823, + 147 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "We present the performance of Qwen2.5-VL-3B on each task in Figure 5. A clear observation is that applying SFT on 5K examples prior to GRPO significantly degrades performance compared to using GRPO alone, showing an average drop of $13.5\\%$. Moreover, scaling up SFT data to 10K yields only a marginal improvement of $0.8\\%$. These results further support the finding that SFT before GRPO can hinder the model's learning capability.", + "bbox": [ + 169, + 152, + 823, + 224 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/fe03487cf0983066d249faec0960558ec4d35cd0b8c40253ea78650b9c538dd3.jpg", + "image_caption": [ + "Figure 6: Response length (left) and reward (right) during training. Training with only GRPO yields the lowest response length and yet the highest final reward and best benchmark performance, indicating that response length, reward, and model performance are NOT necessarily related."
+ ], + "image_footnote": [], + "bbox": [ + 496, + 236, + 803, + 352 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Response Length, Reward, and Model Performance are NOT Necessarily Related. Prior work in RL suggests that longer responses often correlate with better reasoning and higher RL rewards (Guo et al., 2025; Zhou et al., 2025; Chen et al., 2025b). However, our findings in Figure 6 reveal that response length and reward in GRPO are not reliable indicators of reasoning ability. For instance, the 10K SFT+GRPO model produces the longest responses but ends up with lower rewards than the GRPO-only model ($\\sim 0.35$ vs. $\\sim 0.5$) after training. Similarly, the 5K SFT+GRPO variant shows moderate length and reward but still underperforms on downstream tasks.", + "bbox": [ + 169, + 420, + 826, + 532 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Interestingly, both SFT-ed models start with higher initial rewards (e.g., $\\sim 0.20$ for $10\\mathrm{K}$ SFT+GRPO vs. $\\sim 0.05$ for GRPO-only), which is likely due to their early learning experience with supervision, since the SFT and GRPO data share the same distribution. However, they exhibit limited reward improvement during training, whereas the GRPO-only model rapidly surpasses them. These trends further reveal that SFT merely provides a higher \"lower bound\" for RL training, yet it may lower the \"upper bound\" since the reasoning SFT data constrains the model's exploration paths. Therefore, reasoning is a native emerging ability that is more likely to be developed through RL, not SFT. 
While SFT-ed models may appear to reason, their behavior is closer to pattern imitation, a form of pseudo-reasoning that lacks generalizable reasoning skills.", + "bbox": [ + 169, + 539, + 826, + 679 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "4.3 GRPO Training without SFT", + "text_level": 1, + "bbox": [ + 169, + 705, + 478, + 724 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Following the findings in the previous section, we directly conduct GRPO training, which yields four models: VLAA-Thinker-Qwen2-VL-2B, VLAA-Thinker-Qwen2-VL-7B, VLAA-Thinker-Qwen2.5-VL-3B, VLAA-Thinker-Qwen2.5-VL-7B. We also train from the Qwen2-VL-7B base model, and the resulting model is named VLAA-Thinker-Qwen2-7B-Zero.", + "bbox": [ + 169, + 737, + 826, + 794 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "We sample 4 times for each query with temperature 0.8. Rollout and training batch sizes are set to 512 and 256, respectively. We train our model for 1 episode (outer loop) and 1 epoch per episode (inner loop) on $8\\times\\mathrm{H}100$ GPUs with 49 steps. More details of the training setup are in Appendix C.1. We follow the same evaluation setup as described in Section 3.1. We present evaluation results in Table 4 and list our main findings below.", + "bbox": [ + 169, + 800, + 823, + 871 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Direct GRPO Training Boosts Model Performance. Models trained directly with GRPO on the VL-Thinking RL data consistently outperform their respective base models. For example,", + "bbox": [ + 169, + 895, + 826, + 925 + ], + "page_idx": 8 + }, + { + "type": "header", + "text": "Preprint. 
Under review.", + "bbox": [ + 171, + 32, + 346, + 47 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 493, + 948, + 504, + 958 + ], + "page_idx": 8 + }, + { + "type": "table", + "img_path": "images/c08d20935207087b106a4ad318993bac1745d63af6ef4cd6f9ba0f41a4bfcef2.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
ModelMathVistaMathVisionMathVerse (vision-only)DynaMath (worst)WeMathLogicVistaAvg.
4B-scale LVLMs
Qwen2-VL-2B48.016.117.53.810.826.620.5
Qwen2.5-VL-3B61.221.931.213.222.940.331.8
VLM-R1-Math-030562.721.932.213.030.040.533.4
VLAA-Thinker-Qwen2-2B43.614.819.03.412.630.420.3
VLAA-Thinker-Qwen2.5-3B61.024.436.418.233.838.535.4
7B-scale LVLMs
LLaVA-OneVision-7B58.618.319.39.020.933.326.6
InternLM-XComposer2.564.017.816.28.214.134.725.8
InternVL2.5-8B64.517.022.89.423.536.028.9
InternVL2-8B58.320.020.49.220.233.626.9
Qwen2-VL-7B61.619.225.411.022.333.328.8
Qwen2.5-VL-7B68.125.441.121.836.247.940.1
VLAA-Thinker-Qwen2-7B-Zero59.318.233.511.423.236.230.7
VLAA-Thinker-Qwen2-7B59.619.833.915.230.536.032.5
VLAA-Thinker-Qwen2.5-7B68.026.448.222.441.548.542.5
", + "bbox": [ + 179, + 101, + 821, + 323 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Table 4: Evaluation results of 6 math reasoning benchmarks on Open LMM Leaderboard. VLAA-Thinker models significantly outperform baselines and other models.", + "bbox": [ + 169, + 332, + 826, + 363 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "at the 7B scale, two models trained on VL-Thinking achieve an average score of $36.5\\%$ , marking a $2.0\\%$ improvement over their base model average of $34.5\\%$ . Moreover, our best-performing 7B model consistently outperforms other similarly sized LVLMs (e.g., InternVL2.5-8B, LLaVA-OneVision-7B), while our 3B model surpasses the recent reasoning-focused model, VLM-R1-Math, by $1.1\\%$ on average. These results once again demonstrate that GRPO significantly enhances reasoning capabilities, even without additional SFT.", + "bbox": [ + 169, + 390, + 826, + 476 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Stronger Instruction Model Leads to Better Post-GRPO Reasoning. An interesting observation is that model with better instruction tuning generally performs better. The instruction-aligned Qwen2-7B model, after GRPO, outperforms its unaligned counterpart VLAA-Thinker-Qwen2-7B-Zero by $1.8\\%$ on average $(31.3\\%$ vs. $29.5\\%)$ , with notable gains on harder tasks like DynaMath $(5.0\\%)$ and WeMath $(3.1\\%)$ . Moreover, using a stronger instruction-tuned model for GRPO further improves across both 3B and 7B scales — VLAA-Thinker-Qwen2.5 surpasses VLAA-Thinker-Qwen2 by $12.6\\%$ on average, confirming that higher-quality instruction tuning leads to more effective post-RL reasoning.", + "bbox": [ + 169, + 486, + 826, + 599 + ], + "page_idx": 9 + }, + { + "type": "image", + "img_path": "images/19f40235adb079c22c22a562dfb38d4a909739ff38f4bd7648ca83103cc54804.jpg", + "image_caption": [ + "Figure 7: Heatmap of different \"aha\" expressions generated by VLAA-Thinker models during training." 
+ ], + "image_footnote": [], + "bbox": [ + 276, + 599, + 687, + 750 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Emergence of Authentic Aha Moments. To show that our GRPO training can induce authentic self-reflection process, we plot the frequency of four aha expressions (\"alternatively\", \"double-check\", \"i should check\", \"wait\") for each VLAA-Thinker model in Figure 7. Since all models are trained using GRPO without being SFT-ed on distilled reasoning paths, all aha moments emerge from the GRPO process, demonstrating the model's self-developed reflective ability. Another finding is that the number of aha moments is not directly correlate with overall model performance, as more aha moments do not necessarily translate to higher reasoning scores.", + "bbox": [ + 169, + 811, + 826, + 925 + ], + "page_idx": 9 + }, + { + "type": "header", + "text": "Preprint. Under review.", + "bbox": [ + 173, + 32, + 346, + 47 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "4.4 Ablations", + "text_level": 1, + "bbox": [ + 171, + 101, + 308, + 118 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/77f89a6f81276be573df3fa7e88e1d19bfb7df7e4db8a9fe4dfd25d29931bcc3.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
RowMethodDigitMathMCQIoUOpen-endedMViMVsWM
0Qwen2.5-VL-3B21.931.222.9
1w/o Digit23.534.628.8
2w/o Math21.432.727.0
3w/o MCQ21.533.918.4
4w/o IoU22.835.330.0
5All Rule-Based22.234.930.1
6Mixed Reward24.436.433.8
", + "bbox": [ + 181, + 147, + 823, + 287 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Mixed Reward. To demonstrate the effectiveness of our mixed reward strategy, we perform an ablation study on Qwen2.5-VL-3B by selectively disabling individual reward components and evaluating performance across three math reasoning benchmarks, as shown in Table 5. The model trained with Mixed Reward achieves the best overall performance, with an average improvement of $6.2\\%$ over the baseline, demonstrating the effectiveness of our reward design. Using only rule-based rewards (All Rule-Based) also yields consistent gains (e.g., $29.1\\%$ vs. $25.3\\%$ baseline), while removing specific components—especially MCQ (w/o MCQ) leads to substantial drops. These results highlight the critical role of rule-based rewards in GRPO for multimodal reasoning tasks.", + "bbox": [ + 169, + 357, + 826, + 484 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Hyperparameters To search for better hyperparameters, we experiment with different learning rates (LR) and KL divergence settings on Qwen2.5-VL-3B. We start with a basic setting where LR anneals to zero following a cosine scheduler with no KL constraint. Results are shown in Table 6. LR1 uses a minimum learning rate of $8e^{-7}$ with warmup ratio $10\\%$ , whereas LR2 uses a minimum learning rate of $5e^{-7}$ with warmup ratio $3\\%$ . Since LR2 performs slightly better than LR1, we compare two KL settings on top of LR2. KL1 uses an initial KL of $1e^{-2}$ and a target KL of $5e^{-3}$ , whereas KL2 uses an initial KL coefficient of $1e^{-3}$ and a target KL of $5e^{-4}$ . 
We find that introducing KL constraints significantly improves the performance on MathVerse and DynaMath by $1.1\\%$ and", + "bbox": [ + 169, + 512, + 580, + 715 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "$3.2\\%$ , respectively, and that using a smaller KL can encourage the model to evolve.", + "bbox": [ + 169, + 715, + 769, + 732 + ], + "page_idx": 10 + }, + { + "type": "table", + "img_path": "images/85e2de31cb56e15b9925e1ed8bad0a7db2adbbbbd97433446a6f6d123f2f4fb1.jpg", + "table_caption": [ + "Table 5: Ablation of Mixed Reward on MVi: MathVision, MVs: MathVerse and WM: WeMath. A combination of rule-based and open-ended rewards yields significant boost in performance." + ], + "table_footnote": [ + "Table 6: Ablation on LR and KL Coef. on MVs: MathVerse, DM: DynaMath and LV: LogicVista." + ], + "table_body": "
SettingsMVsDMLV
Basic31.715.038.5
Learning Rate
+ LR133.016.038.1
+ LR233.515.638.3
KL Coef.
+ KL134.418.837.8
+ KL235.818.639.2
", + "bbox": [ + 596, + 513, + 816, + 643 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "4.5 Case Study", + "text_level": 1, + "bbox": [ + 171, + 761, + 321, + 780 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "We provide an example showcasing the improvement of VLAA-Thinker over the original model in Appendix C.3. Qwen2.5VL-7B generates lengthy response with wrong reasoning traces. Although it outputs some self-reflective patterns like \"re-evaluate\", the final answer remains wrong. On the other hand, VLAA-Thinker-Qwen2.5VL-7B is able to reason on the right track, with only a minor mistake near the end of its thinking process. Nevertheless, the high-level idea and reasoning process is overall correct, demonstrating strong capability of solving complex reasoning tasks.", + "bbox": [ + 169, + 797, + 826, + 897 + ], + "page_idx": 10 + }, + { + "type": "header", + "text": "Preprint. Under review.", + "bbox": [ + 173, + 32, + 346, + 47 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 488, + 946, + 506, + 959 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "5 Related Work", + "text_level": 1, + "bbox": [ + 171, + 99, + 359, + 118 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Vision-Language Reasoning Models. Recent advances in vision-language (VL) reasoning models build on the success of text-only reasoning systems like OpenAI's o1 (Jaech et al., 2024) and DeepSeek-R1 (Guo et al., 2025). Earlier VL methods, such as few-shot prompting and chain-of-thought (CoT), offered limited visual reasoning (Brown et al., 2020; Wei et al., 2022). Recently, LLaVA-CoT (Xu et al., 2024) adopts an SFT approach a 4-step structured outputs to enhance model's reasoning, yet lacking flexibility due to its rigid output format. More recently, newer models incorporate more natural reasoning traces and reinforcement learning. 
VLM-R1 (Shen et al., 2025) and R1-V (Chen et al., 2025a) align multimodal LLMs using step-by-step reasoning and policy optimization. VisualThinker-R1-Zero (Zhou et al., 2025) goes further by training a 2B model via pure RL from scratch, achieving emergent inner reasoning. LMM-R1 (Peng et al., 2025) transfers CoT skills from language to vision through staged RL. Vision-R1 (Huang et al., 2025) combines reasoning trace supervision and RL with correctness and format rewards to train a strong 7B VL reasoner. Different from these concurrent works, we propose a high-quality multimodal reasoning dataset with R1-like reasoning traces for both SFT and RL, and provide a comprehensive study on training paradigms.", + "bbox": [ + 169, + 143, + 826, + 369 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Reward Modeling in Reinforcement Learning. Reward design plays a central role in reasoning-oriented RL. While model-based rewards offer flexibility (Kwon et al., 2023; Wang et al., 2024a; Gao et al., 2024), they are prone to reward hacking (Eisenstein et al., 2023; Chen et al., 2024b; Fu et al., 2025), making them risky for reasoning tasks. Recent VL models prefer binary correctness rewards (Huang et al., 2025; Zhou et al., 2025) for math or QA tasks, directly reinforcing accurate outputs. Others apply rule-based rewards, enforcing structured formats or logic chains (Liu et al., 2025; Deng et al., 2025a). While recent studies deploy strong reward models for enhancing LVLM reasoning, they are grounded by specific domains or simpler tasks (Muhtar et al., 2025; Tu et al., 2025). GRPO-style methods use relative ranking within output batches to guide optimization without value critics (Shao et al., 2024; Guo et al., 2025). 
Our Mixed Reward objective combines model-based and rule-based rewards in four complex rewarding scenarios, yielding better performance than existing approaches.", + "bbox": [ + 169, + 375, + 826, + 556 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "6 Conclusion", + "text_level": 1, + "bbox": [ + 171, + 616, + 333, + 635 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "This work provides a comparative analysis of the effectiveness of leveraging SFT or RL (more specifically, GRPO) to build LVLMs with strong reasoning ability. We show through extensive experiments that distilling reasoning data and performing SFT is a deficient way to transfer reasoning ability across modalities. We then extend our dataset to GRPO training with a proposed mixed reward objective, which yields substantial improvement over the baseline models. We present several findings regarding combining SFT and GRPO and the correlation between reward, response length, and final performance. These results indicate that reasoning is a native emerging ability acquired from RL, rather than SFT, which merely equips the model with 'pseudo-reasoning' ability.", + "bbox": [ + 169, + 661, + 826, + 789 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Acknowledgement", + "text_level": 1, + "bbox": [ + 171, + 848, + 380, + 869 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "We thank the Microsoft Accelerate Foundation Models Research Program for supporting our computing needs.", + "bbox": [ + 169, + 893, + 823, + 925 + ], + "page_idx": 11 + }, + { + "type": "header", + "text": "Preprint. 
Under review.", + "bbox": [ + 171, + 32, + 346, + 47 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 173, + 99, + 294, + 118 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Zachary Ankner, Cody Blakeney, Kartik Sreenivasan, Max Marion, Matthew L Leavitt, and Mansheej Paul. Perplexed by perplexity: Perplexity-based data pruning with small reference models. arXiv preprint arXiv:2405.20541, 2024.", + "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. NeurIPS, 2020.", + "Guiming Hardy Chen, Shunian Chen, Ruifei Zhang, Junying Chen, Xiangbo Wu, Zhiyi Zhang, Zhihong Chen, Jianquan Li, Xiang Wan, and Benyou Wang. Allava: Harnessing gpt4v-synthesized data for lite vision-language models. arXiv preprint arXiv:2402.11684, 2024a.", + "Liang Chen, Lei Li, Haozhe Zhao, Yifan Song, and Vinci. R1-v: Reinforcing super generalization ability in vision-language models with less than $3. https://github.com/Deep-Agent/R1-V, 2025a. Accessed: 2025-02-02.", + "Lichang Chen, Chen Zhu, Davit Soselia, Jiuhai Chen, Tianyi Zhou, Tom Goldstein, Heng Huang, Mohammad Shoeybi, and Bryan Catanzaro. Odin: Disentangled reward mitigates hacking in rlhf. arXiv preprint arXiv:2402.07319, 2024b.", + "Zhipeng Chen, Yingqian Min, Beichen Zhang, Jie Chen, Jinhao Jiang, Daixuan Cheng, Wayne Xin Zhao, Zheng Liu, Xu Miao, Yang Lu, et al. An empirical study on eliciting and improving r1-like reasoning models. arXiv preprint arXiv:2503.04548, 2025b.", + "Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V Le, Sergey Levine, and Yi Ma. 
Sft memorizes, rl generalizes: A comparative study of foundation model post-training. arXiv preprint arXiv:2501.17161, 2025.", + "Huilin Deng, Ding Zou, Rui Ma, Hongchen Luo, Yang Cao, and Yu Kang. Boosting the generalization and reasoning of vision language models with curriculum reinforcement learning. arXiv preprint arXiv:2503.07065, 2025a.", + "Yihe Deng, Hritik Bansal, Fan Yin, Nanyun Peng, Wei Wang, and Kai-Wei Chang. Openthinker: An early exploration to complex vision-language reasoning via iterative self-improvement. arXiv preprint arXiv:2503.17352, 2025b.", + "Haodong Duan, Junming Yang, Yuxuan Qiao, Xinyu Fang, Lin Chen, Yuan Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Jiaqi Wang, et al. Vlmevalkit: An open-source toolkit for evaluating large multi-modality models. In Proceedings of the 32nd ACM International Conference on Multimedia, pp. 11198-11201, 2024.", + "Jacob Eisenstein, Chirag Nagpal, Alekh Agarwal, Ahmad Beirami, Alex D'Amour, DJ Dvi-jotham, Adam Fisch, Katherine Heller, Stephen Pfohl, Deepak Ramachandran, et al. Helping or herding? reward model ensembles mitigate but do not eliminate reward hacking. arXiv preprint arXiv:2312.09244, 2023.", + "Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, and Rongrong Ji. Mme: A comprehensive evaluation benchmark for multimodal large language models, 2024. URL https://arxiv.org/abs/2306.13394.", + "Jiayi Fu, Xuandong Zhao, Chengyuan Yao, Heng Wang, Qi Han, and Yanghua Xiao. Reward shaping to mitigate reward hacking in rlhf. arXiv preprint arXiv:2502.18770, 2025.", + "Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, Jianhua Han, Hang Xu, Zhenguo Li, et al. G-llava: Solving geometric problem with multi-modal large language model. arXiv preprint arXiv:2312.11370, 2023.", + "Jiaxuan Gao, Shusheng Xu, Wenjie Ye, Weilin Liu, Chuyi He, Wei Fu, Zhiyu Mei, Guangju Wang, and Yi Wu. 
On designing effective rl reward at training time for llm reasoning. arXiv preprint arXiv:2410.15115, 2024." + ], + "bbox": [ + 171, + 128, + 826, + 924 + ], + "page_idx": 12 + }, + { + "type": "header", + "text": "Preprint. Under review.", + "bbox": [ + 171, + 32, + 346, + 47 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 488, + 946, + 506, + 959 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.", + "Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. Vizwiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3608-3617, 2018.", + "Jian Hu, Xibin Wu, Zilin Zhu, Xianyu, Weixun Wang, Dehao Zhang, and Yu Cao. Openrlhf: An easy-to-use, scalable and high-performance rlhf framework. arXiv preprint arXiv:2405.11143, 2024.", + "Wenxuan Huang, Bohan Jia, Zijie Zhai, Shaosheng Cao, Zheyu Ye, Fei Zhao, Yao Hu, and Shaohui Lin. Vision-r1: Incentivizing reasoning capability in multimodal large language models. arXiv preprint arXiv:2503.06749, 2025.", + "Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024.", + "Afrar Jahin, Arif Hassan Zidan, Yu Bao, Shizhe Liang, Tianming Liu, and Wei Zhang. Unveiling the mathematical reasoning in deepseek models: A comparative study of large language models. arXiv preprint arXiv:2503.10573, 2025.", + "Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 
Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2901-2910, 2017.", + "Robert Kirk, Ishita Mediratta, Christoforos Nalmpantis, Jelena Luketina, Eric Hambro, Edward Grefenstette, and Roberta Raileanu. Understanding the effects of rlhf on llm generalisation and diversity. arXiv preprint arXiv:2310.06452, 2023.", + "Minae Kwon, Sang Michael Xie, Kalesha Bullard, and Dorsa Sadigh. Reward design with language models. arXiv preprint arXiv:2303.00001, 2023.", + "Lei Li, Yuqi Wang, Runxin Xu, Peiyi Wang, Xiachong Feng, Lingpeng Kong, and Qi Liu. Multimodal arxiv: A dataset for improving scientific comprehension of large vision-language models. arXiv preprint arXiv:2403.00231, 2024a.", + "Ming Li, Yong Zhang, Shwai He, Zhitao Li, Hongyu Zhao, Jianzong Wang, Ning Cheng, and Tianyi Zhou. Superfiltering: Weak-to-strong data filtering for fast instruction-tuning. arXiv preprint arXiv:2402.00530, 2024b.", + "Adam Dahlgren Lindström and Savitha Sam Abraham. Clevr-math: A dataset for compositional language, visual and mathematical reasoning. arXiv preprint arXiv:2208.05358, 2022.", + "Ziyu Liu, Zeyi Sun, Yuhang Zang, Xiaoyi Dong, Yuhang Cao, Haodong Duan, Dahua Lin, and Jiaqi Wang. Visual-rft: Visual reinforcement fine-tuning. arXiv preprint arXiv:2503.01785, 2025.", + "Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. In International Conference on Learning Representations (ICLR), 2024.", + "Minesh Mathew, Dimosthenis Karatzas, and CV Jawahar. Docvqa: A dataset for vqa on document images. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pp. 2200-2209, 2021." 
+ ], + "bbox": [ + 171, + 102, + 826, + 925 + ], + "page_idx": 13 + }, + { + "type": "header", + "text": "Preprint. Under review.", + "bbox": [ + 173, + 32, + 346, + 47 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 13 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Dilxat Muhtar, Enzhuo Zhang, Zhenshi Li, Feng Gu, Yanglangxing He, Pengfeng Xiao, and Xueliang Zhang. Quality-driven curation of remote sensing vision-language data via learned scoring models. arXiv preprint arXiv:2503.00743, 2025.", + "Yingzhe Peng, Gongrui Zhang, Miaosen Zhang, Zhiyuan You, Jie Liu, Qipeng Zhu, Kai Yang, Xingzhong Xu, Xin Geng, and Xu Yang. Lmm-r1: Empowering 3b lmms with strong reasoning abilities through two-stage rule-based rl. arXiv preprint arXiv:2503.07536, 2025.", + "Runqi Qiao, Qiuna Tan, Guanting Dong, Minhui Wu, Chong Sun, Xiaoshuai Song, Zhuoma GongQue, Shanglin Lei, Zhe Wei, Miaoxuan Zhang, et al. We-math: Does your large multimodal model achieve human-like mathematical reasoning? arXiv preprint arXiv:2407.01284, 2024.", + "John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.", + "Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.", + "Haozhan Shen, Zilun Zhang, Kangjia Zhao, Qianqian Zhang, Ruochen Xu, and Tiancheng Zhao. Vlm-r1: A stable and generalizable r1-style large vision-language model. https://github.com/om-ai-lab/VLM-R1, 2025. Accessed: 2025-02-15.", + "Haoqin Tu, Weitao Feng, Hardy Chen, Hui Liu, Xianfeng Tang, and Cihang Xie. Vilbench: A suite for vision-language process reward modeling. 
arXiv preprint arXiv:2503.20271, 2025.", + "Binghai Wang, Rui Zheng, Lu Chen, Yan Liu, Shihan Dou, Caishuang Huang, Wei Shen, Senjie Jin, Enyu Zhou, Chenyu Shi, et al. Secrets of rlhf in large language models part ii: Reward modeling. arXiv preprint arXiv:2401.06080, 2024a.", + "Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Houxing Ren, Aojun Zhou, Mingjie Zhan, and Hongsheng Li. Measuring multimodal mathematical reasoning with math-vision dataset. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024b. URL https://openreview.net/forum?id=QWTCxMpPA.", + "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. NeurIPS, 2022.", + "Yijia Xiao, Edward Sun, Tianyu Liu, and Wei Wang. Logicvista: Multimodal llm logical reasoning benchmark in visual contexts. arXiv preprint arXiv:2407.04973, 2024.", + "Guowei Xu, Peng Jin, Hao Li, Yibing Song, Lichao Sun, and Li Yuan. Llava-cot: Let vision language models reason step-by-step, 2024. URL https://arxiv.org/abs/2411.10440.", + "Haoyan Yang, Ting Hua, Shangqian Gao, Binfeng Xu, Zheng Tang, Jie Xu, Hongxia Jin, and Vijay Srinivasan. Dynamic noise preference optimization for llm self-improvement via synthetic data. arXiv preprint arXiv:2502.05400, 2025a.", + "Yi Yang, Xiaoxuan He, Hongkun Pan, Xiyan Jiang, Yan Deng, Xingtao Yang, Haoyu Lu, Dacheng Yin, Fengyun Rao, Minfeng Zhu, et al. R1-onevision: Advancing generalized multimodal reasoning through cross-modal formalization. arXiv preprint arXiv:2503.10615, 2025b.", + "Yuhang Zang, Xiaoyi Dong, Pan Zhang, Yuhang Cao, Ziyu Liu, Shengyuan Ding, Shenxi Wu, Yubo Ma, Haodong Duan, Wenwei Zhang, et al. Internlm-xcomposer2.5-reward: A simple yet effective multi-modal reward model. arXiv preprint arXiv:2501.12368, 2025.", + "Weihao Zeng, Yuzhen Huang, Qian Liu, Wei Liu, Keqing He, Zejun Ma, and Junxian He. 
Simplerl-zoo: Investigating and taming zero reinforcement learning for open base models in the wild. arXiv preprint arXiv:2503.18892, 2025." + ], + "bbox": [ + 171, + 102, + 826, + 924 + ], + "page_idx": 14 + }, + { + "type": "header", + "text": "Preprint. Under review.", + "bbox": [ + 173, + 32, + 346, + 47 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 488, + 946, + 506, + 959 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Yu Qiao, et al. Mathverse: Does your multi-modal llm truly see the diagrams in visual math problems? In European Conference on Computer Vision, pp. 169-186. Springer, 2024.", + "Hengguang Zhou, Xirui Li, Ruochen Wang, Minhao Cheng, Tianyi Zhou, and Cho-Jui Hsieh. R1-zero's \"aha moment\" in visual reasoning on a 2b non-sft model. arXiv preprint arXiv:2503.05132, 2025.", + "Wenwen Zhuang, Xin Huang, Xiantao Zhang, and Jin Zeng. Math-puma: Progressive upward multimodal alignment to enhance mathematical reasoning. arXiv preprint arXiv:2408.08640, 2024.", + "Chengke Zou, Xingang Guo, Rui Yang, Junyu Zhang, Bin Hu, and Huan Zhang. Dynamath: A dynamic visual benchmark for evaluating mathematical reasoning robustness of vision language models. arXiv preprint arXiv:2411.00836, 2024." + ], + "bbox": [ + 174, + 102, + 823, + 315 + ], + "page_idx": 15 + }, + { + "type": "header", + "text": "Preprint. 
Under review.", + "bbox": [ + 173, + 32, + 346, + 47 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "A Data Generation", + "text_level": 1, + "bbox": [ + 171, + 99, + 397, + 119 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A.1 Prompt", + "text_level": 1, + "bbox": [ + 171, + 140, + 294, + 159 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "We show the prompts for captioning (Figure 8), R1 answer distillation (Figure 9), rewriting (Figure 10) and verification (Figure 11).", + "bbox": [ + 169, + 172, + 823, + 202 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Prompt for Captioning", + "text_level": 1, + "bbox": [ + 142, + 215, + 295, + 233 + ], + "page_idx": 16 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "You are a vision-language model generating a highly detailed caption of an image.", + "Summarize the environment or setting (indoor/outdoor, surroundings).", + "Describe visible objects, people, or structures (colors, shapes, textures, positions).", + "Transcribe all text verbatim. For equations, use LaTeX when appropriate but do not solve or interpret them.", + "If structured data (tables, charts) appears, use Markdown formatting for clarity.", + "Include labels, annotations, brand names, or logos, if any, otherwise don't mention them.", + "Note any visible expressions or emotional tone factually, without speculation.", + "## Maintain a logical order: from overall context to finer details.", + "## Provide only the caption without extra context or commentary.", + "## Be unbiased and faithful in your description, using natural language and Markdown only where relevant." 
+ ], + "bbox": [ + 133, + 241, + 857, + 369 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Figure 8: Prompt for captioning with GPT-4-Turbo.", + "bbox": [ + 310, + 397, + 683, + 414 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Prompt for Distillation", + "text_level": 1, + "bbox": [ + 142, + 434, + 292, + 450 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "You have advanced visual perception abilities and can directly analyze images as if you are looking at them. You will be provided with detailed visual descriptions, but you should interpret them as if they represent your actual visual understanding rather than text-based captions.", + "bbox": [ + 133, + 459, + 861, + 500 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Answer questions as if you are visually perceiving the scene, not reading a caption. Provide natural and confident responses about objects, relationships, and numerical or spatial reasoning. Use a descriptive, visually grounded tone, avoiding mention of text.", + "bbox": [ + 133, + 508, + 861, + 550 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Never mention that you are reading text or captions. 
Infer spatial relationships, numerical properties, and logical conclusions based on the perceived \"image.\" If information is unclear, respond naturally as if there are visual limitations (e.g., 'It appears that...').", + "bbox": [ + 133, + 559, + 861, + 601 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Caption: {caption}", + "bbox": [ + 135, + 609, + 200, + 638 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Question: {question}", + "bbox": [ + 135, + 648, + 210, + 676 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Figure 9: Prompt for distillation with Deepseek-R1.", + "bbox": [ + 310, + 713, + 684, + 729 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A.2 Aha-Moment Filtering", + "text_level": 1, + "bbox": [ + 171, + 761, + 429, + 780 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "We use the following list of keywords to identify aha moments: wait, again, double-check, hmm, mistake, alternatively, check, i should confirm. All answers are matched with the logic: has_aha = any([aha in text.lower() for aha in ahas]).", + "bbox": [ + 169, + 792, + 826, + 838 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A.3 Sample Demonstration for VLAA-Thinking-SFT-126K", + "text_level": 1, + "bbox": [ + 171, + 863, + 684, + 880 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "We show several examples from VLAA-Thinking-SFT-126K in Figure 14, Figure 15, Figure 16, Figure 17 and Figure 18.", + "bbox": [ + 169, + 893, + 826, + 926 + ], + "page_idx": 16 + }, + { + "type": "header", + "text": "Preprint. 
Under review.", + "bbox": [ + 171, + 32, + 346, + 47 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Prompt for Rewriting", + "text_level": 1, + "bbox": [ + 142, + 104, + 285, + 119 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "You will receive a snippet of text that references a \"description\" or \"caption\" of an image. Your task is to produce a **nearly identical** version of that text with **minimal** changes, focusing on the following:", + "bbox": [ + 133, + 128, + 859, + 157 + ], + "page_idx": 17 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. **Replace references to \"description\", \"caption\" and \"rationale\"** with wording that references **\"the image.\"**", + "- For example, \"The description says...\" could become \"The image shows...\"", + "- \"The caption suggests...\" could become \"The image suggests...\"", + "- \"Based on the rationale...\" could become \"Based on the image...\"", + "- Make sure the replacement sounds natural but does **not** otherwise change the meaning." + ], + "bbox": [ + 132, + 166, + 859, + 232 + ], + "page_idx": 17 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "2. **Preserve all line breaks, punctuation, and spacing** as much as possible, and make **no additional edits** outside of these replacements.", + "3. You should only output the rewritten content." 
+ ], + "bbox": [ + 132, + 241, + 859, + 294 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Here is the input: {input}", + "bbox": [ + 135, + 305, + 251, + 333 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Figure 10: Prompt for answer rewriting with GPT-4-Turbo.", + "bbox": [ + 284, + 359, + 710, + 378 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Prompt for Verification", + "text_level": 1, + "bbox": [ + 142, + 395, + 294, + 411 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "You are a fair evaluator.", + "bbox": [ + 133, + 420, + 292, + 433 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "You will be given a groundtruth and an answer from a model.", + "bbox": [ + 133, + 433, + 539, + 446 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "If the answer aligns with the groundtruth, output \"Yes\". Otherwise, output \"No\".", + "bbox": [ + 133, + 446, + 666, + 459 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Your output should only be \"Yes\" or \"No\".", + "bbox": [ + 133, + 459, + 416, + 472 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "groundtruth: {gold}", + "bbox": [ + 135, + 483, + 223, + 510 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "answer: {pred}", + "bbox": [ + 135, + 523, + 189, + 547 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Figure 11: Prompt for verification with GPT-3.5-Turbo.", + "bbox": [ + 299, + 575, + 694, + 593 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "B Details of SFT Experiments", + "text_level": 1, + "bbox": [ + 171, + 616, + 509, + 637 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "B.1 Training", + "text_level": 1, + "bbox": [ + 171, + 657, + 302, + 676 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "To enhance the instruction following ability, we append task-specific instructions (i.e., MCQ, short answer) to questions. The system prompt shown in Figure 12 is used. 
We use a global batch size of 128. Models are trained for 190 steps on 25K samples and 985 steps on 126K samples. All experiments are run on $8\times$ H100 GPUs.", + "bbox": [ + 169, + 690, + 826, + 748 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Interestingly, we observe loss spikes for 25K SFT training on Qwen2-VL-7B which cause model collapse. Therefore, we rerun this setting multiple times until we obtain a normal loss curve, and use that checkpoint for evaluation.", + "bbox": [ + 169, + 753, + 825, + 797 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "You are VL-Thinking, a helpful assistant with excellent reasoning ability. A user asks you a question, and you should try to solve it. You should first think about the reasoning process in the mind and then provides the user with the answer. The reasoning process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., <think> reasoning process here </think> <answer> answer here </answer>.", + "bbox": [ + 210, + 815, + 782, + 883 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Figure 12: System Prompt used for training and evaluation.", + "bbox": [ + 281, + 895, + 712, + 912 + ], + "page_idx": 17 + }, + { + "type": "header", + "text": "Preprint. Under review.", + "bbox": [ + 171, + 32, + 346, + 47 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "18", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "B.2 Evaluation", + "text_level": 1, + "bbox": [ + 171, + 101, + 320, + 117 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "We adopt VLMEvalKit (Duan et al., 2024) for all evaluation experiments. We set use_custom_prompt to False following the settings of most models in the toolkit. For higher efficiency, we set maxPixels to $256\times32\times32$ , and max_new_tokens to 800. We also set the system prompt to the one used for training for a consistent training-test behavior. 
All other hyperparameters are kept at the toolkit defaults.", + "bbox": [ + 169, + 133, + 823, + 204 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "We specify the dataset splits and reported metrics:", + "bbox": [ + 169, + 209, + 555, + 226 + ], + "page_idx": 18 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. MathVista: The Test Mini split of MathVista dataset; overall accuracy.", + "2. MathVision: The Full test set of MathVision; overall accuracy.", + "3. MathVerse: The Test Mini split of MathVerse; accuracy of \"Vision Only\".", + "4. DynaMath: The Full test set of DynaMath; overall accuracy.", + "5. WeMath: The Test Mini split of WeMath; \"Score (Strict)\".", + "6. LogicVista: The Full test set of LogicVista; overall accuracy." + ], + "bbox": [ + 207, + 237, + 756, + 347 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "C Details of GRPO Experiments", + "text_level": 1, + "bbox": [ + 169, + 376, + 537, + 398 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "C.1 Training", + "text_level": 1, + "bbox": [ + 169, + 417, + 303, + 435 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "We adapt our code from the OpenRLHF framework (Hu et al., 2024). To suit our need of deploying a reward model on the same machine, we offload the reward model to the CPU and only move it to the GPU when performing rollouts and scoring. This design saves valuable GPU memory, which accelerates training.", + "bbox": [ + 169, + 449, + 823, + 507 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "We also perform dataset-specific inspection and find issues in several datasets. For example, although ArxivQA contains only MCQ, the answer format includes \"A\", \"A)\", \"(a)\", etc. In the synthesis subset of Math-PUMA, we find that some solutions contain only the values of the solved unknown variables when the questions ask for the entire function expression. 
We fix these issues by rule-based filtering and GPT-assisted rewriting, aiming to improve the quality of the VL-Thinking dataset.", + "bbox": [ + 169, + 512, + 826, + 597 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "C.2 Evaluation", + "text_level": 1, + "bbox": [ + 169, + 625, + 321, + 640 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "We evaluate our models with an identical setting described in Appendix B.2.", + "bbox": [ + 169, + 656, + 723, + 672 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "C.3 Case Study", + "text_level": 1, + "bbox": [ + 169, + 699, + 326, + 717 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "We present a case demonstrating the improvement of VLAA-Thinker-Qwen2.5VL-7B over its backbone in Figure 13.", + "bbox": [ + 169, + 729, + 823, + 761 + ], + "page_idx": 18 + }, + { + "type": "header", + "text": "Preprint. Under review.", + "bbox": [ + 171, + 32, + 346, + 47 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "19", + "bbox": [ + 488, + 946, + 506, + 959 + ], + "page_idx": 18 + }, + { + "type": "image", + "img_path": "images/c2b0dc50ac4d0abcbe175f3c8f580538ff626eb4b3b8cd2705a27b16e943f8bd.jpg", + "image_caption": [ + "Figure 13: A case from MathVerse testmini (sample index 20). Markdowns are rendered for illustration purpose. Wrong reasoning paths are colored red." + ], + "image_footnote": [ + "As shown in the figure, the angle $O$ to circle $A$ at the center of triangle $B$ , $AB$ , and $AC$ . \nChoices: A: $36^{\\circ}$ ; B: $54^{\\circ}$ ; C: $60^{\\circ}$ ; D: $27^{\\circ}$ ." 
+ ], + "bbox": [ + 269, + 223, + 377, + 292 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Question in Image", + "text_level": 1, + "bbox": [ + 553, + 217, + 661, + 229 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "As shown in the figure, if angle $A = 36.0$ , then angle $C$ is equal to () \nChoices: A: $36^{\circ}$ B: $54^{\circ}$ C: $60^{\circ}$ D: $27^{\circ}$", + "bbox": [ + 426, + 232, + 779, + 255 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Qwen2.5VL-7B", + "text_level": 1, + "bbox": [ + 454, + 294, + 542, + 305 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "To solve the problem, we need to use some properties of circles and tangents.", + "bbox": [ + 217, + 305, + 584, + 316 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "1. Identify the given information:", + "text_level": 1, + "bbox": [ + 217, + 325, + 380, + 335 + ], + "page_idx": 19 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- AB is tangent to circle O at point B.", + "- The angle $\angle AOB = 36^{\circ}$" + ], + "bbox": [ + 223, + 337, + 393, + 356 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "2. Understand the properties:", + "text_level": 1, + "bbox": [ + 217, + 357, + 367, + 367 + ], + "page_idx": 19 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Since $AB$ is tangent to the circle at $B$ , the radius $OB$ is perpendicular to the tangent $AB$ . Therefore, $\angle OBA = 90^{\circ}$ .", + "- In triangle $OAB$ , the sum of the angles is $180^{\circ}$ . So, we can find $\angle AOB$ ." 
+ ], + "bbox": [ + 223, + 367, + 772, + 387 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "$\angle AOB + \angle OBA + \angle OAB = 180^{\circ}$", + "bbox": [ + 415, + 388, + 581, + 397 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "$36^{\circ} + 90^{\circ} + \angle OAB = 180^{\circ}$", + "bbox": [ + 431, + 398, + 563, + 406 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "$\angle OAB = 54^{\circ}$", + "bbox": [ + 464, + 407, + 532, + 416 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "3. Determine the measure of $\angle C$", + "text_level": 1, + "bbox": [ + 217, + 417, + 380, + 426 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "- Since $\angle OAB$ is an inscribed angle that intercepts the arc BC, the measure of the central angle $\angle BOC$ that", + "bbox": [ + 217, + 428, + 741, + 439 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "intercepts the same arc BC is twice the measure of the inscribed angle $\angle OAB$ :", + "bbox": [ + 217, + 439, + 598, + 448 + ], + "page_idx": 19 + }, + { + "type": "equation", + "text": "\n$$\n\angle BOC = 2 \times \angle OAB = 2 \times 54^{\circ} = 108^{\circ}\n$$\n", + "text_format": "latex", + "bbox": [ + 395, + 449, + 601, + 457 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "- Now, in triangle OBC, the sum of the angles is $180^{\circ}$ . 
So, we can find $\\angle C$ :", + "bbox": [ + 217, + 459, + 571, + 468 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "$\\angle OBC + \\angle OCB + \\angle BOC = 180^{\\circ}$", + "bbox": [ + 419, + 469, + 588, + 478 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "$90^{\\circ} + \\angle OCB + 108^{\\circ} = 180^{\\circ}$", + "bbox": [ + 429, + 479, + 565, + 487 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "$\\angle OCB = 180^{\\circ} - 198^{\\circ} = 18^{\\circ}$", + "bbox": [ + 429, + 489, + 566, + 497 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "However, since $\\angle OCB$ is not directly given as an option, we need to re-evaluate the problem with the co", + "bbox": [ + 215, + 500, + 707, + 508 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "interpretation of the angle $\\angle C$ which is actually the angle subtended by the arc BC at the circumference, which is half the central angle:", + "bbox": [ + 217, + 508, + 776, + 529 + ], + "page_idx": 19 + }, + { + "type": "equation", + "text": "\n$$\n\\angle C = \\frac {1}{2} \\times \\angle B O C = \\frac {1}{2} \\times 1 0 8 ^ {\\circ} = 5 4 ^ {\\circ}\n$$\n", + "text_format": "latex", + "bbox": [ + 411, + 529, + 594, + 545 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Therefore, the measure of $\\angle C$ is $54^{\\circ}$ .", + "bbox": [ + 217, + 542, + 398, + 553 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "The answer is: B", + "bbox": [ + 217, + 554, + 297, + 563 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "VLAA-Thinker-Qwen2.5VL-7B", + "text_level": 1, + "bbox": [ + 411, + 579, + 583, + 590 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "", + "text_level": 1, + "bbox": [ + 227, + 598, + 267, + 604 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "To solve this problem, we need to use the properties of tangents and circles. 
Since AB is tangent to circle O at point B, the radius OB is perpendicular to the tangent AB at point B. This means that angle OBA is $90^{\circ}$ .", + "bbox": [ + 225, + 607, + 767, + 628 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Given that angle A is $36^{\circ}$ , we can find angle AOB by subtracting angle A from $180^{\circ}$ (since the sum of angles in triangle AOB must be $180^{\circ}$ ):", + "bbox": [ + 225, + 637, + 767, + 657 + ], + "page_idx": 19 + }, + { + "type": "equation", + "text": "\n$$\n\angle AOB = 180^{\circ} - 36^{\circ} - 90^{\circ} = 54^{\circ}\n$$\n", + "text_format": "latex", + "bbox": [ + 431, + 657, + 571, + 667 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Since angle AOB is a central angle subtending arc BC, and angle C is an angle subtending the same arc BC, the measure of angle C is half the measure of angle AOB:", + "bbox": [ + 225, + 667, + 750, + 688 + ], + "page_idx": 19 + }, + { + "type": "equation", + "text": "\n$$\n\angle C = \frac{1}{2} \times 54^{\circ} = 27^{\circ}\n$$\n", + "text_format": "latex", + "bbox": [ + 455, + 689, + 547, + 704 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Therefore, the correct answer is D: $27^{\circ}$", + "bbox": [ + 227, + 703, + 411, + 712 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "", + "text_level": 1, + "bbox": [ + 227, + 714, + 271, + 722 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "", + "text_level": 1, + "bbox": [ + 227, + 724, + 277, + 732 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "D:27°", + "text_level": 1, + "bbox": [ + 228, + 734, + 259, + 742 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "", + "text_level": 1, + "bbox": [ + 228, + 744, + 282, + 753 + ], + "page_idx": 19 + }, + { + "type": "header", + "text": "Preprint. 
Under review.", + "bbox": [ + 173, + 32, + 346, + 47 + ], + "page_idx": 19 + }, + { + "type": "page_number", + "text": "20", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/03caa4e5e601bbd06afd1b13c53f542b2ed4f600500abbfbdc6f3c9df128a2bf.jpg", + "image_caption": [ + "GeoQA", + "A VL-Thinking sample from GeoQA170K." + ], + "image_footnote": [ + "Question: Given a semicircular figure with the diameter represented by AB and intersecting chords AD and BC at point P, where angle DPB measures 60.0 degrees, calculate the value of tan(angle ADC) under the condition that D denotes the midpoint of the arc BC. Choices: A $\frac{1}{2}$ , B $\frac{1}{3}$ , C $\frac{\sqrt{3}}{3}$ , D $\sqrt{3}$ . \n $\bullet$ Original Answer: C" + ], + "bbox": [ + 388, + 114, + 614, + 160 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "VL-Thinking", + "text_level": 1, + "bbox": [ + 388, + 210, + 434, + 220 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Okay. So I'm trying to solve this geometry problem. Let me see if I can visualize it correctly. There's a semicircle with diameter AB. Points C and D are on the curved edge, and chords AC and BD intersect at point P. Below AB, there's the center O. The angle DPB is 60 degrees, and D is the midpoint of arc BC. I need to find the value of tan $(\angle ADC)$ .", + "bbox": [ + 388, + 227, + 602, + 250 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "First, I should probably draw this out to get a better sense. I imagine a semicircle with $AB$ as the diameter. $O$ is the center, so $AO = OB$ . Points C and D are on the semicircle, with $D$ being the midpoint of arc BC. That means arc $BD$ is equal to arc DC. Since it's a semicircle, the total arc from $B$ to $C$ is 180 degrees, so each arc is equal to one angle. The sum of these two angles is $2\pi$ , so $AB$ is equal to $CD$ . This is the exact measure of the semicircle. 
So if BC is some arc, $D$ is halfway along it. But I'm not sure about the exact measure yet.", + "bbox": [ + 388, + 253, + 604, + 279 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Chords $\mathrm{AC}$ and $\mathrm{DB}$ intersect at $P$ , and angle DPB is 60 degrees. I remember that when two chords intersect, the products of the segments are equal. So $AP \cdot PC = BP \cdot PD$ . But I'm not sure if that's directly useful.", + "bbox": [ + 388, + 282, + 604, + 299 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Since $D$ is the midpoint of arc BC, it might have some special properties. In a semicircle, the midpoint of an arc would be equidistant from B and C. Also, angles subtended by the same arc are equal. Maybe I can use some properties of cyclic quadrilaterals here, but since it's a semicircle, all points lie on the circumference, so ABCD is a cyclic quadrilateral.", + "bbox": [ + 388, + 303, + 604, + 324 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Wait, but ABCD is a quadrilateral inscribed in a semicircle, which makes it a cyclic quadrilateral. In such cases, opposite angles sum to 180 degrees. But I'm not sure if that's applicable here because we're dealing with a circle.", + "bbox": [ + 388, + 328, + 599, + 343 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Let me think about the angles. Angle DPB is 60 degrees. Since $\mathrm{P}$ is the intersection of chords AC and BD, maybe I can use the property that the angle between two chords is equal to half the sum of the measures of the intercepted arcs. So angle DPB would be half the sum of arcs DB and AC. But I'm not sure about the exact arcs involved.", + "bbox": [ + 388, + 347, + 602, + 368 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Alternatively, maybe I can use triangle properties. 
Since $D$ is the midpoint of arc $BC$ , triangle $DBC$ might be a right triangle, in the sense that a triangle inscribed in a semicircle is a right triangle. So angle $BDC$ would be $90^{\\circ}$", + "bbox": [ + 388, + 372, + 604, + 388 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Wait, that's an important point. Since D is on the semicircle, triangle BDC is right-angled at D. So angle BDC is 90 degrees. That might help in finding angle ADC.", + "bbox": [ + 388, + 392, + 596, + 402 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "I need to find $\\tan (\\angle ADC)$ . Let's denote angle ADC as $\\theta$ . So $\\tan (\\theta) =$ opposite/adjacent. To find this, I might need to find the lengths of the sides of triangle ADC or use some trigonometric identities.", + "bbox": [ + 388, + 407, + 604, + 419 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Let me consider the coordinates approach. Let's place the semicircle on a coordinate system with center $O$ at $(0,0)$ , and $AB$ as the diameter along the $x$ -axis from $(-r,0)$ to $(r,0)$ , where $r$ is the radius. Then point $D$ , being the midpoint of arc BC, would be somewhere on the semicircle. But without knowing the exact position of $C$ , this might be tricky.", + "bbox": [ + 388, + 422, + 604, + 444 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Alternatively, maybe I can use power of a point. Since $P$ is the intersection of chords $\\mathrm{AC}$ and $\\mathrm{BD}$ , the power of point $P$ with respect to the circle is equal for both chords. So $PA \\cdot PC = PB \\cdot PD$ . But without specific lengths, that might not be directly useful.", + "bbox": [ + 388, + 448, + 604, + 463 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Wait, maybe I can assign some variables. Let's let the radius of the semicircle be $\\mathrm{r}$ . Then $\\mathrm{AB} = 2r$ .
Let me assume $r = 1$ for simplicity, so $\\mathrm{AB} = 2$ . Then $\\mathrm{O}$ is at $(0,0)$ , $\\mathrm{A}$ is at $(-1,0)$ , $\\mathrm{B}$ is at $(1,0)$ . Let me place point C somewhere on the semicircle, say at an angle $\\theta$ from the $x$ -axis. Then point D, being the midpoint of arc BC, would be at an angle $\\frac{\\theta}{2} + 180^{\\circ}$ from the $x$ -axis, but I'm not sure.", + "bbox": [ + 388, + 468, + 604, + 491 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Alternatively, since $D$ is the midpoint of arc BC, arc BD equals arc DC. So if I consider the central angles, angle BOD equals angle DOC. Since the total arc BC is some measure, say 2σ, then arcs BD and DC are each σ.", + "bbox": [ + 388, + 498, + 606, + 513 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Wait, maybe I can use the fact that angle DPB is 60 degrees. Using the intersecting chords theorem, angle DPB is equal to half the sum of the measures of arcs DB and AC. So angle DPB = $\\frac{1}{2} (\\text{arc DB} + \\text{arc AC}) = 60$ degrees. Therefore, arc DB + arc AC = 120 degrees.", + "bbox": [ + 388, + 518, + 606, + 537 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "But I'm not sure about the measure of arc AC. Alternatively, maybe I can express arc AC in terms of other arcs. Since the total semicircle is 180 degrees, Arc AB is 180 degrees. Arc AC is part of that, so arc AC is a part of the semicircle.", + "bbox": [ + 388, + 542, + 599, + 558 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Alternatively, since D is the midpoint of arc BC, arc $\\mathrm{BD}$ = arc $\\mathrm{DC}$ . Let's denote arc $BD$ = arc $DC = x$ . Then arc $BC = 2x$ .
Since the total semicircle is 180 degrees, arc AB = 180 degrees, so arc AC = 180 - 2x degrees.", + "bbox": [ + 388, + 563, + 606, + 579 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Now, from the intersecting chords theorem, angle $\\mathrm{DPB} = \\frac{1}{2} (\\text{arc DB} + \\text{arc AC}) = 60$ degrees. So $\\frac{1}{2} (x + (180 - 2x)) = 60$ . Simplifying, $\\frac{1}{2} (180 - x) = 60$ . Multiply both sides by 2: $180 - x = 120$ . Therefore, $x = 60$ degrees.", + "bbox": [ + 388, + 583, + 589, + 607 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "So arc BD = arc DC = 60 degrees, and arc BC = 120 degrees. Then arc AC = 180 - 120 = 60 degrees. Now, I need to find tangent angle ADC. Let's consider triangle ADC. Since D is on the semicircle, angle ADC is an inscribed angle. The measure of angle ADC is half the measure of arc AC. Since arc AC is 60 degrees, angle ADC = 30 degrees.", + "bbox": [ + 388, + 609, + 606, + 630 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Therefore, $\\tan (\\text{angle ADC}) = \\tan (30\\text{ degrees}) = \\frac{1}{\\sqrt{3}} = \\frac{\\sqrt{3}}{3}$ . \nWait, but let me double-check. If angle ADC is 30 degrees, then $\\tan (30)$ is indeed $\\frac{\\sqrt{3}}{3}$ . So the answer should be option C: $\\frac{\\sqrt{3}}{3}$ .", + "bbox": [ + 388, + 633, + 604, + 667 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "The value of tan(angle ADC) is $\\frac{\\sqrt{3}}{3}$", + "bbox": [ + 388, + 674, + 465, + 684 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Step-by-Step Explanation:", + "text_level": 1, + "bbox": [ + 388, + 689, + 449, + 695 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "1.
Understanding the Problem:", + "text_level": 1, + "bbox": [ + 388, + 699, + 457, + 705 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "- We have a semicircle with diameter AB and center O.\n- Points C and D are on the semicircle, with D being the midpoint of arc BC.\n- Chords AC and BD intersect at point P, forming an angle DPB of 60 degrees.\n- We need to find tan $(\\angle ADC)$ .", + "bbox": [ + 388, + 709, + 550, + 729 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "2. Key Properties and Theorems: - Intersecting Chords Theorem: The measure of an angle formed by two intersecting chords is half the sum of the measures of the intercepted arcs.", + "bbox": [ + 388, + 734, + 602, + 750 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "- Midpoint of Arc: Since D is the midpoint of arc BC, arcs BD and DC are equal. \n- Inscribed Angle Theorem: An angle inscribed in a circle is half the measure of its intercepted arc.", + "bbox": [ + 388, + 750, + 591, + 758 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "3. Applying the Intersecting Chords Theorem: \n- Angle DPB is formed by chords BD and AC, so \\(\\angle DPB = \\frac{1}{2}(\\text{arc DB} + \\text{arc AC})\\) \n- Given \\(\\angle DPB = 60^{\\circ}\\), we have \n\\[ 60^{\\circ} = \\frac{1}{2} (\\text{arc DB} + \\text{arc AC}) \\] \n\\[ \\text{arc DB} + \\text{arc AC} = 120^{\\circ} \\]", + "bbox": [ + 388, + 763, + 519, + 803 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "4. Determining Arc Measures: Since D is the midpoint of arc BC, arcs BD and DC are equal. Let each be x. Thus, arc BC = 2x. The total semicircle is $180^{\\circ}$ , so: arc AC = 180° - arc BC = 180° - 2x. Substituting into the earlier equation: x + (180° - 2x) = 120°, so x = 60°. Therefore, arcs BD and DC are each 60° and arc AC is 60°.", + "bbox": [ + 388, + 808, + 549, + 849 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "5. Finding Angle ADC: \n- Angle ADC is an inscribed angle intercepting arc AC.
\n- By the Inscribed Angle Theorem: \n $\\angle ADC = \\frac{1}{2} \\times \\text{arc AC} = \\frac{1}{2} \\times 60^{\\circ} = 30^{\\circ}$", + "bbox": [ + 388, + 854, + 504, + 878 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "6. Calculating the Tangent $\\tan (30^{\\circ}) = \\frac{\\sqrt{3}}{3}$ Answer $\\frac{\\sqrt{3}}{3}$ (Option C)", + "bbox": [ + 388, + 883, + 447, + 912 + ], + "page_idx": 20 + }, + { + "type": "header", + "text": "Preprint. Under review.", + "bbox": [ + 173, + 32, + 346, + 47 + ], + "page_idx": 20 + }, + { + "type": "page_number", + "text": "21", + "bbox": [ + 488, + 946, + 506, + 959 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Synthesis", + "text_level": 1, + "bbox": [ + 455, + 132, + 532, + 147 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Input Image", + "text_level": 1, + "bbox": [ + 312, + 184, + 375, + 196 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/a64478cfc978bb899db5a954ad49cef18edbb6ecc305169a7883515a6c0c57af.jpg", + "image_caption": [ + "Figure 15: A VL-Thinking sample from Math PUMA (subset Synthesis)." + ], + "image_footnote": [], + "bbox": [ + 433, + 154, + 549, + 223 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "- Question: What is the function's equation?", + "bbox": [ + 305, + 233, + 475, + 243 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "- Original Answer: Step 1: Substitute the point $(\\frac{\\pi}{10},5)$ into the function: $5 = a\\cdot \\sin (-5\\cdot \\frac{\\pi}{10})$ . Step 2: Simplify the equation: $5 = a\\cdot \\sin (\\frac{-\\pi}{2})$ . Step 3: Since $\\sin (\\frac{-\\pi}{2}) = -1$ , we have: $5 = -a$ . Step 4: Solve for $a:a = -5$ Step 5: Substitute the value of a back into the original function: $y = -5\\cdot \\sin (-5\\cdot x)$ .
Answer: $y = -5\\cdot \\sin (-5\\cdot x)$", + "bbox": [ + 305, + 243, + 700, + 295 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "VL-Thinking", + "text_level": 1, + "bbox": [ + 313, + 304, + 390, + 315 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": " First, I recognize that the function given is $y = a\\cdot \\sin (-5x)$ , and it passes through the point $\\left(\\frac{\\pi}{10},5\\right)$", + "bbox": [ + 308, + 321, + 651, + 349 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "To find the value of $a$ , I'll substitute the coordinates of the point into the equation.", + "bbox": [ + 310, + 349, + 584, + 359 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Substituting $x = \\frac{\\pi}{10}$ and $y = 5$ , the equation becomes: $5 = a \\cdot \\sin \\left(-5 \\cdot \\frac{\\pi}{10}\\right)$", + "bbox": [ + 310, + 366, + 496, + 393 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Simplifying the argument of the sine function:", + "bbox": [ + 310, + 402, + 467, + 412 + ], + "page_idx": 21 + }, + { + "type": "equation", + "text": "\n$$\n- 5 \\cdot \\frac {\\pi}{1 0} = - \\frac {\\pi}{2}\n$$\n", + "text_format": "latex", + "bbox": [ + 310, + 412, + 366, + 426 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "So, the equation now is:", + "bbox": [ + 310, + 436, + 393, + 444 + ], + "page_idx": 21 + }, + { + "type": "equation", + "text": "\n$$\n5 = a \\cdot \\sin \\left(- \\frac {\\pi}{2}\\right)\n$$\n", + "text_format": "latex", + "bbox": [ + 310, + 446, + 374, + 460 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "I know that $\\sin \\left(-\\frac{\\pi}{2}\\right) = -1$ so substituting this value in: $5 = a\\cdot (-1)$", + "bbox": [ + 310, + 470, + 506, + 489 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Solving for $a$ :", + "bbox": [ + 310, + 498, + 357, + 506 + ], + "page_idx": 21 + }, + { + "type": "equation", + "text": "\n$$\na = - 5\n$$\n", + "text_format": "latex",
+ "bbox": [ + 310, + 507, + 354, + 515 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Finally, substituting $a = -5$ back into the original function gives:", + "bbox": [ + 310, + 525, + 532, + 534 + ], + "page_idx": 21 + }, + { + "type": "equation", + "text": "\n$$\ny = - 5 \\cdot \\sin (- 5 x)\n$$\n", + "text_format": "latex", + "bbox": [ + 310, + 534, + 375, + 541 + ], + "page_idx": 21 + }, + { + "type": "equation", + "text": "\n$$\n< / t h i n k >\n$$\n", + "text_format": "latex", + "bbox": [ + 310, + 542, + 343, + 550 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "To determine the equation of the function $y = a \\cdot \\sin(-5x)$ that passes through the point $\\left( \\frac{\\pi}{10}, 5 \\right)$ , follow these steps:", + "bbox": [ + 310, + 558, + 679, + 575 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "1. Substitute the Point into the Equation:", + "bbox": [ + 310, + 585, + 457, + 593 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Plug $x = \\frac{\\pi}{10}$ and $y = 5$ into the equation:", + "bbox": [ + 316, + 593, + 459, + 603 + ], + "page_idx": 21 + }, + { + "type": "equation", + "text": "\n$$\n5 = a \\cdot \\sin \\left(- 5 \\cdot \\frac {\\pi}{1 0}\\right)\n$$\n", + "text_format": "latex", + "bbox": [ + 316, + 604, + 395, + 619 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "2. 
Simplify the Argument of the Sine Function:", + "bbox": [ + 310, + 630, + 480, + 638 + ], + "page_idx": 21 + }, + { + "type": "equation", + "text": "\n$$\n- 5 \\cdot \\frac {\\pi}{1 0} = - \\frac {\\pi}{2}\n$$\n", + "text_format": "latex", + "bbox": [ + 316, + 638, + 372, + 652 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "So, the equation becomes:", + "bbox": [ + 316, + 652, + 406, + 661 + ], + "page_idx": 21 + }, + { + "type": "equation", + "text": "\n$$\n5 = a \\cdot \\sin \\left(- \\frac {\\pi}{2}\\right)\n$$\n", + "text_format": "latex", + "bbox": [ + 316, + 662, + 380, + 679 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "3. Evaluate the Sine Function:", + "bbox": [ + 310, + 689, + 419, + 696 + ], + "page_idx": 21 + }, + { + "type": "equation", + "text": "\n$$\n\\sin \\left(- \\frac {\\pi}{2}\\right) = - 1\n$$\n", + "text_format": "latex", + "bbox": [ + 316, + 699, + 377, + 715 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Substituting this back into the equation:", + "bbox": [ + 316, + 717, + 450, + 724 + ], + "page_idx": 21 + }, + { + "type": "equation", + "text": "\n$$\n5 = a \\cdot (- 1)\n$$\n", + "text_format": "latex", + "bbox": [ + 316, + 726, + 359, + 734 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "4. Solve for $a$ :", + "bbox": [ + 310, + 743, + 361, + 751 + ], + "page_idx": 21 + }, + { + "type": "equation", + "text": "\n$$\na = - 5\n$$\n", + "text_format": "latex", + "bbox": [ + 316, + 753, + 346, + 758 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "5. 
Write the Final Equation:", + "bbox": [ + 310, + 768, + 413, + 777 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Substitute $a = -5$ back into the original function:", + "bbox": [ + 316, + 777, + 488, + 786 + ], + "page_idx": 21 + }, + { + "type": "equation", + "text": "\n$$\ny = - 5 \\cdot \\sin (- 5 x)\n$$\n", + "text_format": "latex", + "bbox": [ + 316, + 787, + 382, + 795 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "6. Simplify the Negative Sign (Optional):", + "bbox": [ + 310, + 804, + 457, + 811 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Since $\\sin (-\\theta) = -\\sin (\\theta)$ , the equation can also be written as:", + "bbox": [ + 316, + 813, + 527, + 821 + ], + "page_idx": 21 + }, + { + "type": "equation", + "text": "\n$$\ny = 5 \\cdot \\sin (5 x)\n$$\n", + "text_format": "latex", + "bbox": [ + 316, + 821, + 369, + 829 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Final Answer:", + "bbox": [ + 310, + 839, + 362, + 845 + ], + "page_idx": 21 + }, + { + "type": "equation", + "text": "\n$$\ny = - 5 \\sin (- 5 x)\n$$\n", + "text_format": "latex", + "bbox": [ + 310, + 847, + 372, + 856 + ], + "page_idx": 21 + }, + { + "type": "header", + "text": "Preprint. 
Under review.", + "bbox": [ + 173, + 32, + 346, + 47 + ], + "page_idx": 21 + }, + { + "type": "page_number", + "text": "22", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "CLEVR-MATH", + "text_level": 1, + "bbox": [ + 424, + 176, + 599, + 196 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Input Image", + "text_level": 1, + "bbox": [ + 202, + 267, + 307, + 284 + ], + "page_idx": 22 + }, + { + "type": "image", + "img_path": "images/340653a937dc0ead4560d11a4ab4f1710147c29cbf57843a3411075515c269a9.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 406, + 213, + 638, + 334 + ], + "page_idx": 22 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Question: Subtract all balls. How many objects are left?", + "- Original Answer: 3" + ], + "bbox": [ + 187, + 351, + 565, + 393 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "VL-Thinking", + "text_level": 1, + "bbox": [ + 202, + 411, + 334, + 431 + ], + "page_idx": 22 + }, + { + "type": "image", + "img_path": "images/99b557fcb93935764a42d2dfd1acfee3d1750c7453441b571a838cd1874752cb.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 496, + 435, + 527, + 458 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": " Okay, let's see. The user is asking to subtract all balls and determine how many objects are left. First, I need to figure out which objects are considered \"balls\" in the image. The image shows a large blue metal sphere. A sphere is a three-dimensional circle, which is essentially a ball shape. So the blue metal sphere is definitely a ball.", + "bbox": [ + 205, + 459, + 816, + 517 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Next, the surrounding objects are two small red metal cylinders and a single small brown metal cylinder in the foreground. Cylinders are not balls; they have a different shape—circular bases and straight sides. So those would not be subtracted. 
The question says to subtract all balls, so only the sphere (ball) needs to be removed.", + "bbox": [ + 205, + 532, + 821, + 589 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Originally, there's the sphere (1 object), two red cylinders (2 objects), and one brown cylinder (1 object). Total objects before subtraction: $1 + 2 + 1 = 4$ . Subtracting the one ball leaves 3 objects: the two red cylinders and the brown cylinder.", + "bbox": [ + 205, + 604, + 810, + 648 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Wait, but I should make sure there are no other balls mentioned. The image only refers to the sphere as a ball-like object. The cylinders are clearly not balls. So subtracting the sphere leaves three cylinders. The answer should be 3.", + "bbox": [ + 205, + 664, + 808, + 705 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "", + "bbox": [ + 207, + 709, + 261, + 720 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "In the scene, there is one large blue sphere (a ball-shaped object) centrally positioned. The remaining objects consist of two small red cylinders flanking it symmetrically and a single brown cylinder in the foreground. Removing the sphere leaves only the three cylindrical objects visible.", + "bbox": [ + 205, + 737, + 831, + 781 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Answer: There are 3 objects remaining.", + "bbox": [ + 205, + 796, + 439, + 810 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Figure 16: A VL-Thinking sample from CLEVR-Math.", + "bbox": [ + 300, + 835, + 692, + 852 + ], + "page_idx": 22 + }, + { + "type": "header", + "text": "Preprint. 
Under review.", + "bbox": [ + 173, + 32, + 346, + 47 + ], + "page_idx": 22 + }, + { + "type": "page_number", + "text": "23", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "ArxivQA", + "text_level": 1, + "bbox": [ + 457, + 133, + 571, + 156 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Input Image", + "text_level": 1, + "bbox": [ + 200, + 224, + 307, + 243 + ], + "page_idx": 23 + }, + { + "type": "image", + "img_path": "images/ac27de6313219f66011f604cca1fbb37b6ebe2995f7f8f762b75a65cfd3adbd7.jpg", + "image_caption": [ + "Figure 17: A VL-Thinking sample from ArxivQA." + ], + "image_footnote": [], + "bbox": [ + 387, + 166, + 504, + 229 + ], + "page_idx": 23 + }, + { + "type": "image", + "img_path": "images/a080b9abdd9635b279c2f9958d9e719318132e3687db69f648d63d71dfbedcce.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 506, + 167, + 630, + 232 + ], + "page_idx": 23 + }, + { + "type": "image", + "img_path": "images/7fc5a1a9301c5fb6736f949201bff0949fb47c17a570ba2e917f554066c12df9.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 387, + 229, + 504, + 297 + ], + "page_idx": 23 + }, + { + "type": "image", + "img_path": "images/9e55ad6c0189d171fe14cb8f0afa04c5b85841a7ad13a105b4a7d1bd364279e7.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 506, + 231, + 630, + 296 + ], + "page_idx": 23 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Question: In figure (b), how does the repetition rate affect the ratio of 9th to 11th harmonic intensity for the $1030\\mathrm{nm}$ wavelength? Choose from the following options A) The ratio increases as the repetition rate increases. B) The ratio decreases as the repetition rate increases. C) The ratio remains constant as the repetition rate increases. 
D) The ratio fluctuates unpredictably with changes in repetition rate.", + "- Original Answer: B" + ], + "bbox": [ + 184, + 309, + 836, + 412 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "VL-Thinking", + "text_level": 1, + "bbox": [ + 200, + 431, + 334, + 452 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "", + "bbox": [ + 205, + 465, + 256, + 477 + ], + "page_idx": 23 + }, + { + "type": "image", + "img_path": "images/dfb1811e4962ba180b1a579130806e4d2ebb33f66e4e12967512ea3d7b4a7610.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 500, + 455, + 529, + 479 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Okay, so I'm looking at figure (b) here, and I need to figure out how the repetition rate affects the ratio of 9th to 11th harmonic intensity for the $1030\\mathrm{nm}$ wavelength. Let me start by understanding what's in the image.", + "bbox": [ + 204, + 479, + 839, + 508 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "First, the image is a scatter plot with the x-axis being the repetition rate in kHz and the y-axis being the ratio of 9th to 7th and 11th harmonic intensity on a logarithmic scale. There are two sets of data points: red dots and blue squares. The red dots are probably representing the $1030\\mathrm{nm}$ wavelength since in panel (a), the red curve was labeled as $1030\\mathrm{nm}$ . The blue squares would then correspond to the $515\\mathrm{nm}$ wavelength.", + "bbox": [ + 202, + 523, + 836, + 583 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "The question specifically asks about the $1030\\mathrm{nm}$ wavelength, so I'll focus on the red dots. 
Looking at the scatter plot, as the repetition rate increases from left to right on the x-axis, I need to see what happens to the y-axis values for the red dots.", + "bbox": [ + 202, + 598, + 833, + 642 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "From what I can see, as the repetition rate goes up, the red dots trend downward. That means the ratio of 9th to 11th harmonic intensity is decreasing. There's no indication of an increase or a constant ratio; instead, it's a clear downward trend. The red dots don't seem to fluctuate unpredictably either; they follow a decreasing pattern as repetition rate increases.", + "bbox": [ + 202, + 657, + 843, + 717 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "So, putting it all together, the ratio decreases as the repetition rate increases for the $1030\\mathrm{nm}$ wavelength. That would correspond to option B.", + "bbox": [ + 204, + 732, + 815, + 761 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "", + "bbox": [ + 205, + 762, + 261, + 775 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "The image shows that the ratio of 9th to 11th harmonic intensity for the $1030\\mathrm{nm}$ wavelength decreases as the repetition rate increases.", + "bbox": [ + 202, + 791, + 823, + 821 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Answer: B) The ratio decreases as the repetition rate increases.", + "bbox": [ + 204, + 835, + 598, + 849 + ], + "page_idx": 23 + }, + { + "type": "header", + "text": "Preprint. 
Under review.", + "bbox": [ + 173, + 32, + 346, + 47 + ], + "page_idx": 23 + }, + { + "type": "page_number", + "text": "24", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "ALLaVA- LAION", + "text_level": 1, + "bbox": [ + 398, + 132, + 578, + 152 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "Input Image", + "text_level": 1, + "bbox": [ + 202, + 215, + 299, + 232 + ], + "page_idx": 24 + }, + { + "type": "image", + "img_path": "images/88060b6adab1bfd2500dd401ffcb4c030d7316e98a9a3c6dcf73ab1d950d173b.jpg", + "image_caption": [ + "Figure 18: A VL-Thinking sample from ALLaVA-LAION." + ], + "image_footnote": [], + "bbox": [ + 372, + 164, + 602, + 280 + ], + "page_idx": 24 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Question: Considering the proximity of Preston Store to other landmarks, what could be its significance in the local area?", + "- Original Answer: The Preston Store's significance in the local area, as suggested by the map, could be quite central given its prominent placement and the fact that it has a dedicated Facebook page marker. It appears to be situated near major roads and commercial entities like KFC, Harvey Norman, and Dan Murphy's, which indicates it is likely a well-known establishment, possibly a retail store or a social venue, that serves as a local landmark or gathering place. The specific type of store cannot be determined from the map alone, but its central location among various retailers and close to a park suggests it may cater to a wide range of customers and possibly holds a key place in the local community." 
+ ], + "bbox": [ + 189, + 292, + 812, + 421 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "VL-Thinking", + "text_level": 1, + "bbox": [ + 202, + 441, + 323, + 460 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "", + "bbox": [ + 212, + 473, + 259, + 484 + ], + "page_idx": 24 + }, + { + "type": "image", + "img_path": "images/e394db943aede4eb0b172aeb2ddd198d9ca1fc5f239d8884dd1b66af83aecad0.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 473, + 465, + 503, + 486 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "Okay, let's see. The user is asking about the significance of Preston Store in the local area based on its proximity to other landmarks. First, I need to visualize the map shown. The Preston Store is labeled prominently with a red marker and a Facebook 'f' logo, so it's probably a key location.", + "bbox": [ + 210, + 487, + 751, + 527 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "Looking at the nearby landmarks, there's KFC, Harvey Norman, Dan Murphy's, and Max Kirwan Mazda. Those are all big-name retailers and services. Then there's Target, Myer Northland, and Anaconda Preston. These are major stores, which suggests the area is a commercial hub. The store being near these places means it's likely in a busy shopping district, making it a central spot for locals.", + "bbox": [ + 210, + 541, + 776, + 595 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "Also, there's TW Blake Park and Darebin Creek Forest Park nearby. Green areas mean that Preston Store might be a place people visit before or after outdoor activities. Maybe it's a convenience store or a community spot where people gather. The mix of retail, dining, and parks indicates it's a multifunctional area, so Preston Store could serve both daily shopping needs and act as a meeting point. The Facebook marker suggests they have an online presence, maybe promoting local events or services, enhancing community engagement. 
Since it's near major roads like Bell St and Murray Rd, accessibility is good, increasing its importance as a central landmark.", + "bbox": [ + 210, + 608, + 769, + 702 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "", + "bbox": [ + 212, + 704, + 263, + 714 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "Preston Store appears centrally positioned within a bustling commercial and recreational hub, making it a likely focal point for local activity. Its prominent placement near major retailers like Target, Myer Northland, and Harvey Norman suggests it could serve as a practical anchor for shoppers—perhaps offering essentials or services that complement these larger stores. The proximity to fast food (KFC) and leisure destinations (Darebin Creek Forest Park, TW Blake Park) positions it as a potential pitstop for families or individuals transitioning between errands, dining, and outdoor activities. The red Facebook marker implies community engagement, possibly acting as an information node or local business with social media visibility. Its accessibility via intersecting roads like Bell St and Murray Rd further underscores its role as a convenient landmark in the area's daily flow.", + "bbox": [ + 210, + 729, + 789, + 851 + ], + "page_idx": 24 + }, + { + "type": "header", + "text": "Preprint. 
Under review.", + "bbox": [ + 173, + 32, + 346, + 47 + ], + "page_idx": 24 + }, + { + "type": "page_number", + "text": "25", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 24 + } +] \ No newline at end of file diff --git a/data/2025/2504_11xxx/2504.11468/29f6f006-1646-44f2-b6fb-f930a57c3738_model.json b/data/2025/2504_11xxx/2504.11468/29f6f006-1646-44f2-b6fb-f930a57c3738_model.json new file mode 100644 index 0000000000000000000000000000000000000000..66150796b5edfe049307972a23bc457679a39541 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/29f6f006-1646-44f2-b6fb-f930a57c3738_model.json @@ -0,0 +1,5145 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "aside_text", + "bbox": [ + 0.023, + 0.266, + 0.061, + 0.708 + ], + "angle": 270, + "content": "arXiv:2504.11468v1 [cs.CL] 10 Apr 2025" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.099, + 0.783, + 0.143 + ], + "angle": 0, + "content": "SFT or RL? 
An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models" + }, + { + "type": "text", + "bbox": [ + 0.18, + 0.166, + 0.79, + 0.203 + ], + "angle": 0, + "content": "Hardy Chen\\(^{2*}\\), Haoqin Tu\\(^{1*}\\), Fali Wang\\(^{3}\\), Hui Liu\\(^{4}\\), Xianfeng Tang\\(^{4}\\), Xinya Du\\(^{2}\\), Yuyin Zhou\\(^{1}\\), Cihang Xie\\(^{1}\\)" + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.204, + 0.695, + 0.221 + ], + "angle": 0, + "content": "1 University of California, Santa Cruz 2 University of Texas at Dallas" + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.221, + 0.609, + 0.237 + ], + "angle": 0, + "content": "3 The Pennsylvania State University 4 Amazon Research" + }, + { + "type": "list", + "bbox": [ + 0.182, + 0.204, + 0.695, + 0.237 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.228, + 0.249, + 0.652, + 0.266 + ], + "angle": 0, + "content": "Project Page: https://ucsc-vlaa.github.io/VLAA-Thinking/" + }, + { + "type": "text", + "bbox": [ + 0.231, + 0.267, + 0.754, + 0.281 + ], + "angle": 0, + "content": "7B Model: https://huggingface.co/UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-7B" + }, + { + "type": "text", + "bbox": [ + 0.231, + 0.283, + 0.754, + 0.296 + ], + "angle": 0, + "content": "3B Model: https://huggingface.co/UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B" + }, + { + "type": "text", + "bbox": [ + 0.231, + 0.298, + 0.717, + 0.312 + ], + "angle": 0, + "content": "Dataset: https://huggingface.co/datasets/UCSC-VLAA/VLAA-Thinkin" + }, + { + "type": "list", + "bbox": [ + 0.228, + 0.249, + 0.754, + 0.312 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.46, + 0.347, + 0.54, + 0.364 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.23, + 0.381, + 0.77, + 0.705 + ], + "angle": 0, + "content": "This work revisits the dominant supervised fine-tuning (SFT) then reinforcement learning (RL) paradigm for training Large Vision-Language Models (LVLMs), and 
reveals a key finding: SFT can significantly undermine subsequent RL by inducing \"pseudo reasoning paths\" imitated from expert models. While these paths may resemble the native reasoning paths of RL models, they often involve prolonged, hesitant, less informative steps, and incorrect reasoning. To systematically study this effect, we introduce VLAA-Thinking, a new multimodal dataset designed to support reasoning in LVLMs. Constructed via a six-step pipeline involving captioning, reasoning distillation, answer rewriting and verification, VLAA-Thinking comprises high-quality, step-by-step visual reasoning traces for SFT, along with a more challenging RL split from the same data source. Using this dataset, we conduct extensive experiments comparing SFT, RL and their combinations. Results show that while SFT helps models learn reasoning formats, it often locks aligned models into imitative, rigid reasoning modes that impede further learning. In contrast, building on Group Relative Policy Optimization (GRPO) with a novel mixed reward module integrating both perception and cognition signals, our RL approach fosters more genuine, adaptive reasoning behavior. Notably, our model VLAA-Thinker, based on Qwen2.5VL 3B, achieves top-1 performance on the Open LMM Reasoning Leaderboard1 among 4B-scale LVLMs, surpassing the previous state-of-the-art by \\(1.8\\%\\). We hope our findings provide valuable insights into developing reasoning-capable LVLMs and can inform future research in this area." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.745, + 0.348, + 0.763 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.784, + 0.828, + 0.884 + ], + "angle": 0, + "content": "Large Language Models (LLMs) with strong reasoning capability have recently gained wide attention with the emergence of OpenAI's o1/o3 and Deepseek-R1 (Guo et al., 2025; Jaech et al., 2024).
A common practice to empower models with reasoning abilities comprises two steps: supervised fine-tuning (SFT) on reasoning data, followed by reinforcement learning (RL) to further boost performance. This successful paradigm has inspired efforts to extend these strengths beyond textual domains to Large Vision-Language Models (LVLMs) (Peng et al., 2025; Chen et al., 2025a; Deng et al., 2025b; Shen et al., 2025; Yang et al., 2025b)." + }, + { + "type": "page_footnote", + "bbox": [ + 0.195, + 0.897, + 0.332, + 0.911 + ], + "angle": 0, + "content": "*Equal contribution." + }, + { + "type": "page_footnote", + "bbox": [ + 0.195, + 0.911, + 0.729, + 0.924 + ], + "angle": 0, + "content": "1https://huggingface.co/spaces/opencompass/Open_LMM_Reasoning_Leaderboard" + }, + { + "type": "list", + "bbox": [ + 0.195, + 0.897, + 0.729, + 0.924 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.504, + 0.96 + ], + "angle": 0, + "content": "1" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "image", + "bbox": [ + 0.174, + 0.078, + 0.825, + 0.263 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.277, + 0.828, + 0.354 + ], + "angle": 0, + "content": "Figure 1: Examples from LVLMs trained with different strategies for reasoning. Left: response from a model trained with SFT, showing pseudo reasoning traces and a number of pseudo self-reflective cues (i.e., aha-moments) imitated from R1. Right: response from a model trained with RL, showing native reasoning ability and authentic aha-moments that emerged from RL training. Wrong reasoning steps are colored red and aha-moments are highlighted."
+ }, + { + "type": "text", + "bbox": [ + 0.171, + 0.382, + 0.828, + 0.621 + ], + "angle": 0, + "content": "In this work, we take a step further and examine whether the widely adopted \"SFT then RL\" paradigm similarly benefits the development of reasoning-capable LVLMs. Specifically, we ask: 1) What are the distinct effects of SFT and RL in multimodal reasoning? and 2) Is this two-stage paradigm truly necessary for reasoning in LVLMs? To systematically explore these questions, we curate VLAA-Thinking, the first comprehensive and high-quality image-text reasoning dataset explicitly designed to support both SFT and RL. Unlike prior datasets, VLAA-Thinking includes detailed, step-by-step reasoning traces derived from the R1-style \"think-then-speak\" intermediate reasoning. We construct a dedicated SFT split featuring multimodal chain-of-thought (CoT) examples suitable for visual instruction tuning, alongside a more challenging RL split curated from the same source to encourage deeper and more adaptive reasoning behaviors. To effectively transfer reasoning capabilities from text-only models to the multimodal domain, we construct our dataset through a six-stage pipeline: metadata collection, image captioning, R1-based distillation, answer rewriting, verification, and split curation. Specifically, we input image captions and visual questions into DeepSeek-R1 to generate initial reasoning traces. These outputs are then rewritten for improved fluency and verified for correctness using a GPT-based verifier, resulting in a high-quality multimodal reasoning dataset for SFT and RL." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.625, + 0.828, + 0.807 + ], + "angle": 0, + "content": "Next, we carefully ablate the role of SFT, RL and their combinations in multimodal reasoning using our VLAA-Thinking dataset.
To better understand the role of SFT, we perform a detailed analysis, systematically examining the impact of SFT data type (e.g., with and without the self-reflective \"aha moments\"), dataset scale, and model capacity. To explore the potential of RL in the vision-language context, we design a novel mixed reward function within the Group Relative Policy Optimization (GRPO) (Shao et al., 2024) framework that involves both perception and cognition rewards to incentivize the model to produce well-reasoned answers. Specifically, our mixed reward signal blends 2 types of reward with 5 types of functions. For rule-based questions, there are functions for digit, multiple-choice, math and bounding box outputs. For open-ended questions, we adopt a competent reward model, XComposer-2.5-RM (Zang et al., 2025), along with a reference-based reward method to score an answer. We then closely investigate the effects of different reward functions, base models, and the interaction between SFT and GRPO to further optimize reasoning capabilities." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.812, + 0.828, + 0.926 + ], + "angle": 0, + "content": "Our extensive experiments comparing SFT and RL reveal several noteworthy insights. First, we probe the contribution of SFT and RL in multimodal reasoning: while SFT improves performance on standard tasks over the base model, it falls short in enhancing complex reasoning. Merely imitating an expert's thinking through SFT often induces \"pseudo reasoning paths\", a superficial reasoning pattern which may contain \"pseudo aha moments\" (superficial self-reflective cues), as illustrated in Figure 1. We show that these imitated reasoning patterns can hinder genuine reasoning advancement, i.e., \\(47\\%\\) relative performance drop on 7B models. 
This observation is also in line with recent studies highlighting the need for" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.948, + 0.506, + 0.96 + ], + "angle": 0, + "content": "2" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.347, + 0.048 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "image", + "bbox": [ + 0.175, + 0.077, + 0.825, + 0.248 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.258, + 0.825, + 0.318 + ], + "angle": 0, + "content": "Figure 2: Data generation pipeline. We first generate initial reasoning traces by feeding detailed captions and visual questions into DeepSeek-R1. These outputs are then rewritten for improved fluency and verified for correctness using a GPT-based verifier. The resulting data is split into VLAA-Thinking-SFT and VLAA-Thinking-RL." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.343, + 0.825, + 0.401 + ], + "angle": 0, + "content": "feedback and exploration signals to drive advanced reasoning behaviors (Peng et al., 2025). Additionally, our ablations show that for rule-based rewards, math and multiple-choice are more beneficial than others, and that a combination of both rule-based and open-ended rewards yields the best performance." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.405, + 0.827, + 0.56 + ], + "angle": 0, + "content": "While prior work suggests that SFT followed by RL in LVLMs offers the best of both worlds (Guo et al., 2025; Yang et al., 2025b; Deng et al., 2025b), first mimicking good reasoning formats and then refining them via RL feedback, we find that applying SFT before GRPO hurts performance on aligned models, with an average \\(12.7\\%\\) drop, and even smaller-scale SFT leads to a similar decline. Regarding model size, larger models are not immune to the degeneration brought by SFT, as 7B models share almost the same performance drop with their smaller counterparts.
Finally, examining the training procedure, we observe little correlation between response length, reward, and performance—SFT-ed models get higher initial rewards and longer response yet underperform RL-trained ones, contrasting with the previous observation that better models usually produce longer answers with higher RL reward (Guo et al., 2025; Peng et al., 2025)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.565, + 0.827, + 0.664 + ], + "angle": 0, + "content": "To summarize, while SFT helps unaligned models follow instructions, it limits exploration during RL by promoting imitative reasoning. In contrast, learning directly from reward signals yields more effective and adaptable thinking behavior. Empirically, direct RL proves superior. Our model, VLAA-Thinker-Qwen2.5VL-3B, achieves the top-1 performance on the Open LMM Reasoning Leaderboard among 4B-scale LVLMs, surpassing the previous state-of-the-art by \\(1.8\\%\\). Our case study further emphasizes these gains with more concise, effective reasoning traces presented in model answers." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.695, + 0.5, + 0.714 + ], + "angle": 0, + "content": "2 The VLAA-Thinking Dataset" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.734, + 0.827, + 0.834 + ], + "angle": 0, + "content": "To systematically evaluate the \"SFT then RL\" paradigm for developing reasoning capabilities in LVLMs, we construct VLAA-Thinking, a dataset that consists of two parts: 1) VLAA-Thinking-SFT which captures step-by-step reasoning grounded in visual inputs for SFT, and 2) VLAA-Thinking-RL which contains challenging samples designed specifically for RL. Our data generation pipeline is designed to transfer reasoning capabilities from a powerful text-only model to the multimodal domain through a structured, multi-stage process. 
The entire pipeline, as illustrated in Figure 2, consists of six key components:" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.84, + 0.828, + 0.926 + ], + "angle": 0, + "content": "#1: Metadata Collection We collect metadata from 9 vision-language datasets featuring either closed- or open-ended questions. Specifically, we sample data containing unique images from CLEVR-Math (Lindström & Abraham, 2022), Math PUMA (Zhuang et al., 2024), ArxivQA (Li et al., 2024a), DocVQA (Mathew et al., 2021), VizWiz (Gurari et al., 2018), and ALLaVA (Chen et al., 2024a), and process them through our complete data pipeline. In addition, we directly adopt COCO and VisualGenome data from LLaVA-CoT (Xu et al.," + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.948, + 0.504, + 0.96 + ], + "angle": 0, + "content": "3" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.347, + 0.048 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "table", + "bbox": [ + 0.177, + 0.077, + 0.821, + 0.284 + ], + "angle": 0, + "content": "
Name | Data Type | #Ori. | #Pipeline | #Final SFT | #Final RL
Collected from Distilling R1
CLEVR-Math | Closed-end | 35,000 | 28,018 | 5,923 | 2,000
GeoQA170K | Closed-end | - | - | - | 6,499
Math PUMA | Closed-end | 30,000 | 26,672 | 19,258 | 6,696
ArxivQA | Closed-end | 54,399 | 51,348 | 34,604 | 1,000
DocVQA | Closed-end | 10,194 | 8,206 | 4,897 | 1,000
VizWiz | Closed-end | 20,523 | 6,528 | 4,266 | 1,000
ALLaVA-LAION | Open-end | 47,066 | 18,123 | 10,496 | 3,000
Collected from LLaVA-CoT
COCO | Closed-end | 3,000 | 3,000 | 8,727 | 2,000
VisualGenome | Closed-end | 3,000 | 3,000 | 38,242 | 2,000
Total | Closed- & Open-end | 203,182 | 144,895 | 126,413 | 25,195
" + }, + { + "type": "table_caption", + "bbox": [ + 0.17, + 0.293, + 0.825, + 0.351 + ], + "angle": 0, + "content": "Table 1: Data statistics of VLAA-Thinking. We present the original volume of metadata (#Ori.), the data size after the distillation pipeline (#Pipeline), the size of sampled examples for SFT (#Final SFT) and RL (#Final RL), respectively. Note that we only use GeoQA170K with verifiable answers for the RL split." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.374, + 0.825, + 0.403 + ], + "angle": 0, + "content": "2024). An exception is GeoQA170K (Gao et al., 2023), which we include only in the RL split due to persistent hallucination issues during captioning. Detailed statistics are in Table 1." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.41, + 0.827, + 0.537 + ], + "angle": 0, + "content": "#2: Visual Input and Additional Information Each sample begins with an image, question, and its corresponding answer. To bridge the gap between the visual modality and language reasoning, we resort to GPT-4o to generate a detailed image caption describing the content in structured and semantically rich language (detailed prompts in Appendix A.1). During this process, we take full advantage of the provided knowledge in the data beyond just the GPT captions. In detail, we provide these dataset-specific information: (1) CLEVR-Math: Instructions for synthesizing the image from CLEVR (Johnson et al., 2017); (2) Math PUMA: Textual description of math problems in the image from the dataset itself. (3) ALLaVA-LAION: Fine-grained and verified GPT-4V captions from the original dataset." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.541, + 0.827, + 0.613 + ], + "angle": 0, + "content": "#3: Reasoning Answer Distillation We utilize a strong text-only reasoning model: DeepSeek-R1 to generate thinking rationale and final answers. The model is provided with the image caption, the visual question, and additional information from certain datasets. 
It responds in a structured reasoning format, enclosing the reasoning between <think> and </think> tags, and contains a sequence of logical steps leading to the final answer." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.618, + 0.827, + 0.703 + ], + "angle": 0, + "content": "#4: Answer Rewriting To enhance consistency and eliminate modality-specific artifacts, the raw reasoning answers generated by R1 are passed through a rewriting module (i.e., GPT-3.5-turbo (Brown et al., 2020) in our experiment). This module removes unnecessary phrases (e.g., references to \"caption\"), and ensures the answer adheres to a clean, instruction-following format based on the image. We further filter out samples with a sentence length gap larger than 15 words to ensure minimal modifications in this process." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.709, + 0.827, + 0.78 + ], + "angle": 0, + "content": "#5: Automated Verification To assess whether the generated reasoning answers are correct with respect to the groundtruth answer, we implement an automated verifier. This verifier compares the rewritten reasoning answer to the groundtruth of the visual question, determining whether the outputs are correct or incorrect. Only the examples that are verified as correct are retained as the final training data." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.785, + 0.827, + 0.926 + ], + "angle": 0, + "content": "#6: Curating Splits for SFT and RL The last step of our data generation pipeline is to curate two non-overlapping training sets for SFT and RL, respectively. Inspired by Chu et al. (2025), which finds that RL is particularly effective in encouraging deeper reasoning on challenging cases, we aim to select more challenging samples for the RL split. To achieve this, we propose using the presence of self-reflective cues (i.e., the \"aha moments\") in the distilled answers as an indicator of a sample's difficulty level (details are in Appendix A.2).
For the SFT split, we exclude samples with \"aha moments\", as such samples may be too complex to fully imitate through finetuning. On the other hand, the harder examples with \"aha moments\" form the RL split, on which reward-driven learning may be better suited to elicit meaningful reflection." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.505, + 0.96 + ], + "angle": 0, + "content": "4" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.104, + 0.828, + 0.149 + ], + "angle": 0, + "content": "Following these steps, our dataset adheres to the format {image, question, reasoning, answer}, with reasoning and answer generated by DeepSeek-R1. We construct a high-quality multimodal reasoning dataset with 126,413 samples for SFT and 25,195 samples for RL." + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.178, + 0.826, + 0.2 + ], + "angle": 0, + "content": "3 Investigating The Role of SFT for Multimodal Reasoning" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.206, + 0.827, + 0.375 + ], + "angle": 0, + "content": "SFT has become the de-facto approach for training LLMs. Recent studies aim to extend the strengths of SFT to empower LVLMs with reasoning abilities by training on specially formatted data. Unlike prior methods that incorporate standalone textual descriptions of images (Xu et al., 2024), this direct strategy enables the model to develop grammatically coherent reasoning abilities, allowing it to \"think before speak.\" In recent vision-language reasoning systems, there is a notable trend of complementing or even replacing SFT with RL to enhance complex reasoning abilities (Peng et al., 2025; Deng et al., 2025b). We follow this line and take it further by probing the underlying cause of this shift. 
Our finding suggests that the self-reflective thinking (\"aha moments\") learned through the SFT process is overloaded with excessive and irrelevant reasoning, becoming what we call \"pseudo aha moments\", and ultimately hurts performance. In this section, we explore 1) how the model performs when SFT-ed on data with aha-moments and 2) the effect of SFT data size on model performance." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.39, + 0.383, + 0.409 + ], + "angle": 0, + "content": "3.1 Experiment Setup" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.422, + 0.828, + 0.534 + ], + "angle": 0, + "content": "To investigate the effect of SFT training with aha-moments, we collect the distilled VQA pairs whose distilled answers contain aha-moments, totaling 55K samples. To study the effect of SFT with different sizes of training sets, we use perplexity (PPL) filtering to obtain a smaller SFT dataset. Specifically, we compute the PPL score of each answer in VLAA-Thinking-SFT-126K using Qwen2-VL-2B and Qwen2.5-VL-3B, and sort all samples by their average PPL scores over the two models. We keep the samples with high PPLs to obtain a total of 25K SFT samples, as these harder examples push models to learn more effectively and efficiently (Ankner et al., 2024; Li et al., 2024b)." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.542, + 0.829, + 0.657 + ], + "angle": 0, + "content": "We select four models for training: Qwen2VL (2B and 7B)2, Qwen2.5VL (3B and 7B). Each model is trained with a batch size of 128 and its vision encoder frozen. We evaluate model performance with VLMEvalKit (Duan et al., 2024) on the 6 challenging math reasoning benchmarks hosted on the Open LMM Reasoning Leaderboard: MathVista (Lu et al., 2024), MathVision (Wang et al., 2024b), MathVerse (Zhang et al., 2024), DynaMath (Zou et al., 2024), WeMath (Qiao et al., 2024), and LogicVista (Xiao et al., 2024).
We present the percentage of relative performance drop of different models in Figure 3. Detailed training and evaluation setup are in Appendix B." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.671, + 0.303, + 0.691 + ], + "angle": 0, + "content": "3.2 Findings" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.703, + 0.581, + 0.9 + ], + "angle": 0, + "content": "SFT with Aha Moments Degrades Performance. We present results for the Qwen-2.5-VL-3B model trained under three different settings using our SFT data in Table 2. Somewhat unexpectedly, the model fine-tuned on 55K examples containing the aha moment performs significantly worse than the base model, with an average drop of \\(10.5\\%\\). This suggests that chasing the aha moment through SFT is unreliable, as SFT merely teaches the model to mimic rather than to generalize genuine self-reflective reasoning. Additionally, the table shows evidence that straightforward SFT using multimodal reasoning data also degrades performance, e.g., we observe an average drop of \\(10.2\\%\\) and \\(19.1\\%\\) when fine-tuning on 25K and 126K samples, respectively." + }, + { + "type": "table", + "bbox": [ + 0.612, + 0.709, + 0.807, + 0.798 + ], + "angle": 0, + "content": "
Model | Avg.
Qwen2.5-VL-3B | 31.8
w/ aha-55K | 21.3
w/ 25K | 21.6
w/ 126K | 12.7
" + }, + { + "type": "table_caption", + "bbox": [ + 0.587, + 0.807, + 0.828, + 0.884 + ], + "angle": 0, + "content": "Table 2: Average performance over 6 reasoning benchmarks of Qwen-2.5-VL-3B SFT-ed on different sizes of SFT data and on data containing only examples with aha moment (aha-55K)." + }, + { + "type": "page_footnote", + "bbox": [ + 0.193, + 0.909, + 0.751, + 0.925 + ], + "angle": 0, + "content": "2In this work, Qwen2VL-2B and Qwen2VL-7B refer to the instruction-tuned versions." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.948, + 0.506, + 0.96 + ], + "angle": 0, + "content": "5" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.347, + 0.048 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "image", + "bbox": [ + 0.178, + 0.067, + 0.505, + 0.205 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.506, + 0.067, + 0.82, + 0.205 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.177, + 0.209, + 0.505, + 0.346 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.508, + 0.209, + 0.82, + 0.346 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.361, + 0.825, + 0.392 + ], + "angle": 0, + "content": "Figure 3: Delta percentage performance change of different models trained with supervised fine-tuning (SFT) only." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.419, + 0.828, + 0.588 + ], + "angle": 0, + "content": "More SFT Data, Worse Performance. Counterintuitively, even a five-fold increase in the supervised dataset (from 25K to 126K instances) often fails to improve performance and in most cases actually harms it. Models trained with 126K SFT samples suffer a relative performance drop of over average \\(14\\%\\) compared to their 25K-trained counterparts over all model and task settings (e.g., 25K: \\(32.2\\%\\) vs. 126K: \\(47.0\\%\\)). 
This degradation is particularly evident on complex datasets such as WeMath and DynaMath, where the relative decrease reaches as high as \\(97.9\\%\\) over Qwen2.5-VL models on average. Even on mid-difficulty benchmarks like MathVision and MathVerse (i.e., where model performance is relatively higher), the 126K SFT models underperform, with an average drop of \\(28.6\\%\\) compared to the untrained models, averaged over 4 models. These results suggest that simply scaling up SFT data does not boost generalizable reasoning skills of LVLMs, and may instead suppress the model's capacity on various reasoning tasks." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.617, + 0.827, + 0.729 + ], + "angle": 0, + "content": "Larger Models Are Not Immune to SFT Degeneration. Contrary to expectations, scaling up model size does not mitigate the adverse effects of excessive SFT: under heavier SFT, larger models also exhibit pronounced drops on the most challenging evaluations. The larger 7B models fine-tuned on 126K examples experience drops nearly identical in magnitude to their smaller 2B or 3B counterparts: \\(47.2\\%\\) for smaller models vs. \\(45.4\\%\\) for larger models compared with base models. Notably, despite the strong performance of the Qwen2.5-VL-7B model (e.g., \\(68.1\\%\\) on MathVista), it also suffers an average decline of \\(52.5\\%\\) on these reasoning tasks when SFT-ed with 126K data." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.734, + 0.828, + 0.793 + ], + "angle": 0, + "content": "These findings highlight the limitations of SFT as a tool for enhancing multimodal reasoning. While it may be suitable for learning reasoning formats, it falls short of expectations for fostering inherent self-reflection. Rather than simply scaling supervision data, our results suggest a shift toward more advanced training methods like RL."
+ }, + { + "type": "title", + "bbox": [ + 0.171, + 0.813, + 0.812, + 0.835 + ], + "angle": 0, + "content": "4 Improving Multimodal Reasoning with Mixed Rewards" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.854, + 0.828, + 0.926 + ], + "angle": 0, + "content": "The previous section shows that SFT is insufficient to transfer R1's ability to LVLMs on vision-language tasks. Therefore, it is crucial to seek for other post-training methods to elicit the reasoning ability of LVLMs. Since reinforcement learning (RL) is effective in enhancing reasoning ability (Yang et al., 2025a; Kirk et al., 2023), and GRPO has recently been proven more effective and efficient on textual math reasoning task (Shao et al., 2024; Jahn et al.," + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.506, + 0.96 + ], + "angle": 0, + "content": "6" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.347, + 0.048 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "image", + "bbox": [ + 0.178, + 0.105, + 0.818, + 0.263 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.277, + 0.825, + 0.32 + ], + "angle": 0, + "content": "Figure 4: The proposed Mixed Reward Module for GRPO training, comprising 2 reward formats (rule-based and open-ended) and 5 types of verifiable rewards (digit, MCQ, math, IoU and general reasoning)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.349, + 0.825, + 0.379 + ], + "angle": 0, + "content": "2025) than other methods like PPO (Schulman et al., 2017), it motivates us to apply GRPO training for vision-language reasoning tasks." 
+ }, + { + "type": "text", + "bbox": [ + 0.171, + 0.386, + 0.825, + 0.418 + ], + "angle": 0, + "content": "Mathematically, let \\( q \\) be a query and \\( \\{o_i\\}_{i=1}^G \\) be a group of \\( G \\) sampled outputs from the old policy model \\( \\pi_{old} \\), GRPO maximizes the following objective:" + }, + { + "type": "equation", + "bbox": [ + 0.171, + 0.424, + 0.82, + 0.459 + ], + "angle": 0, + "content": "\\[\n\\mathcal {J} _ {\\mathrm {G R P O}} (\\theta) = \\mathbb {E} _ {q, \\{o _ {i} \\} \\sim \\pi_ {\\theta_ {\\mathrm {o l d}}}} \\left[ \\frac {1}{G} \\sum_ {i = 1} ^ {G} \\frac {1}{| o _ {i} |} \\sum_ {t = 1} ^ {| o _ {i} |} \\min \\left(r _ {t} (\\theta) \\hat {A} _ {i, t}, \\operatorname {c l i p} (r _ {t} (\\theta), 1 - \\epsilon , 1 + \\epsilon) \\hat {A} _ {i, t}\\right) \\right] - \\beta D _ {\\mathrm {K L}} \\left(\\pi_ {\\theta} \\| \\pi_ {\\mathrm {r e f}}\\right)\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.459, + 0.825, + 0.491 + ], + "angle": 0, + "content": "where \\(\\hat{A}_{i,t}\\) is the estimated advantage, \\(\\beta\\) is the KL penalty coefficient and \\(\\pi_{\\theta}, \\pi_{\\theta_{\\mathrm{old}}}, \\pi_{\\mathrm{ref}}\\) are current, old, and reference policies, respectively." + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.524, + 0.462, + 0.54 + ], + "angle": 0, + "content": "4.1 GRPO with Mixed Reward" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.557, + 0.826, + 0.656 + ], + "angle": 0, + "content": "To better adapt GRPO to multimodal reasoning, in addition to adopting the rule-based reward similar to the textual GRPO training, it is necessary to consider additional characteristics introduced by the vision modality. Inspired by (Fu et al., 2024) which benchmarks LVLMs by perception and cognition (reasoning), we propose a mixed reward framework for GRPO training, as illustrated in Figure 4. 
The reward system comprises five types of verifiable rewards in two formats, encompassing both visual perception and visual reasoning tasks." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.661, + 0.826, + 0.791 + ], + "angle": 0, + "content": "Rule-Based Reward There are 4 types of rule-based rewards: digit matching, option letter matching, math expression matching, and Intersection over Union (IoU) for bounding boxes. For digit matching, the model is asked to answer counting questions from CLEVR-Math whose groundtruth is a single digit. For option letter matching, the model is required to answer an MCQ. For math expression matching, the model is asked to solve a math question, such as finding a function expression or the volume of a cone, and output its answer in LaTeX format. We use the Math Verify3 package to check for correctness. For bounding boxes, the model is prompted to output the bounding box coordinates of an object in the image, and an IoU score (ranging from 0 to 1) is computed as the reward." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.796, + 0.826, + 0.895 + ], + "angle": 0, + "content": "Open-ended Reward We leverage InternLM-XComposer2.5-Reward (Zang et al., 2025) as the scorer, denoted as \\( S_{\\theta}(\\cdot) \\), which takes an image and a QA pair as input, and outputs a reward score. Following Muhtar et al. (2025), the reward for a sampled response \\( \\hat{y} \\) is computed as \\( R_{open} = 1 - \\exp(-\\left(S_{\\theta}(\\hat{y}) - S_{\\theta}(y)\\right) \\times \\beta) \\) if \\( S_{\\theta}(\\hat{y}) > S_{\\theta}(y) \\) else 0, where \\( S_{\\theta}(y) \\) is the score of the reference answer, and \\( \\beta \\) is a smoothing hyperparameter. Note that the open-ended reward is normalized into [0,1], which is consistent with the scale of the rule-based rewards, partially avoiding reward hacking during training."
+ }, + { + "type": "page_footnote", + "bbox": [ + 0.194, + 0.909, + 0.515, + 0.924 + ], + "angle": 0, + "content": "3https://github.com/huggingface/Math-Verify" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.948, + 0.504, + 0.959 + ], + "angle": 0, + "content": "7" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.103, + 0.827, + 0.218 + ], + "angle": 0, + "content": "Implicit Format Reward Unlike Guo et al. (2025) and its subsequent works which use a separate reward term for format correctness, we discard this format reward term and make the format reward supersede all other rewards. Namely, whenever we are unable to extract a valid response from the raw answer, the reward would be 0. We empirically find that by specifying the output format in system prompt, the model is able to generate answers with correct formats through trials and errors. The implicit format reward design simplifies the reward computation. Further, it may yield better performance since less restriction is imposed on the exploration process (Zeng et al., 2025)." + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.243, + 0.51, + 0.262 + ], + "angle": 0, + "content": "4.2 Effect of SFT on GRPO Training" + }, + { + "type": "table", + "bbox": [ + 0.18, + 0.282, + 0.836, + 0.372 + ], + "angle": 0, + "content": "
GRPO BackboneMathVistaMathVisionMathVerse (vision-only)DynaMath (worst)WeMathLogicVistaAvg.
Qwen2VL-7B-Inst59.619.833.915.230.536.032.5
Qwen2VL-7B-Inst+SFT43.714.719.03.211.127.319.8(-39%)
Qwen2VL-7B-Base59.318.233.511.423.236.230.7
Qwen2VL-7B-Base+SFT49.516.425.06.420.432.725.7(-16%)
" + }, + { + "type": "table_caption", + "bbox": [ + 0.17, + 0.382, + 0.825, + 0.424 + ], + "angle": 0, + "content": "Table 3: Benchmark results of models trained with GRPO on different backbones. SFT+GRPO yields performance degradation, indicating that SFT is NOT compatible with GRPO in multimodal reasoning." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.453, + 0.828, + 0.54 + ], + "angle": 0, + "content": "SFT is NOT Compatible with GRPO in Multimodal Reasoning. Although we reveal in Section 3 that SFT alone leads to a performance drop in multimodal reasoning, it is still unclear whether SFT plays a crucial role in aiding GRPO, like the golden key in DeepSeek-R1. We experiment with different backbones for GRPO training. Specifically, we adopt Qwen2VL-7B-Base and Qwen2VL-7B-Inst, and perform SFT on them with 25K samples, followed by GRPO training." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.543, + 0.827, + 0.672 + ], + "angle": 0, + "content": "From Table 3, we observe that models undergoing SFT before GRPO training perform worse than those trained with GRPO alone, presenting an average drop of \\(8.9\\%\\) across Qwen2VL-Base and Qwen2VL-Inst compared to their non-SFT counterparts. We also find that SFT introduces more degradation to instruction models than to base models without instruction-following capabilities. For instance, Qwen2VL-Inst suffers a \\(7.7\\%\\) more drop in performance than Qwen2VL-Base post-SFT, suggesting that SFT can compromise the instruction-following ability crucial for effective GRPO training. Taken together, these results suggest that SFT is currently incompatible with GRPO in the context of multimodal reasoning, impairing both base and instruction-tuned LVLMs." 
+ }, + { + "type": "image", + "bbox": [ + 0.241, + 0.685, + 0.756, + 0.822 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.17, + 0.834, + 0.825, + 0.866 + ], + "angle": 0, + "content": "Figure 5: Impact of SFT with 5K and 10K samples before GRPO. Smaller-sized SFT datasets still jeopardize GRPO performance." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.895, + 0.825, + 0.927 + ], + "angle": 0, + "content": "Smaller SFT Dataset Still Jeopardizes GRPO Performance. Since we reveal in Section 3.2 that more SFT data yields lower performance, we investigate the effect of downsizing" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.504, + 0.96 + ], + "angle": 0, + "content": "8" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.104, + 0.825, + 0.148 + ], + "angle": 0, + "content": "the SFT training set. Following the PPL filtering method in Section 3, we select top-10K and top-5K samples from VLAA-Thinking-SFT-126K to finetune Qwen2.5-VL-3B, followed by GRPO training. For comparison, we also conduct GRPO training without SFT." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.153, + 0.825, + 0.226 + ], + "angle": 0, + "content": "We present the performance of Qwen2.5-VL-3B on each task in Figure 5. A clear observation is that applying SFT on 5K examples prior to GRPO significantly degrades performance compared to using GRPO alone, showing an average drop of \\(13.5\\%\\). Moreover, scaling up SFT data to 10K yields only a marginal improvement of \\(0.8\\%\\). These results further support that SFT before GRPO can hinder the model's learning capability."
+ }, + { + "type": "image", + "bbox": [ + 0.191, + 0.237, + 0.498, + 0.352 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.498, + 0.237, + 0.804, + 0.353 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.17, + 0.354, + 0.825, + 0.41 + ], + "angle": 0, + "content": "Figure 6: Response length (left) and reward (right) during training. Training with only GRPO yields the lowest response length and yet the highest final reward and best benchmark performance, indicating that response length, reward, and model performance are NOT necessarily related." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.421, + 0.827, + 0.534 + ], + "angle": 0, + "content": "Response Length, Reward, and Model Performance are NOT Necessarily Related. Prior work in RL suggests that longer responses often correlate with better reasoning and higher RL rewards (Guo et al., 2025; Zhou et al., 2025; Chen et al., 2025b). However, our findings in Figure 6 reveal that response length and reward in GRPO are not reliable indicators of reasoning ability. For instance, the 10K SFT+GRPO model produces the longest responses but ends up with lower rewards than the GRPO-only model (\\(\\sim 0.35\\) vs. \\(\\sim 0.5\\)) after training. Similarly, the 5K SFT+GRPO variant shows moderate length and reward but still underperforms on downstream tasks." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.54, + 0.827, + 0.68 + ], + "angle": 0, + "content": "Interestingly, both SFT-ed models start with higher initial rewards (e.g., \\(\\sim 0.20\\) for \\(10\\mathrm{K}\\) SFT+GRPO vs. \\(\\sim 0.05\\) for GRPO-only), which is likely due to their early learning experience with supervision since SFT and GRPO data share the same distribution. However, they exhibit limited reward improvement during training, whereas the GRPO-only model rapidly surpasses them. 
These trends further reveal that SFT merely provides a higher \"lower bound\" for RL training, yet it may lower the \"upper bound\" since the reasoning SFT data constrains the model's exploration paths. Therefore, reasoning is a natively emergent ability that is more likely to be developed through RL, not SFT. While SFT-ed models may appear to reason, their behavior is closer to pattern imitation, a form of pseudo-reasoning that lacks generalizable reasoning skills." + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.706, + 0.479, + 0.725 + ], + "angle": 0, + "content": "4.3 GRPO Training without SFT" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.738, + 0.827, + 0.795 + ], + "angle": 0, + "content": "Following the findings in the previous section, we directly conduct GRPO training, which yields four models: VLAA-Thinker-Qwen2-VL-2B, VLAA-Thinker-Qwen2-VL-7B, VLAA-Thinker-Qwen2.5-VL-3B, VLAA-Thinker-Qwen2.5-VL-7B. We also train on a base model of Qwen2-VL-7B, and the resulting model is named VLAA-Thinker-Qwen2-7B-Zero." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.801, + 0.825, + 0.872 + ], + "angle": 0, + "content": "We sample 4 responses per query with temperature 0.8. Rollout and training batch size are set to 512 and 256, respectively. We train our model for 1 episode (outer loop) and 1 epoch per episode (inner loop) on \\(8\\times\\)H100 GPUs with 49 steps. More details of the training setup are in Appendix C.1. We follow the same evaluation setup as described in Section 3.1. We present evaluation results in Table 4 and list our main findings below." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.896, + 0.827, + 0.926 + ], + "angle": 0, + "content": "Direct GRPO Training Boosts Model Performance. Models trained directly with GRPO on the VL-Thinking RL data consistently outperform their respective base models.
For example," + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.505, + 0.959 + ], + "angle": 0, + "content": "9" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "table", + "bbox": [ + 0.18, + 0.102, + 0.822, + 0.324 + ], + "angle": 0, + "content": "
ModelMathVistaMathVisionMathVerse (vision-only)DynaMath (worst)WeMathLogicVistaAvg.
4B-scale LVLMs
Qwen2-VL-2B48.016.117.53.810.826.620.5
Qwen2.5-VL-3B61.221.931.213.222.940.331.8
VLM-R1-Math-030562.721.932.213.030.040.533.4
VLAA-Thinker-Qwen2-2B43.614.819.03.412.630.420.3
VLAA-Thinker-Qwen2.5-3B61.024.436.418.233.838.535.4
7B-scale LVLMs
LLaVA-OneVision-7B58.618.319.39.020.933.326.6
InternLM-XComposer2.564.017.816.28.214.134.725.8
InternVL2.5-8B64.517.022.89.423.536.028.9
InternVL2-8B58.320.020.49.220.233.626.9
Qwen2-VL-7B61.619.225.411.022.333.328.8
Qwen2.5-VL-7B68.125.441.121.836.247.940.1
VLAA-Thinker-Qwen2-7B-Zero59.318.233.511.423.236.230.7
VLAA-Thinker-Qwen2-7B59.619.833.915.230.536.032.5
VLAA-Thinker-Qwen2.5-7B68.026.448.222.441.548.542.5
" + }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.333, + 0.828, + 0.364 + ], + "angle": 0, + "content": "Table 4: Evaluation results of 6 math reasoning benchmarks on Open LMM Leaderboard. VLAA-Thinker models significantly outperform baselines and other models." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.391, + 0.828, + 0.477 + ], + "angle": 0, + "content": "at the 7B scale, two models trained on VL-Thinking achieve an average score of \\(36.5\\%\\), marking a \\(2.0\\%\\) improvement over their base model average of \\(34.5\\%\\). Moreover, our best-performing 7B model consistently outperforms other similarly sized LVLMs (e.g., InternVL2.5-8B, LLaVA-OneVision-7B), while our 3B model surpasses the recent reasoning-focused model, VLM-R1-Math, by \\(1.1\\%\\) on average. These results once again demonstrate that GRPO significantly enhances reasoning capabilities, even without additional SFT." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.487, + 0.828, + 0.6 + ], + "angle": 0, + "content": "Stronger Instruction Model Leads to Better Post-GRPO Reasoning. An interesting observation is that model with better instruction tuning generally performs better. The instruction-aligned Qwen2-7B model, after GRPO, outperforms its unaligned counterpart VLAA-Thinker-Qwen2-7B-Zero by \\(1.8\\%\\) on average \\((31.3\\%\\) vs. \\(29.5\\%)\\), with notable gains on harder tasks like DynaMath \\((5.0\\%)\\) and WeMath \\((3.1\\%)\\). Moreover, using a stronger instruction-tuned model for GRPO further improves across both 3B and 7B scales — VLAA-Thinker-Qwen2.5 surpasses VLAA-Thinker-Qwen2 by \\(12.6\\%\\) on average, confirming that higher-quality instruction tuning leads to more effective post-RL reasoning." 
+ }, + { + "type": "image", + "bbox": [ + 0.277, + 0.601, + 0.688, + 0.751 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.767, + 0.825, + 0.798 + ], + "angle": 0, + "content": "Figure 7: Heatmap of different \"aha\" expressions generated by VLAA-Thinker models during training." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.813, + 0.828, + 0.926 + ], + "angle": 0, + "content": "Emergence of Authentic Aha Moments. To show that our GRPO training can induce an authentic self-reflection process, we plot the frequency of four aha expressions (\"alternatively\", \"double-check\", \"i should check\", \"wait\") for each VLAA-Thinker model in Figure 7. Since all models are trained using GRPO without being SFT-ed on distilled reasoning paths, all aha moments emerge from the GRPO process, demonstrating the model's self-developed reflective ability. Another finding is that the number of aha moments does not directly correlate with overall model performance, as more aha moments do not necessarily translate to higher reasoning scores." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "10" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.102, + 0.31, + 0.119 + ], + "angle": 0, + "content": "4.4 Ablations" + }, + { + "type": "table", + "bbox": [ + 0.182, + 0.148, + 0.825, + 0.288 + ], + "angle": 0, + "content": "<table><tr><td>
RowMethodDigitMathMCQIoUOpen-endedMViMVsWM
0Qwen2.5-VL-3B21.931.222.9
1w/o Digit23.534.628.8
2w/o Math21.432.727.0
3w/o MCQ21.533.918.4
4w/o IoU22.835.330.0
5All Rule-Based22.234.930.1
6Mixed Reward24.436.433.8
" + }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.297, + 0.825, + 0.327 + ], + "angle": 0, + "content": "Table 5: Ablation of Mixed Reward on MVi: MathVision, MVs: MathVerse and WM: WeMath. A combination of rule-based and open-ended rewards yields significant boost in performance." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.358, + 0.828, + 0.485 + ], + "angle": 0, + "content": "Mixed Reward. To demonstrate the effectiveness of our mixed reward strategy, we perform an ablation study on Qwen2.5-VL-3B by selectively disabling individual reward components and evaluating performance across three math reasoning benchmarks, as shown in Table 5. The model trained with Mixed Reward achieves the best overall performance, with an average improvement of \\(6.2\\%\\) over the baseline, demonstrating the effectiveness of our reward design. Using only rule-based rewards (All Rule-Based) also yields consistent gains (e.g., \\(29.1\\%\\) vs. \\(25.3\\%\\) baseline), while removing specific components—especially MCQ (w/o MCQ) leads to substantial drops. These results highlight the critical role of rule-based rewards in GRPO for multimodal reasoning tasks." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.513, + 0.581, + 0.717 + ], + "angle": 0, + "content": "Hyperparameters To search for better hyperparameters, we experiment with different learning rates (LR) and KL divergence settings on Qwen2.5-VL-3B. We start with a basic setting where LR anneals to zero following a cosine scheduler with no KL constraint. Results are shown in Table 6. LR1 uses a minimum learning rate of \\(8e^{-7}\\) with warmup ratio \\(10\\%\\), whereas LR2 uses a minimum learning rate of \\(5e^{-7}\\) with warmup ratio \\(3\\%\\). Since LR2 performs slightly better than LR1, we compare two KL settings on top of LR2. KL1 uses an initial KL of \\(1e^{-2}\\) and a target KL of \\(5e^{-3}\\), whereas KL2 uses an initial KL coefficient of \\(1e^{-3}\\) and a target KL of \\(5e^{-4}\\). 
We find that introducing KL constraints significantly improves the performance on MathVerse and DynaMath by \\(1.1\\%\\) and" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.717, + 0.77, + 0.733 + ], + "angle": 0, + "content": "\\(3.2\\%\\), respectively, and that using a smaller KL can encourage the model to evolve." + }, + { + "type": "table", + "bbox": [ + 0.598, + 0.514, + 0.817, + 0.645 + ], + "angle": 0, + "content": "
SettingsMVsDMLV
Basic31.715.038.5
Learning Rate
+ LR133.016.038.1
+ LR233.515.638.3
KL Coef.
+ KL134.418.837.8
+ KL235.818.639.2
" + }, + { + "type": "table_footnote", + "bbox": [ + 0.587, + 0.654, + 0.828, + 0.695 + ], + "angle": 0, + "content": "Table 6: Ablation on LR and KL Coef. on MVs: MathVerse, DM: DynaMath and LV: LogicVista." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.762, + 0.322, + 0.781 + ], + "angle": 0, + "content": "4.5 Case Study" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.798, + 0.828, + 0.898 + ], + "angle": 0, + "content": "We provide an example showcasing the improvement of VLAA-Thinker over the original model in Appendix C.3. Qwen2.5VL-7B generates lengthy response with wrong reasoning traces. Although it outputs some self-reflective patterns like \"re-evaluate\", the final answer remains wrong. On the other hand, VLAA-Thinker-Qwen2.5VL-7B is able to reason on the right track, with only a minor mistake near the end of its thinking process. Nevertheless, the high-level idea and reasoning process is overall correct, demonstrating strong capability of solving complex reasoning tasks." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.508, + 0.96 + ], + "angle": 0, + "content": "11" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.1, + 0.36, + 0.119 + ], + "angle": 0, + "content": "5 Related Work" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.145, + 0.828, + 0.37 + ], + "angle": 0, + "content": "Vision-Language Reasoning Models. Recent advances in vision-language (VL) reasoning models build on the success of text-only reasoning systems like OpenAI's o1 (Jaech et al., 2024) and DeepSeek-R1 (Guo et al., 2025). Earlier VL methods, such as few-shot prompting and chain-of-thought (CoT), offered limited visual reasoning (Brown et al., 2020; Wei et al., 2022). 
Recently, LLaVA-CoT (Xu et al., 2024) adopts an SFT approach with 4-step structured outputs to enhance the model's reasoning, yet lacks flexibility due to its rigid output format. More recent models incorporate more natural reasoning traces and reinforcement learning. VLM-R1 (Shen et al., 2025) and R1-V (Chen et al., 2025a) align multimodal LLMs using step-by-step reasoning and policy optimization. VisualThinker-R1-Zero (Zhou et al., 2025) goes further by training a 2B model via pure RL from scratch, achieving emergent inner reasoning. LMM-R1 (Peng et al., 2025) transfers CoT skills from language to vision through staged RL. Vision-R1 (Huang et al., 2025) combines reasoning trace supervision and RL with correctness and format rewards to train a strong 7B VL reasoner. Different from these concurrent works, we propose a high-quality multimodal reasoning dataset with R1-like reasoning traces for both SFT and RL, and provide a comprehensive study on training paradigms." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.375, + 0.828, + 0.558 + ], + "angle": 0, + "content": "Reward Modeling in Reinforcement Learning. Reward design plays a central role in reasoning-oriented RL. While model-based rewards offer flexibility (Kwon et al., 2023; Wang et al., 2024a; Gao et al., 2024), they are prone to reward hacking (Eisenstein et al., 2023; Chen et al., 2024b; Fu et al., 2025), making them risky for reasoning tasks. Recent VL models prefer binary correctness rewards (Huang et al., 2025; Zhou et al., 2025) for math or QA tasks, directly reinforcing accurate outputs. Others apply rule-based rewards, enforcing structured formats or logic chains (Liu et al., 2025; Deng et al., 2025a). While recent studies deploy strong reward models for enhancing LVLM reasoning, they are grounded in specific domains or simpler tasks (Muhtar et al., 2025; Tu et al., 2025).
GRPO-style methods use relative ranking within output batches to guide optimization without value critics (Shao et al., 2024; Guo et al., 2025). Our Mixed Reward objective combines model-based and rule-based rewards in four complex rewarding scenarios, yielding better performance than existing approaches." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.617, + 0.334, + 0.636 + ], + "angle": 0, + "content": "6 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.662, + 0.828, + 0.79 + ], + "angle": 0, + "content": "This work provides a comparative analysis of the effectiveness of leveraging SFT or RL (more specifically, GRPO) to build LVLMs with strong reasoning ability. We show by extensive experiments that distilling reasoning data and performing SFT is a deficient way to transfer reasoning ability across modalities. We then extend our dataset to GRPO training with a proposed mixed reward objective, which yields substantial improvement over the baseline models. We present several findings regarding combining SFT and GRPO and the correlation between reward, response length, and final performance. These results indicate that reasoning is a natively emergent ability acquired from RL, rather than SFT, which merely equips the model with 'pseudo-reasoning' ability." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.849, + 0.382, + 0.87 + ], + "angle": 0, + "content": "Acknowledgement" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.895, + 0.825, + 0.926 + ], + "angle": 0, + "content": "We thank the Microsoft Accelerate Foundation Models Research Program for supporting our computing needs." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "12" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review."
+ }, + { + "type": "title", + "bbox": [ + 0.174, + 0.101, + 0.295, + 0.119 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.13, + 0.828, + 0.176 + ], + "angle": 0, + "content": "Zachary Ankner, Cody Blakeney, Kartik Sreenivasan, Max Marion, Matthew L Leavitt, and Mansheej Paul. Perplexed by perplexity: Perplexity-based data pruning with small reference models. arXiv preprint arXiv:2405.20541, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.182, + 0.827, + 0.226 + ], + "angle": 0, + "content": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. NeurIPS, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.233, + 0.827, + 0.288 + ], + "angle": 0, + "content": "Guiming Hardy Chen, Shunian Chen, Ruifei Zhang, Junying Chen, Xiangbo Wu, Zhiyi Zhang, Zhihong Chen, Jianquan Li, Xiang Wan, and Benyou Wang. Allava: Harnessing gpt4v-synthesized data for lite vision-language models. arXiv preprint arXiv:2402.11684, 2024a." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.297, + 0.827, + 0.34 + ], + "angle": 0, + "content": "Liang Chen, Lei Li, Haozhe Zhao, Yifan Song, and Vinci. R1-v: Reinforcing super generalization ability in vision-language models with less than $3. https://github.com/Deep-Agent/R1-V, 2025a. Accessed: 2025-02-02." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.348, + 0.825, + 0.391 + ], + "angle": 0, + "content": "Lichang Chen, Chen Zhu, Davit Soselia, Jiuhai Chen, Tianyi Zhou, Tom Goldstein, Heng Huang, Mohammad Shoeybi, and Bryan Catanzaro. Odin: Disentangled reward mitigates hacking in rlhf. arXiv preprint arXiv:2402.07319, 2024b." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.399, + 0.827, + 0.442 + ], + "angle": 0, + "content": "Zhipeng Chen, Yingqian Min, Beichen Zhang, Jie Chen, Jinhao Jiang, Daixuan Cheng, Wayne Xin Zhao, Zheng Liu, Xu Miao, Yang Lu, et al. An empirical study on eliciting and improving r1-like reasoning models. arXiv preprint arXiv:2503.04548, 2025b." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.449, + 0.827, + 0.493 + ], + "angle": 0, + "content": "Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V Le, Sergey Levine, and Yi Ma. Sft memorizes, rl generalizes: A comparative study of foundation model post-training. arXiv preprint arXiv:2501.17161, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.5, + 0.825, + 0.543 + ], + "angle": 0, + "content": "Huilin Deng, Ding Zou, Rui Ma, Hongchen Luo, Yang Cao, and Yu Kang. Boosting the generalization and reasoning of vision language models with curriculum reinforcement learning. arXiv preprint arXiv:2503.07065, 2025a." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.55, + 0.827, + 0.594 + ], + "angle": 0, + "content": "Yihe Deng, Hritik Bansal, Fan Yin, Nanyun Peng, Wei Wang, and Kai-Wei Chang. Openthinker: An early exploration to complex vision-language reasoning via iterative self-improvement. arXiv preprint arXiv:2503.17352, 2025b." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.601, + 0.825, + 0.658 + ], + "angle": 0, + "content": "Haodong Duan, Junming Yang, Yuxuan Qiao, Xinyu Fang, Lin Chen, Yuan Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Jiaqi Wang, et al. Vlmevalkit: An open-source toolkit for evaluating large multi-modality models. In Proceedings of the 32nd ACM International Conference on Multimedia, pp. 11198-11201, 2024." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.666, + 0.827, + 0.723 + ], + "angle": 0, + "content": "Jacob Eisenstein, Chirag Nagpal, Alekh Agarwal, Ahmad Beirami, Alex D'Amour, DJ Dvi-jotham, Adam Fisch, Katherine Heller, Stephen Pfohl, Deepak Ramachandran, et al. Helping or herding? reward model ensembles mitigate but do not eliminate reward hacking. arXiv preprint arXiv:2312.09244, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.73, + 0.827, + 0.786 + ], + "angle": 0, + "content": "Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, and Rongrong Ji. Mme: A comprehensive evaluation benchmark for multimodal large language models, 2024. URL https://arxiv.org/abs/2306.13394." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.794, + 0.825, + 0.825 + ], + "angle": 0, + "content": "Jiayi Fu, Xuandong Zhao, Chengyuan Yao, Heng Wang, Qi Han, and Yanghua Xiao. Reward shaping to mitigate reward hacking in rlhf. arXiv preprint arXiv:2502.18770, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.831, + 0.825, + 0.875 + ], + "angle": 0, + "content": "Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, Jianhua Han, Hang Xu, Zhenguo Li, et al. G-llava: Solving geometric problem with multi-modal large language model. arXiv preprint arXiv:2312.11370, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.882, + 0.827, + 0.925 + ], + "angle": 0, + "content": "Jiaxuan Gao, Shusheng Xu, Wenjie Ye, Weilin Liu, Chuyi He, Wei Fu, Zhiyu Mei, Guangju Wang, and Yi Wu. On designing effective rl reward at training time for llm reasoning. arXiv preprint arXiv:2410.15115, 2024." 
+ }, + { + "type": "list", + "bbox": [ + 0.173, + 0.13, + 0.828, + 0.925 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.508, + 0.96 + ], + "angle": 0, + "content": "13" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.103, + 0.828, + 0.148 + ], + "angle": 0, + "content": "Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.157, + 0.827, + 0.215 + ], + "angle": 0, + "content": "Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. Vizwiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3608-3617, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.224, + 0.827, + 0.268 + ], + "angle": 0, + "content": "Jian Hu, Xibin Wu, Zilin Zhu, Xianyu, Weixun Wang, Dehao Zhang, and Yu Cao. Openrlhf: An easy-to-use, scalable and high-performance rlhf framework. arXiv preprint arXiv:2405.11143, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.277, + 0.825, + 0.322 + ], + "angle": 0, + "content": "Wenxuan Huang, Bohan Jia, Zijie Zhai, Shaosheng Cao, Zheyu Ye, Fei Zhao, Yao Hu, and Shaohui Lin. Vision-r1: Incentivizing reasoning capability in multimodal large language models. arXiv preprint arXiv:2503.06749, 2025." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.331, + 0.827, + 0.375 + ], + "angle": 0, + "content": "Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.385, + 0.827, + 0.43 + ], + "angle": 0, + "content": "Afrar Jahin, Arif Hassan Zidan, Yu Bao, Shizhe Liang, Tianming Liu, and Wei Zhang. Unveiling the mathematical reasoning in deepseek models: A comparative study of large language models. arXiv preprint arXiv:2503.10573, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.439, + 0.827, + 0.497 + ], + "angle": 0, + "content": "Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2901-2910, 2017." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.506, + 0.827, + 0.551 + ], + "angle": 0, + "content": "Robert Kirk, Ishita Mediratta, Christoforos Nalmpantis, Jelena Luketina, Eric Hambro, Edward Grefenstette, and Roberta Raileanu. Understanding the effects of rlhf on llm generalisation and diversity. arXiv preprint arXiv:2310.06452, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.56, + 0.825, + 0.591 + ], + "angle": 0, + "content": "Minae Kwon, Sang Michael Xie, Kalesha Bullard, and Dorsa Sadigh. Reward design with language models. arXiv preprint arXiv:2303.00001, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.6, + 0.827, + 0.644 + ], + "angle": 0, + "content": "Lei Li, Yuqi Wang, Runxin Xu, Peiyi Wang, Xiachong Feng, Lingpeng Kong, and Qi Liu. Multimodal arxiv: A dataset for improving scientific comprehension of large vision-language models. 
arXiv preprint arXiv:2403.00231, 2024a." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.653, + 0.827, + 0.697 + ], + "angle": 0, + "content": "Ming Li, Yong Zhang, Shwai He, Zhitao Li, Hongyu Zhao, Jianzong Wang, Ning Cheng, and Tianyi Zhou. Superfiltering: Weak-to-strong data filtering for fast instruction-tuning. arXiv preprint arXiv:2402.00530, 2024b." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.706, + 0.827, + 0.749 + ], + "angle": 0, + "content": "Adam Dahlgren Lindström and Savitha Sam Abraham. Clevr-math: A dataset for compositional language, visual and mathematical reasoning. arXiv preprint arXiv:2208.05358, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.76, + 0.825, + 0.804 + ], + "angle": 0, + "content": "Ziyu Liu, Zeyi Sun, Yuhang Zang, Xiaoyi Dong, Yuhang Cao, Haodong Duan, Dahua Lin, and Jiaqi Wang. Visual-rft: Visual reinforcement fine-tuning. arXiv preprint arXiv:2503.01785, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.814, + 0.827, + 0.871 + ], + "angle": 0, + "content": "Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. In International Conference on Learning Representations (ICLR), 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.881, + 0.827, + 0.926 + ], + "angle": 0, + "content": "Minesh Mathew, Dimosthenis Karatzas, and CV Jawahar. Docvqa: A dataset for vqa on document images. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pp. 2200-2209, 2021." 
+ }, + { + "type": "list", + "bbox": [ + 0.173, + 0.103, + 0.828, + 0.926 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "14" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.103, + 0.826, + 0.148 + ], + "angle": 0, + "content": "Dilxat Muhtar, Enzhuo Zhang, Zhenshi Li, Feng Gu, Yanglangxing He, Pengfeng Xiao, and Xueliang Zhang. Quality-driven curation of remote sensing vision-language data via learned scoring models. arXiv preprint arXiv:2503.00743, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.155, + 0.827, + 0.201 + ], + "angle": 0, + "content": "Yingzhe Peng, Gongrui Zhang, Miaosen Zhang, Zhiyuan You, Jie Liu, Qipeng Zhu, Kai Yang, Xingzhong Xu, Xin Geng, and Xu Yang. Lmm-r1: Empowering 3b lmms with strong reasoning abilities through two-stage rule-based rl. arXiv preprint arXiv:2503.07536, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.209, + 0.825, + 0.265 + ], + "angle": 0, + "content": "Runqi Qiao, Qiuna Tan, Guanting Dong, Minhui Wu, Chong Sun, Xiaoshuai Song, Zhuoma GongQue, Shanglin Lei, Zhe Wei, Miaoxuan Zhang, et al. We-math: Does your large multimodal model achieve human-like mathematical reasoning? arXiv preprint arXiv:2407.01284, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.275, + 0.825, + 0.306 + ], + "angle": 0, + "content": "John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.315, + 0.826, + 0.358 + ], + "angle": 0, + "content": "Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. 
Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.368, + 0.826, + 0.411 + ], + "angle": 0, + "content": "Haozhan Shen, Zilun Zhang, Kangjia Zhao, Qianqian Zhang, Ruochen Xu, and Tiancheng Zhao. Vlm-r1: A stable and generalizable r1-style large vision-language model. https://github.com/om-ai-lab/VLM-R1, 2025. Accessed: 2025-02-15." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.421, + 0.827, + 0.45 + ], + "angle": 0, + "content": "Haoqin Tu, Weitao Feng, Hardy Chen, Hui Liu, Xianfeng Tang, and Cihang Xie. Vilbench: A suite for vision-language process reward modeling. arXiv preprint arXiv:2503.20271, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.46, + 0.827, + 0.503 + ], + "angle": 0, + "content": "Binghai Wang, Rui Zheng, Lu Chen, Yan Liu, Shihan Dou, Caishuang Huang, Wei Shen, Senjie Jin, Enyu Zhou, Chenyu Shi, et al. Secrets of rlhf in large language models part ii: Reward modeling. arXiv preprint arXiv:2401.06080, 2024a." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.513, + 0.826, + 0.569 + ], + "angle": 0, + "content": "Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Houxing Ren, Aojun Zhou, Mingjie Zhan, and Hongsheng Li. Measuring multimodal mathematical reasoning with math-vision dataset. In The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024b. URL https://openreview.net/forum?id=QWTCxMpPA." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.579, + 0.825, + 0.621 + ], + "angle": 0, + "content": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. NeurIPS, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.632, + 0.825, + 0.661 + ], + "angle": 0, + "content": "Yijia Xiao, Edward Sun, Tianyu Liu, and Wei Wang. 
Logicvista: Multimodal llm logical reasoning benchmark in visual contexts. arXiv preprint arXiv:2407.04973, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.671, + 0.825, + 0.7 + ], + "angle": 0, + "content": "Guowei Xu, Peng Jin, Hao Li, Yibing Song, Lichao Sun, and Li Yuan. Llava-cot: Let vision language models reason step-by-step, 2024. URL https://arxiv.org/abs/2411.10440." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.71, + 0.825, + 0.753 + ], + "angle": 0, + "content": "Haoyan Yang, Ting Hua, Shangqian Gao, Binfeng Xu, Zheng Tang, Jie Xu, Hongxia Jin, and Vijay Srinivasan. Dynamic noise preference optimization for llm self-improvement via synthetic data. arXiv preprint arXiv:2502.05400, 2025a." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.763, + 0.827, + 0.817 + ], + "angle": 0, + "content": "Yi Yang, Xiaoxuan He, Hongkun Pan, Xiyan Jiang, Yan Deng, Xingtao Yang, Haoyu Lu, Dacheng Yin, Fengyun Rao, Minfeng Zhu, et al. R1-onevision: Advancing generalized multimodal reasoning through cross-modal formalization. arXiv preprint arXiv:2503.10615, 2025b." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.829, + 0.825, + 0.872 + ], + "angle": 0, + "content": "Yuhang Zang, Xiaoyi Dong, Pan Zhang, Yuhang Cao, Ziyu Liu, Shengyuan Ding, Shenxi Wu, Yubo Ma, Haodong Duan, Wenwei Zhang, et al. Internlm-xcomposer2. 5-reward: A simple yet effective multi-modal reward model. arXiv preprint arXiv:2501.12368, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.882, + 0.825, + 0.925 + ], + "angle": 0, + "content": "Weihao Zeng, Yuzhen Huang, Qian Liu, Wei Liu, Keqing He, Zejun Ma, and Junxian He. Simplerl-zoo: Investigating and taming zero reinforcement learning for open base models in the wild. arXiv preprint arXiv:2503.18892, 2025." 
+ }, + { + "type": "list", + "bbox": [ + 0.173, + 0.103, + 0.827, + 0.925 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.508, + 0.96 + ], + "angle": 0, + "content": "15" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.103, + 0.825, + 0.162 + ], + "angle": 0, + "content": "Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Yu Qiao, et al. Mathverse: Does your multi-modal llm truly see the diagrams in visual math problems? In European Conference on Computer Vision, pp. 169-186. Springer, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.169, + 0.825, + 0.212 + ], + "angle": 0, + "content": "Hengguang Zhou, Xinui Li, Ruochen Wang, Minhao Cheng, Tianyi Zhou, and Cho-Jui Hsieh. R1-zero's\" aha moment\" in visual reasoning on a 2b non-sft model. arXiv preprint arXiv:2503.05132, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.221, + 0.825, + 0.263 + ], + "angle": 0, + "content": "Wenwen Zhuang, Xin Huang, Xiantao Zhang, and Jin Zeng. Math-puma: Progressive upward multimodal alignment to enhance mathematical reasoning. arXiv preprint arXiv:2408.08640, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.272, + 0.825, + 0.316 + ], + "angle": 0, + "content": "Chengke Zou, Xingang Guo, Rui Yang, Junyu Zhang, Bin Hu, and Huan Zhang. Dynamath: A dynamic visual benchmark for evaluating mathematical reasoning robustness of vision language models. arXiv preprint arXiv:2411.00836, 2024." 
+ }, + { + "type": "list", + "bbox": [ + 0.175, + 0.103, + 0.825, + 0.316 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "16" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.1, + 0.398, + 0.12 + ], + "angle": 0, + "content": "A Data Generation" + }, + { + "type": "title", + "bbox": [ + 0.173, + 0.141, + 0.295, + 0.16 + ], + "angle": 0, + "content": "A.1 Prompt" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.173, + 0.825, + 0.203 + ], + "angle": 0, + "content": "We show the prompts for captioning (Figure 8), R1 answer distillation (Figure 9), rewriting (Figure 10) and verification (Figure 11)." + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.217, + 0.296, + 0.234 + ], + "angle": 0, + "content": "Prompt for Captioning" + }, + { + "type": "text", + "bbox": [ + 0.135, + 0.242, + 0.699, + 0.256 + ], + "angle": 0, + "content": "You are a vision-language model generating a highly detailed caption of an image." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.257, + 0.632, + 0.269 + ], + "angle": 0, + "content": "Summarize the environment or setting (indoor/outdoor, surroundings)." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.269, + 0.689, + 0.281 + ], + "angle": 0, + "content": "Describe visible objects, people, or structures (colors, shapes, textures, positions)." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.281, + 0.859, + 0.294 + ], + "angle": 0, + "content": "Transcribe all text verbatim. For equations, use LaTeX when appropriate but do not solve or interpret them." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.294, + 0.68, + 0.306 + ], + "angle": 0, + "content": "If structured data (tables, charts) appears, use Markdown formatting for clarity." 
+ }, + { + "type": "text", + "bbox": [ + 0.137, + 0.306, + 0.74, + 0.319 + ], + "angle": 0, + "content": "Include labels, annotations, brand names, or logos, if any, otherwise don't mention them." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.319, + 0.666, + 0.332 + ], + "angle": 0, + "content": "Note any visible expressions or emotional tone factually, without speculation." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.332, + 0.56, + 0.344 + ], + "angle": 0, + "content": "## Maintain a logical order: from overall context to finer details." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.344, + 0.572, + 0.357 + ], + "angle": 0, + "content": "## Provide only the caption without extra context or commentary." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.357, + 0.85, + 0.37 + ], + "angle": 0, + "content": "## Be unbiased and faithful in your description, using natural language and Markdown only where relevant." + }, + { + "type": "list", + "bbox": [ + 0.135, + 0.242, + 0.859, + 0.37 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.312, + 0.398, + 0.684, + 0.415 + ], + "angle": 0, + "content": "Figure 8: Prompt for captioning with GPT-4-Turbo." + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.435, + 0.294, + 0.451 + ], + "angle": 0, + "content": "Prompt for Distillation" + }, + { + "type": "text", + "bbox": [ + 0.134, + 0.46, + 0.862, + 0.501 + ], + "angle": 0, + "content": "You have advanced visual perception abilities and can directly analyze images as if you are looking at them. You will be provided with detailed visual descriptions, but you should interpret them as if they represent your actual visual understanding rather than text-based captions." + }, + { + "type": "text", + "bbox": [ + 0.134, + 0.51, + 0.862, + 0.551 + ], + "angle": 0, + "content": "Answer questions as if you are visually perceiving the scene, not reading a caption. 
Provide natural and confident responses about objects, relationships, and numerical or spatial reasoning. Use a descriptive, visually grounded tone, avoiding mention of text." + }, + { + "type": "text", + "bbox": [ + 0.134, + 0.56, + 0.862, + 0.602 + ], + "angle": 0, + "content": "Never mention that you are reading text or captions. Infer spatial relationships, numerical properties, and logical conclusions based on the perceived \"image.\" If information is unclear, respond naturally as if there are visual limitations (e.g., 'It appears that...')." + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.611, + 0.201, + 0.64 + ], + "angle": 0, + "content": "Caption: {caption}" + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.649, + 0.211, + 0.678 + ], + "angle": 0, + "content": "Question: {question}" + }, + { + "type": "image_caption", + "bbox": [ + 0.311, + 0.714, + 0.685, + 0.731 + ], + "angle": 0, + "content": "Figure 9: Prompt for distillation with Deepseek-R1." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.762, + 0.431, + 0.781 + ], + "angle": 0, + "content": "A.2 Aha-Moment Filtering" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.794, + 0.828, + 0.839 + ], + "angle": 0, + "content": "We use the following list of keywords to identify aha moments: wait, again, double-check, hmm, mistake, alternatively, check, i should confirm. All answers are matched with the logic: has_aha = any([aha in text.lower() for aha in ahas])." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.864, + 0.686, + 0.881 + ], + "angle": 0, + "content": "A.3 Sample Demonstration for VLAA-Thinking-SFT-126K" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.895, + 0.828, + 0.927 + ], + "angle": 0, + "content": "We show several examples from VLAA-Thinking-SFT-126K in Figure 14, Figure 15, Figure 16, Figure 17 and Figure 18." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "17" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.105, + 0.286, + 0.12 + ], + "angle": 0, + "content": "Prompt for Rewriting" + }, + { + "type": "text", + "bbox": [ + 0.134, + 0.129, + 0.861, + 0.158 + ], + "angle": 0, + "content": "You will receive a snippet of text that references a \"description\" or \"caption\" of an image. Your task is to produce a **nearly identical** version of that text with **minimal** changes, focusing on the following:" + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.167, + 0.861, + 0.181 + ], + "angle": 0, + "content": "1. **Replace references to \"description\", \"caption\" and \"rationale\" with wording that references \"the image.\"**" + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.181, + 0.627, + 0.194 + ], + "angle": 0, + "content": "- For example, \"The description says...\" could become \"The image shows...\"" + }, + { + "type": "text", + "bbox": [ + 0.134, + 0.194, + 0.563, + 0.206 + ], + "angle": 0, + "content": "- \"The caption suggests...\" could become \"The image suggests...\"" + }, + { + "type": "text", + "bbox": [ + 0.134, + 0.206, + 0.565, + 0.219 + ], + "angle": 0, + "content": "- \"Based on the rationale...\" could become \"Based on the image...\"" + }, + { + "type": "text", + "bbox": [ + 0.134, + 0.219, + 0.734, + 0.233 + ], + "angle": 0, + "content": "- Make sure the replacement sounds natural but does **not** otherwise change the meaning." + }, + { + "type": "list", + "bbox": [ + 0.133, + 0.167, + 0.861, + 0.233 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.242, + 0.861, + 0.27 + ], + "angle": 0, + "content": "2. 
**Preserve all line breaks, punctuation, and spacing** as much as possible, and make **no additional edits** outside of these replacements." + }, + { + "type": "text", + "bbox": [ + 0.134, + 0.28, + 0.451, + 0.295 + ], + "angle": 0, + "content": "3. You should only output the rewritten content." + }, + { + "type": "list", + "bbox": [ + 0.133, + 0.242, + 0.861, + 0.295 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.306, + 0.253, + 0.334 + ], + "angle": 0, + "content": "Here is the input: {input}" + }, + { + "type": "image_caption", + "bbox": [ + 0.285, + 0.361, + 0.712, + 0.379 + ], + "angle": 0, + "content": "Figure 10: Prompt for answer rewriting with GPT-4-Turbo." + }, + { + "type": "title", + "bbox": [ + 0.143, + 0.396, + 0.295, + 0.412 + ], + "angle": 0, + "content": "Prompt for Verification" + }, + { + "type": "text", + "bbox": [ + 0.135, + 0.421, + 0.294, + 0.434 + ], + "angle": 0, + "content": "You are a fair evaluator." + }, + { + "type": "text", + "bbox": [ + 0.135, + 0.434, + 0.54, + 0.447 + ], + "angle": 0, + "content": "You will be given a groundtruth and an answer from a model." + }, + { + "type": "text", + "bbox": [ + 0.135, + 0.447, + 0.668, + 0.46 + ], + "angle": 0, + "content": "If the answer aligns with the groundtruth, output \"Yes\". Otherwise, output \"No\"." + }, + { + "type": "text", + "bbox": [ + 0.135, + 0.46, + 0.417, + 0.473 + ], + "angle": 0, + "content": "Your output should only be \"Yes\" or \"No\"." + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.484, + 0.224, + 0.511 + ], + "angle": 0, + "content": "groundtruth: {gold}" + }, + { + "type": "text", + "bbox": [ + 0.137, + 0.524, + 0.191, + 0.549 + ], + "angle": 0, + "content": "answer: {pred}" + }, + { + "type": "image_caption", + "bbox": [ + 0.3, + 0.577, + 0.696, + 0.594 + ], + "angle": 0, + "content": "Figure 11: Prompt for verification with GPT-3.5-Turbo." 
+ }, + { + "type": "title", + "bbox": [ + 0.172, + 0.617, + 0.51, + 0.638 + ], + "angle": 0, + "content": "B Details of SFT Experiments" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.659, + 0.303, + 0.678 + ], + "angle": 0, + "content": "B.1 Training" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.691, + 0.828, + 0.749 + ], + "angle": 0, + "content": "To enhance instruction-following ability, we append task-specific instructions (i.e., MCQ, short answer) to questions. The system prompt shown in Figure 12 is used. We use a global batch size of 128. Models are trained for 190 steps on 25K samples and 985 steps on 126K samples. All experiments are run on 8 H100 GPUs." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.754, + 0.826, + 0.799 + ], + "angle": 0, + "content": "Interestingly, we observe loss spikes during 25K SFT training on Qwen2-VL-7B, which cause model collapse. Therefore, we rerun this setting multiple times until we obtain a normal loss curve, and use that checkpoint for evaluation." + }, + { + "type": "text", + "bbox": [ + 0.211, + 0.816, + 0.784, + 0.884 + ], + "angle": 0, + "content": "You are VL-Thinking, a helpful assistant with excellent reasoning ability. A user asks you a question, and you should try to solve it. You should first think about the reasoning process in the mind and then provide the user with the answer. The reasoning process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., <think> reasoning process here </think> <answer> answer here </answer>." + }, + { + "type": "image_caption", + "bbox": [ + 0.282, + 0.896, + 0.714, + 0.913 + ], + "angle": 0, + "content": "Figure 12: System Prompt used for training and evaluation." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "18" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." 
+ }, + { + "type": "title", + "bbox": [ + 0.172, + 0.102, + 0.321, + 0.118 + ], + "angle": 0, + "content": "B.2 Evaluation" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.134, + 0.825, + 0.205 + ], + "angle": 0, + "content": "We adopt VLMEvalKit (Duan et al., 2024) for all evaluation experiments. We set use_custom_prompt to False, following the settings of most models in the toolkit. For higher efficiency, we set max_pixels to \\(256\\times32\\times32\\) and max_new_tokens to 800. We also set the system prompt to the one used for training, for consistent train-test behavior. All other hyperparameters follow the toolkit defaults." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.21, + 0.556, + 0.227 + ], + "angle": 0, + "content": "We specify the dataset splits and the metrics reported:" + }, + { + "type": "text", + "bbox": [ + 0.21, + 0.238, + 0.732, + 0.253 + ], + "angle": 0, + "content": "1. MathVista: The Test Mini split of the MathVista dataset; overall accuracy." + }, + { + "type": "text", + "bbox": [ + 0.209, + 0.258, + 0.677, + 0.273 + ], + "angle": 0, + "content": "2. MathVision: The Full test set of MathVision; overall accuracy." + }, + { + "type": "text", + "bbox": [ + 0.209, + 0.276, + 0.757, + 0.291 + ], + "angle": 0, + "content": "3. MathVerse: The Test Mini split of MathVerse; accuracy of \"Vision Only\"." + }, + { + "type": "text", + "bbox": [ + 0.209, + 0.295, + 0.663, + 0.31 + ], + "angle": 0, + "content": "4. DynaMath: The Full test set of DynaMath; overall accuracy." + }, + { + "type": "text", + "bbox": [ + 0.209, + 0.314, + 0.641, + 0.329 + ], + "angle": 0, + "content": "5. WeMath: The Test Mini split of WeMath; \"Score (Strict)\"." + }, + { + "type": "text", + "bbox": [ + 0.209, + 0.333, + 0.659, + 0.348 + ], + "angle": 0, + "content": "6. LogicVista: The Full test set of LogicVista; overall accuracy." 
+ }, + { + "type": "list", + "bbox": [ + 0.209, + 0.238, + 0.757, + 0.348 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.377, + 0.539, + 0.399 + ], + "angle": 0, + "content": "C Details of GRPO Experiments" + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.418, + 0.304, + 0.436 + ], + "angle": 0, + "content": "C.1 Training" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.45, + 0.825, + 0.508 + ], + "angle": 0, + "content": "We adapt our code from the OpenRLHF framework (Hu et al., 2024). To suit our need to deploy a reward model on the same machine, we offload the reward model to CPU and only move it to GPU when performing rollouts and scoring. This design saves valuable GPU memory, which accelerates the training process." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.513, + 0.827, + 0.598 + ], + "angle": 0, + "content": "We also perform dataset-specific inspection and find issues in several datasets. For example, although ArxivQA contains only MCQs, the answer formats include \"A\", \"A)\", \"(a)\", etc. In the synthesis subset of Math-PUMA, we find that some solutions contain only the values of the solved unknown variables when the questions ask for the entire function expression. We fix these issues by rule-based filtering and GPT-assisted rewriting, aiming to improve the quality of the VL-Thinking dataset." + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.625, + 0.322, + 0.641 + ], + "angle": 0, + "content": "C.2 Evaluation" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.657, + 0.725, + 0.673 + ], + "angle": 0, + "content": "We evaluate our models with the identical settings described in Appendix B.2." 
+ }, + { + "type": "title", + "bbox": [ + 0.171, + 0.7, + 0.327, + 0.718 + ], + "angle": 0, + "content": "C.3 Case Study" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.731, + 0.825, + 0.762 + ], + "angle": 0, + "content": "We present a case demonstrating the improvement of VLAA-Thinker-Qwen2.5VL-7B over its backbone in Figure 13." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.508, + 0.96 + ], + "angle": 0, + "content": "19" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "image_footnote", + "bbox": [ + 0.251, + 0.202, + 0.388, + 0.219 + ], + "angle": 0, + "content": "As shown in the figure, the angle \\(O\\) to circle \\(A\\) at the center of triangle \\(B\\), \\(AB\\), and \\(AC\\). \nChoices: A: \\(36^{\\circ}\\); B: \\(54^{\\circ}\\); C: \\(60^{\\circ}\\); D: \\(27^{\\circ}\\)." + }, + { + "type": "image", + "bbox": [ + 0.271, + 0.224, + 0.379, + 0.293 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.554, + 0.218, + 0.662, + 0.231 + ], + "angle": 0, + "content": "Question in Image" + }, + { + "type": "text", + "bbox": [ + 0.427, + 0.233, + 0.78, + 0.256 + ], + "angle": 0, + "content": "As shown in the figure, if angle \\( A = 36.0 \\), then angle \\( C \\) is equal to () \nChoices: A: \\( 36^{\\circ} \\) B: \\( 54^{\\circ} \\) C: \\( 60^{\\circ} \\) D: \\( 27^{\\circ} \\)" + }, + { + "type": "title", + "bbox": [ + 0.455, + 0.295, + 0.543, + 0.306 + ], + "angle": 0, + "content": "Qwen2.5VL-7B" + }, + { + "type": "text", + "bbox": [ + 0.218, + 0.306, + 0.586, + 0.317 + ], + "angle": 0, + "content": "To solve the problem, we need to use some properties of circles and tangents." + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.327, + 0.382, + 0.337 + ], + "angle": 0, + "content": "1. 
Identify the given information:" + }, + { + "type": "text", + "bbox": [ + 0.225, + 0.338, + 0.395, + 0.347 + ], + "angle": 0, + "content": "- AB is tangent to circle O at point B." + }, + { + "type": "text", + "bbox": [ + 0.225, + 0.348, + 0.348, + 0.357 + ], + "angle": 0, + "content": "- The angle \\(\\angle AOB = 36^{\\circ}\\)" + }, + { + "type": "list", + "bbox": [ + 0.225, + 0.338, + 0.395, + 0.357 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.358, + 0.368, + 0.368 + ], + "angle": 0, + "content": "2. Understand the properties:" + }, + { + "type": "text", + "bbox": [ + 0.225, + 0.368, + 0.773, + 0.378 + ], + "angle": 0, + "content": "- Since \\(AB\\) is tangent to the circle at \\(B\\), the radius \\(OB\\) is perpendicular to the tangent \\(AB\\). Therefore, \\(\\angle OBA = 90\\)." + }, + { + "type": "text", + "bbox": [ + 0.225, + 0.379, + 0.568, + 0.388 + ], + "angle": 0, + "content": "-In triangle \\(OAB\\) , the sum of the angles is \\(180^{\\circ}\\) . So, we can find \\(\\angle AOB\\) .." + }, + { + "type": "list", + "bbox": [ + 0.225, + 0.368, + 0.773, + 0.388 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.416, + 0.389, + 0.582, + 0.398 + ], + "angle": 0, + "content": "\\(\\angle AOB + \\angle OBA + \\angle OAB = 180^{\\circ}\\)" + }, + { + "type": "text", + "bbox": [ + 0.433, + 0.4, + 0.564, + 0.407 + ], + "angle": 0, + "content": "\\(36^{\\circ} + 90^{\\circ} + \\angle OAB = 180^{\\circ}\\)" + }, + { + "type": "text", + "bbox": [ + 0.465, + 0.409, + 0.533, + 0.417 + ], + "angle": 0, + "content": "\\(\\angle OAB = 54^{\\circ}\\)" + }, + { + "type": "title", + "bbox": [ + 0.218, + 0.419, + 0.382, + 0.428 + ], + "angle": 0, + "content": "3. 
Determine the measure of \\(\\angle C\\)" + }, + { + "type": "text", + "bbox": [ + 0.218, + 0.429, + 0.743, + 0.44 + ], + "angle": 0, + "content": "- Since \\(\\angle OAB\\) is an inscribed angle that intercepts the arc BC, the measure of the central angle \\(\\angle BOC\\) that" + }, + { + "type": "text", + "bbox": [ + 0.218, + 0.44, + 0.599, + 0.449 + ], + "angle": 0, + "content": "intercepts the same arc BC is twice the measure of the inscribed angle \\(\\angle OAB\\) :" + }, + { + "type": "equation", + "bbox": [ + 0.396, + 0.45, + 0.602, + 0.458 + ], + "angle": 0, + "content": "\\[\n\\angle B O C = 2 \\times \\angle O A B = 2 \\times 5 4 ^ {\\circ} = 1 0 8 ^ {\\circ}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.218, + 0.46, + 0.573, + 0.469 + ], + "angle": 0, + "content": "- Now, in triangle OBC, the sum of the angles is \\(180^{\\circ}\\). So, we can find \\(\\angle C\\):" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.47, + 0.589, + 0.479 + ], + "angle": 0, + "content": "\\(\\angle OBC + \\angle OCB + \\angle BOC = 180^{\\circ}\\)" + }, + { + "type": "text", + "bbox": [ + 0.43, + 0.481, + 0.566, + 0.488 + ], + "angle": 0, + "content": "\\(90^{\\circ} + \\angle OCB + 108^{\\circ} = 180^{\\circ}\\)" + }, + { + "type": "text", + "bbox": [ + 0.43, + 0.49, + 0.567, + 0.498 + ], + "angle": 0, + "content": "\\(\\angle OCB = 180^{\\circ} - 198^{\\circ} = 18^{\\circ}\\)" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.5, + 0.709, + 0.509 + ], + "angle": 0, + "content": "However, since \\(\\angle OCB\\) is not directly given as an option, we need to re-evaluate the problem with the co" + }, + { + "type": "text", + "bbox": [ + 0.218, + 0.51, + 0.777, + 0.53 + ], + "angle": 0, + "content": "interpretation of the angle \\(\\angle C\\) which is actually the angle subtended by the arc BC at the circumference, which is half the central angle:" + }, + { + "type": "equation", + "bbox": [ + 0.413, + 0.53, + 0.595, + 0.546 + ], + "angle": 0, + "content": 
"\\[\n\\angle C = \\frac {1}{2} \\times \\angle B O C = \\frac {1}{2} \\times 1 0 8 ^ {\\circ} = 5 4 ^ {\\circ}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.218, + 0.544, + 0.399, + 0.554 + ], + "angle": 0, + "content": "Therefore, the measure of \\(\\angle C\\) is \\(54^{\\circ}\\)." + }, + { + "type": "text", + "bbox": [ + 0.218, + 0.555, + 0.298, + 0.564 + ], + "angle": 0, + "content": "The answer is: B" + }, + { + "type": "title", + "bbox": [ + 0.413, + 0.58, + 0.584, + 0.591 + ], + "angle": 0, + "content": "VLAA-Thinker-Qwen2.5VL-7B" + }, + { + "type": "title", + "bbox": [ + 0.228, + 0.599, + 0.269, + 0.606 + ], + "angle": 0, + "content": "" + }, + { + "type": "text", + "bbox": [ + 0.227, + 0.608, + 0.769, + 0.629 + ], + "angle": 0, + "content": "To solve this problem, we need to use the properties of tangents and circles. Since AB is tangent to circle O at point B, the radius OB is perpendicular to the tangent AB at point B. This means that angle OBA is \\(90^{\\circ}\\)." + }, + { + "type": "text", + "bbox": [ + 0.227, + 0.638, + 0.769, + 0.658 + ], + "angle": 0, + "content": "that angle A is \\(36^{\\circ}\\), we can find angle AOB by subtracting angle A from \\(180^{\\circ}\\) (since the sum of angles in triangle AOB must be \\(180^{\\circ}\\)):" + }, + { + "type": "equation", + "bbox": [ + 0.433, + 0.659, + 0.573, + 0.668 + ], + "angle": 0, + "content": "\\[\n\\angle A O B = 1 8 0 ^ {\\circ} - 3 6 ^ {\\circ} - 9 0 ^ {\\circ} = 5 4 ^ {\\circ}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.227, + 0.669, + 0.751, + 0.689 + ], + "angle": 0, + "content": "Since angle AOB is a central angle subtending arc BC, and angle C is an angle subtending the same arc BC, the measure of angle C is half the measure of angle AOB:" + }, + { + "type": "equation", + "bbox": [ + 0.456, + 0.69, + 0.549, + 0.705 + ], + "angle": 0, + "content": "\\[\n\\angle C = \\frac {1}{2} \\times 5 4 ^ {\\circ} = 2 7 ^ {\\circ}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.228, + 
0.704, + 0.413, + 0.713 + ], + "angle": 0, + "content": "Therefore, the correct answer is D: \\(27^{\\circ}\\)" + }, + { + "type": "title", + "bbox": [ + 0.228, + 0.715, + 0.272, + 0.723 + ], + "angle": 0, + "content": "" + }, + { + "type": "title", + "bbox": [ + 0.228, + 0.725, + 0.279, + 0.733 + ], + "angle": 0, + "content": "" + }, + { + "type": "title", + "bbox": [ + 0.229, + 0.735, + 0.26, + 0.743 + ], + "angle": 0, + "content": "D:27°" + }, + { + "type": "title", + "bbox": [ + 0.229, + 0.745, + 0.283, + 0.754 + ], + "angle": 0, + "content": "" + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.799, + 0.825, + 0.829 + ], + "angle": 0, + "content": "Figure 13: A case from MathVerse testmini (sample index 20). Markdowns are rendered for illustration purpose. Wrong reasoning paths are colored red." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "20" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "image_caption", + "bbox": [ + 0.478, + 0.104, + 0.514, + 0.114 + ], + "angle": 0, + "content": "GeoQA" + }, + { + "type": "image", + "bbox": [ + 0.389, + 0.116, + 0.616, + 0.161 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.386, + 0.163, + 0.603, + 0.208 + ], + "angle": 0, + "content": "Question: Given a semicircular figure with the diameter represented by AB and intersecting chords AD and BC at point P, where angle DPB measures 60.0 degrees, calculate the value of tan(angle ADC) under the condition that D denotes the midpoint of the arc BC. Choices: A \\(\\frac{1}{2}\\), B \\(\\frac{1}{3}\\), C \\(\\frac{\\sqrt{3}}{3}\\), D \\(\\sqrt{3}\\). 
\n\\(\\bullet\\) Original Answer: C" + }, + { + "type": "title", + "bbox": [ + 0.389, + 0.212, + 0.436, + 0.221 + ], + "angle": 0, + "content": "VL-Thinking" + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.228, + 0.603, + 0.25 + ], + "angle": 0, + "content": "Okay. So I'm trying to solve this geometry problem. Let me see if I can visualize it correctly. There's a semicircle with diameter AB. Points C and D are on the curved edge, and chords AC and BD intersect at point P. Below AB, there's the center O. The angle DPB is 60 degrees, and D is the midpoint of arc BC. I need to find the value of \\(\\tan(\\angle ADC)\\)." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.254, + 0.606, + 0.28 + ], + "angle": 0, + "content": "First, I should probably draw this out to get a better sense. I imagine a semicircle with \\( AB \\) as the diameter. \\( O \\) is the center, so \\( AO = OB \\). Points C and D are on the semicircle, with \\( D \\) being the midpoint of arc BC. That means arc \\( BD \\) is equal to arc DC. Since it's a semicircle, the total arc from \\( A \\) to \\( B \\) is 180 degrees, and arcs BD and DC each take half of arc BC. So if BC is some arc, \\( D \\) is halfway along it. But I'm not sure about the exact measure yet." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.284, + 0.605, + 0.3 + ], + "angle": 0, + "content": "Chords \\( \\mathrm{AC} \\) and \\( \\mathrm{DB} \\) intersect at \\( P \\), and angle DPB is 60 degrees. I remember that when two chords intersect, the products of the segments are equal. So \\( AP \\cdot PC = BP \\cdot PD \\). But I'm not sure if that's directly useful." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.304, + 0.605, + 0.325 + ], + "angle": 0, + "content": "Since \\( D \\) is the midpoint of arc BC, it might have some special properties. 
In a semicircle, the midpoint of an arc would be equidistant from B and C. Also, angles subtended by the same arc are equal. Maybe I can use some properties of cyclic quadrilaterals here, but since it's a semicircle, all points lie on the circumference, so ABCD is a cyclic quadrilateral." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.329, + 0.601, + 0.344 + ], + "angle": 0, + "content": "Wait, but ABCD is a quadrilateral inscribed in a semicircle, which makes it a cyclic quadrilateral. In such cases, opposite angles sum to 180 degrees. But I'm not sure if that's applicable here because we're dealing with a circle." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.348, + 0.603, + 0.369 + ], + "angle": 0, + "content": "Let me think about the angles. Angle DPB is 60 degrees. Since \\( \\mathrm{P} \\) is the intersection of chords AC and BD, maybe I can use the property that the angle between two chords is equal to half the sum of the measures of the intercepted arcs. So angle DPB would be half the sum of arcs DB and AC. But I'm not sure about the exact arcs involved." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.373, + 0.606, + 0.39 + ], + "angle": 0, + "content": "Alternatively, maybe I can use triangle properties. Since \\( D \\) is the midpoint of arc \\( BC \\), maybe triangle \\( DBC \\) is inscribed in the semicircle, and an angle inscribed in a semicircle is a right angle. So angle \\( BDC \\) would be \\( 90^{\\circ} \\)." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.393, + 0.598, + 0.404 + ], + "angle": 0, + "content": "Wait, that's an important point. Since D is on the semicircle, triangle BDC is right-angled at D. So angle BDC is 90 degrees. That might help in finding angle ADC." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.409, + 0.606, + 0.42 + ], + "angle": 0, + "content": "I need to find \\(\\tan(\\angle ADC)\\). Let's denote angle ADC as \\(\\theta\\). 
So \\(\\tan (\\theta) =\\) opposite/adjacent. To find this, I might need to find the lengths of the sides of triangle ADC or use some trigonometric identities." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.424, + 0.606, + 0.445 + ], + "angle": 0, + "content": "Let me consider the coordinates approach. Let's place the semicircle on a coordinate system with center \\(O\\) at \\((0,0)\\), and \\(AB\\) as the diameter along the \\(x\\)-axis from \\((-r,0)\\) to \\((r,0)\\), where \\(r\\) is the radius. Then point \\(D\\), being the midpoint of arc BC, would be somewhere on the semicircle. But without knowing the exact position of \\(C\\), this might be tricky." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.449, + 0.605, + 0.464 + ], + "angle": 0, + "content": "Alternatively, maybe I can use power of a point. Since \\( P \\) is the intersection of chords \\( \\mathrm{AC} \\) and \\( \\mathrm{BD} \\), the power of point \\( P \\) with respect to the circle is equal for both chords. So \\( PA \\cdot PC = PB \\cdot PD \\). But without specific lengths, that alone might not help directly." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.469, + 0.606, + 0.492 + ], + "angle": 0, + "content": "Wait, maybe I can assign some variables. Let's let the radius of the semicircle be \\( \\mathrm{r} \\). Then \\( \\mathrm{AB} = 2r \\). Let me assume \\( r = 1 \\) for simplicity, so \\( \\mathrm{AB} = 2 \\). Then \\( \\mathrm{O} \\) is at \\( (0,0) \\), \\( \\mathrm{A} \\) is at \\( (-1,0) \\), \\( \\mathrm{B} \\) is at \\( (1,0) \\). Let me place point C somewhere on the semicircle, say at an angle \\( \\theta \\) from the \\( x \\)-axis. Then point D, being the midpoint of arc BC, would be at an angle \\( \\frac{\\theta}{2} + 180^{\\circ} \\) from the \\( x \\)-axis, but I'm not sure."
+ }, + { + "type": "text", + "bbox": [ + 0.389, + 0.499, + 0.607, + 0.514 + ], + "angle": 0, + "content": "Alternatively, since \\(D\\) is the midpoint of arc BC, arc BD equals arc DC. So if I consider the central angles, angle BOD equals angle DOC. Since the total arc BC is some measure, say \\(2x\\), then arcs BD and DC are each \\(x\\)." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.519, + 0.607, + 0.538 + ], + "angle": 0, + "content": "Wait, maybe I can use the fact that angle DPB is 60 degrees. Using the intersecting chords theorem, angle DPB is equal to half the sum of the measures of arcs DB and AC. So angle DPB = \\(\\frac{1}{2} (\\text{arc } DB + \\text{arc } AC) = 60\\) degrees. Therefore, \\(\\text{arc } DB + \\text{arc } AC = 120^{\\circ}\\)." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.544, + 0.601, + 0.559 + ], + "angle": 0, + "content": "But I'm not sure about the measure of arc AC. Alternatively, maybe I can express arc AC in terms of other arcs. Since the total semicircle is 180 degrees, arc AB is 180 degrees, and arc AC is a part of that." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.564, + 0.607, + 0.58 + ], + "angle": 0, + "content": "Alternatively, since D is the midpoint of arc BC, arc \\(\\mathrm{BD}\\) and arc \\(\\mathrm{DC}\\) are equal. Let's denote arc \\(BD =\\) arc \\(DC = x\\). Then arc \\(BC = 2x\\). Since the total semicircle is 180 degrees, arc AB = 180 degrees, so arc AC = 180 - 2x degrees." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.584, + 0.59, + 0.608 + ], + "angle": 0, + "content": "Now, from the intersecting chords theorem, angle \\(\\mathrm{DPB} = \\frac{1}{2} (\\text{arc } DB + \\text{arc } AC) = 60\\) degrees. So \\(\\frac{1}{2} (x + (180 - 2x)) = 60\\). Simplifying, \\(\\frac{1}{2} (180 - x) = 60\\). Multiply both sides by 2: \\(180 - x = 120\\). Therefore, \\(x = 60\\) degrees."
+ }, + { + "type": "text", + "bbox": [ + 0.389, + 0.61, + 0.607, + 0.631 + ], + "angle": 0, + "content": "So arc BD = arc DC = 60 degrees, and arc BC = 120 degrees. Then arc AC = 180 - 120 = 60 degrees. Now, I need to find tangent angle ADC. Let's consider triangle ADC. Since D is on the semicircle, angle ADC is an inscribed angle. The measure of angle ADC is half the measure of arc AC. Since arc AC is 60 degrees, angle ADC = 30 degrees." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.635, + 0.606, + 0.669 + ], + "angle": 0, + "content": "Therefore, \\(\\tan (\\text{angle ADC}) = \\tan (30\\text{ degrees}) = \\frac{1}{\\sqrt{3}} = \\frac{\\sqrt{3}}{3}\\). \nWait, but let me double-check. If angle ADC is 30 degrees, then \\(\\tan (30)\\) is indeed \\(\\frac{\\sqrt{3}}{3}\\). So the answer should be option C: \\(\\frac{\\sqrt{3}}{3}\\)." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.675, + 0.466, + 0.685 + ], + "angle": 0, + "content": "The value of tan(angle ADC) is \\(\\frac{\\sqrt{3}}{3}\\)" + }, + { + "type": "title", + "bbox": [ + 0.39, + 0.69, + 0.45, + 0.696 + ], + "angle": 0, + "content": "Step-by-Step Explanation:" + }, + { + "type": "title", + "bbox": [ + 0.389, + 0.7, + 0.459, + 0.706 + ], + "angle": 0, + "content": "1. Understanding the Problem:" + }, + { + "type": "text", + "bbox": [ + 0.39, + 0.71, + 0.551, + 0.73 + ], + "angle": 0, + "content": "- We have a semicircle with diameter AB and center O.\n- Points C and D are on the semicircle, with D being the midpoint of arc BC.\n- Chords AC and BD intersect at point P, forming an angle DPB of 60 degrees.\n- We need to find tan \\((\\angle ADC)\\)." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.735, + 0.603, + 0.75 + ], + "angle": 0, + "content": "2. Key Properties and Theorems: - Intersecting Chords Theorem: The measure of an angle formed by two intersecting chords is half the sum of the measures of the intercepted arcs."
+ }, + { + "type": "text", + "bbox": [ + 0.389, + 0.751, + 0.593, + 0.76 + ], + "angle": 0, + "content": "- Midpoint of Arc: Since D is the midpoint of arc BC, arcs BD and DC are equal. \n- Inscribed Angle Theorem: An angle inscribed in a circle is half the measure of its intercepted arc." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.765, + 0.52, + 0.804 + ], + "angle": 0, + "content": "3. Applying the Intersecting Chords Theorem: \n- Angle DPB is formed by chords BD and AC, so \\(\\angle DPB = \\frac{1}{2} (\\text{arc } DB + \\text{arc } AC)\\) \n- Given \\(\\angle DPB = 60^{\\circ}\\), we have \n\\[ 60^{\\circ} = \\frac{1}{2} (\\text{arc } DB + \\text{arc } AC) \\] \n\\[ \\text{arc } DB + \\text{arc } AC = 120^{\\circ} \\]" + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.809, + 0.55, + 0.85 + ], + "angle": 0, + "content": "4. Determining Arc Measures: Since D is the midpoint of arc BC, arcs BD and DC are equal. Let each be x. Thus, arc BC = 2x. The total semicircle is \\(180^{\\circ}\\), so: arc AC = 180° - arc BC = 180° - 2x. Substituting into the earlier equation: x + (180° - 2x) = 120°, so x = 60°. Therefore, arcs BD and DC are each 60° and arc AC is 60°." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.856, + 0.506, + 0.88 + ], + "angle": 0, + "content": "5. Finding Angle ADC: \n- Angle ADC is an inscribed angle intercepting arc AC. \n- By the Inscribed Angle Theorem: \n\\(\\angle ADC = \\frac{1}{2} \\times \\text{arc AC} = \\frac{1}{2} \\times 60^{\\circ} = 30^{\\circ}\\)" + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.885, + 0.449, + 0.914 + ], + "angle": 0, + "content": "6. Calculating the Tangent: \\(\\tan (30^{\\circ}) = \\frac{\\sqrt{3}}{3}\\) Answer: \\(\\frac{\\sqrt{3}}{3}\\) (Option C)" + }, + { + "type": "image_caption", + "bbox": [ + 0.389, + 0.932, + 0.691, + 0.947 + ], + "angle": 0, + "content": "A VL-Thinking sample from GeoQA170K."
+ }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.508, + 0.96 + ], + "angle": 0, + "content": "21" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." + }, + { + "type": "title", + "bbox": [ + 0.457, + 0.133, + 0.534, + 0.148 + ], + "angle": 0, + "content": "Synthesis" + }, + { + "type": "title", + "bbox": [ + 0.313, + 0.185, + 0.376, + 0.197 + ], + "angle": 0, + "content": "Input Image" + }, + { + "type": "image", + "bbox": [ + 0.434, + 0.155, + 0.55, + 0.224 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.306, + 0.234, + 0.477, + 0.244 + ], + "angle": 0, + "content": "- Question: What is the function's equation?" + }, + { + "type": "text", + "bbox": [ + 0.306, + 0.244, + 0.701, + 0.296 + ], + "angle": 0, + "content": "- Original Answer: Step 1: Substitute the point \\((\\frac{\\pi}{10},5)\\) into the function: \\(5 = a\\cdot \\sin (-5\\cdot \\frac{\\pi}{10})\\) . Step 2: Simplify the equation: \\(5 = a\\cdot \\sin (\\frac{-\\pi}{2})\\) . Step 3: Since \\(\\sin (\\frac{-\\pi}{2}) = -1\\) , we have: \\(5 = -a\\) . Step 4: Solve for \\(a:a = -5\\) Step 5: Substitute the value of a back into the original function: \\(y = -5\\cdot \\sin (-5\\cdot x)\\) . Answer: \\(y = -5\\cdot \\sin (-5\\cdot x)\\)" + }, + { + "type": "title", + "bbox": [ + 0.315, + 0.305, + 0.392, + 0.316 + ], + "angle": 0, + "content": "VL-Thinking" + }, + { + "type": "text", + "bbox": [ + 0.31, + 0.322, + 0.652, + 0.35 + ], + "angle": 0, + "content": " First, I recognize that the function given is \\(y = a\\cdot \\sin (-5x)\\) , and it passes through the point \\(\\left(\\frac{\\pi}{10},5\\right)\\)" + }, + { + "type": "text", + "bbox": [ + 0.311, + 0.351, + 0.586, + 0.36 + ], + "angle": 0, + "content": "To find the value of \\( a \\), I'll substitute the coordinates of the point into the equation." 
+ }, + { + "type": "text", + "bbox": [ + 0.311, + 0.367, + 0.497, + 0.395 + ], + "angle": 0, + "content": "Substituting \\( x = \\frac{\\pi}{10} \\) and \\( y = 5 \\), the equation becomes: \\( 5 = a \\cdot \\sin \\left(-5 \\cdot \\frac{\\pi}{10}\\right) \\)" + }, + { + "type": "text", + "bbox": [ + 0.311, + 0.404, + 0.468, + 0.413 + ], + "angle": 0, + "content": "Simplifying the argument of the sine function:" + }, + { + "type": "equation", + "bbox": [ + 0.311, + 0.413, + 0.367, + 0.428 + ], + "angle": 0, + "content": "\\[\n- 5 \\cdot \\frac {\\pi}{1 0} = - \\frac {\\pi}{2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.311, + 0.437, + 0.394, + 0.445 + ], + "angle": 0, + "content": "So, the equation now is:" + }, + { + "type": "equation", + "bbox": [ + 0.311, + 0.447, + 0.375, + 0.462 + ], + "angle": 0, + "content": "\\[\n5 = a \\cdot \\sin \\left(- \\frac {\\pi}{2}\\right)\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.311, + 0.471, + 0.507, + 0.49 + ], + "angle": 0, + "content": "I know that \\(\\sin \\left(-\\frac{\\pi}{2}\\right) = -1\\) so substituting this value in: \\(5 = a\\cdot (-1)\\)" + }, + { + "type": "text", + "bbox": [ + 0.311, + 0.499, + 0.359, + 0.507 + ], + "angle": 0, + "content": "Solving for \\(a\\):" + }, + { + "type": "equation", + "bbox": [ + 0.311, + 0.508, + 0.356, + 0.516 + ], + "angle": 0, + "content": "\\[\na = - 5\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.311, + 0.526, + 0.534, + 0.535 + ], + "angle": 0, + "content": "Finally, substituting \\( a = -5 \\) back into the original function gives:" + }, + { + "type": "equation", + "bbox": [ + 0.311, + 0.535, + 0.377, + 0.542 + ], + "angle": 0, + "content": "\\[\ny = - 5 \\cdot \\sin (- 5 x)\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.311, + 0.544, + 0.344, + 0.551 + ], + "angle": 0, + "content": "</think>" + }, + { + "type": "text", + "bbox": [ + 0.311, + 0.559, + 0.681, + 0.577 + ], + "angle": 0, + "content": "To determine the equation of the
function \\( y = a \\cdot \\sin(-5x) \\) that passes through the point \\( \\left( \\frac{\\pi}{10}, 5 \\right) \\), follow these steps:" + }, + { + "type": "text", + "bbox": [ + 0.312, + 0.586, + 0.458, + 0.594 + ], + "angle": 0, + "content": "1. Substitute the Point into the Equation:" + }, + { + "type": "text", + "bbox": [ + 0.318, + 0.594, + 0.46, + 0.604 + ], + "angle": 0, + "content": "Plug \\( x = \\frac{\\pi}{10} \\) and \\( y = 5 \\) into the equation:" + }, + { + "type": "equation", + "bbox": [ + 0.318, + 0.605, + 0.396, + 0.62 + ], + "angle": 0, + "content": "\\[\n5 = a \\cdot \\sin \\left(- 5 \\cdot \\frac {\\pi}{1 0}\\right)\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.312, + 0.631, + 0.481, + 0.639 + ], + "angle": 0, + "content": "2. Simplify the Argument of the Sine Function:" + }, + { + "type": "equation", + "bbox": [ + 0.318, + 0.64, + 0.373, + 0.653 + ], + "angle": 0, + "content": "\\[\n- 5 \\cdot \\frac {\\pi}{1 0} = - \\frac {\\pi}{2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.318, + 0.654, + 0.408, + 0.662 + ], + "angle": 0, + "content": "So, the equation becomes:" + }, + { + "type": "equation", + "bbox": [ + 0.318, + 0.663, + 0.382, + 0.68 + ], + "angle": 0, + "content": "\\[\n5 = a \\cdot \\sin \\left(- \\frac {\\pi}{2}\\right)\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.312, + 0.69, + 0.421, + 0.698 + ], + "angle": 0, + "content": "3. Evaluate the Sine Function:" + }, + { + "type": "equation", + "bbox": [ + 0.318, + 0.7, + 0.379, + 0.716 + ], + "angle": 0, + "content": "\\[\n\\sin \\left(- \\frac {\\pi}{2}\\right) = - 1\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.318, + 0.718, + 0.452, + 0.726 + ], + "angle": 0, + "content": "Substituting this back into the equation:" + }, + { + "type": "equation", + "bbox": [ + 0.318, + 0.727, + 0.36, + 0.735 + ], + "angle": 0, + "content": "\\[\n5 = a \\cdot (- 1)\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.312, + 0.744, + 0.362, + 0.752 + ], + "angle": 0, + "content": "4. 
Solve for \\( a \\):" + }, + { + "type": "equation", + "bbox": [ + 0.318, + 0.754, + 0.348, + 0.76 + ], + "angle": 0, + "content": "\\[\na = - 5\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.312, + 0.77, + 0.414, + 0.778 + ], + "angle": 0, + "content": "5. Write the Final Equation:" + }, + { + "type": "text", + "bbox": [ + 0.318, + 0.779, + 0.489, + 0.787 + ], + "angle": 0, + "content": "Substitute \\(a = -5\\) back into the original function:" + }, + { + "type": "equation", + "bbox": [ + 0.318, + 0.788, + 0.383, + 0.796 + ], + "angle": 0, + "content": "\\[\ny = - 5 \\cdot \\sin (- 5 x)\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.312, + 0.805, + 0.459, + 0.813 + ], + "angle": 0, + "content": "6. Simplify the Negative Sign (Optional):" + }, + { + "type": "text", + "bbox": [ + 0.318, + 0.814, + 0.528, + 0.822 + ], + "angle": 0, + "content": "Since \\(\\sin (-\\theta) = -\\sin (\\theta)\\), the equation can also be written as:" + }, + { + "type": "equation", + "bbox": [ + 0.318, + 0.823, + 0.37, + 0.83 + ], + "angle": 0, + "content": "\\[\ny = 5 \\cdot \\sin (5 x)\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.312, + 0.84, + 0.364, + 0.847 + ], + "angle": 0, + "content": "Final Answer:" + }, + { + "type": "equation", + "bbox": [ + 0.312, + 0.848, + 0.373, + 0.857 + ], + "angle": 0, + "content": "\\[\ny = - 5 \\sin (- 5 x)\n\\]" + }, + { + "type": "image_caption", + "bbox": [ + 0.238, + 0.877, + 0.757, + 0.895 + ], + "angle": 0, + "content": "Figure 15: A VL-Thinking sample from Math PUMA (subset Synthesis)." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "22" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." 
+ }, + { + "type": "title", + "bbox": [ + 0.426, + 0.178, + 0.601, + 0.198 + ], + "angle": 0, + "content": "CLEVR-MATH" + }, + { + "type": "title", + "bbox": [ + 0.203, + 0.268, + 0.308, + 0.285 + ], + "angle": 0, + "content": "Input Image" + }, + { + "type": "image", + "bbox": [ + 0.408, + 0.214, + 0.64, + 0.335 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.189, + 0.352, + 0.566, + 0.369 + ], + "angle": 0, + "content": "- Question: Subtract all balls. How many objects are left?" + }, + { + "type": "text", + "bbox": [ + 0.189, + 0.38, + 0.336, + 0.395 + ], + "angle": 0, + "content": "- Original Answer: 3" + }, + { + "type": "list", + "bbox": [ + 0.189, + 0.352, + 0.566, + 0.395 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.204, + 0.412, + 0.335, + 0.432 + ], + "angle": 0, + "content": "VL-Thinking" + }, + { + "type": "image", + "bbox": [ + 0.498, + 0.436, + 0.529, + 0.459 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.206, + 0.46, + 0.817, + 0.518 + ], + "angle": 0, + "content": " Okay, let's see. The user is asking to subtract all balls and determine how many objects are left. First, I need to figure out which objects are considered \"balls\" in the image. The image shows a large blue metal sphere. A sphere is a three-dimensional circle, which is essentially a ball shape. So the blue metal sphere is definitely a ball." + }, + { + "type": "text", + "bbox": [ + 0.206, + 0.533, + 0.822, + 0.59 + ], + "angle": 0, + "content": "Next, the surrounding objects are two small red metal cylinders and a single small brown metal cylinder in the foreground. Cylinders are not balls; they have a different shape—circular bases and straight sides. So those would not be subtracted. The question says to subtract all balls, so only the sphere (ball) needs to be removed." 
+ }, + { + "type": "text", + "bbox": [ + 0.206, + 0.606, + 0.811, + 0.65 + ], + "angle": 0, + "content": "Originally, there's the sphere (1 object), two red cylinders (2 objects), and one brown cylinder (1 object). Total objects before subtraction: \\( 1 + 2 + 1 = 4 \\). Subtracting the one ball leaves 3 objects: the two red cylinders and the brown cylinder." + }, + { + "type": "text", + "bbox": [ + 0.206, + 0.665, + 0.81, + 0.707 + ], + "angle": 0, + "content": "Wait, but I should make sure there are no other balls mentioned. The image only refers to the sphere as a ball-like object. The cylinders are clearly not balls. So subtracting the sphere leaves three cylinders. The answer should be 3." + }, + { + "type": "text", + "bbox": [ + 0.208, + 0.71, + 0.263, + 0.722 + ], + "angle": 0, + "content": "" + }, + { + "type": "text", + "bbox": [ + 0.206, + 0.738, + 0.833, + 0.782 + ], + "angle": 0, + "content": "In the scene, there is one large blue sphere (a ball-shaped object) centrally positioned. The remaining objects consist of two small red cylinders flanking it symmetrically and a single brown cylinder in the foreground. Removing the sphere leaves only the three cylindrical objects visible." + }, + { + "type": "text", + "bbox": [ + 0.207, + 0.797, + 0.44, + 0.811 + ], + "angle": 0, + "content": "Answer: There are 3 objects remaining." + }, + { + "type": "text", + "bbox": [ + 0.302, + 0.836, + 0.694, + 0.853 + ], + "angle": 0, + "content": "Figure 16: A VL-Thinking sample from CLEVR-Math." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "23" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." 
+ }, + { + "type": "title", + "bbox": [ + 0.459, + 0.134, + 0.573, + 0.157 + ], + "angle": 0, + "content": "ArxivQA" + }, + { + "type": "title", + "bbox": [ + 0.202, + 0.225, + 0.308, + 0.244 + ], + "angle": 0, + "content": "Input Image" + }, + { + "type": "image", + "bbox": [ + 0.388, + 0.167, + 0.506, + 0.23 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.508, + 0.168, + 0.631, + 0.233 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.388, + 0.231, + 0.506, + 0.298 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.508, + 0.232, + 0.631, + 0.297 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.186, + 0.31, + 0.837, + 0.391 + ], + "angle": 0, + "content": "- Question: In figure (b), how does the repetition rate affect the ratio of 9th to 11th harmonic intensity for the \\(1030\\mathrm{nm}\\) wavelength? Choose from the following options A) The ratio increases as the repetition rate increases. B) The ratio decreases as the repetition rate increases. C) The ratio remains constant as the repetition rate increases. D) The ratio fluctuates unpredictably with changes in repetition rate." 
+ }, + { + "type": "text", + "bbox": [ + 0.188, + 0.399, + 0.339, + 0.414 + ], + "angle": 0, + "content": "- Original Answer: B" + }, + { + "type": "list", + "bbox": [ + 0.186, + 0.31, + 0.837, + 0.414 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.202, + 0.432, + 0.335, + 0.453 + ], + "angle": 0, + "content": "VL-Thinking" + }, + { + "type": "text", + "bbox": [ + 0.207, + 0.467, + 0.257, + 0.478 + ], + "angle": 0, + "content": "" + }, + { + "type": "image", + "bbox": [ + 0.5, + 0.456, + 0.531, + 0.481 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.205, + 0.481, + 0.841, + 0.51 + ], + "angle": 0, + "content": "Okay, so I'm looking at figure (b) here, and I need to figure out how the repetition rate affects the ratio of 9th to 11th harmonic intensity for the \\(1030\\mathrm{nm}\\) wavelength. Let me start by understanding what's in the image." + }, + { + "type": "text", + "bbox": [ + 0.204, + 0.525, + 0.838, + 0.584 + ], + "angle": 0, + "content": "First, the image is a scatter plot with the x-axis being the repetition rate in kHz and the y-axis being the ratio of 9th to 7th and 11th harmonic intensity on a logarithmic scale. There are two sets of data points: red dots and blue squares. The red dots are probably representing the \\(1030\\mathrm{nm}\\) wavelength since in panel (a), the red curve was labeled as \\(1030\\mathrm{nm}\\). The blue squares would then correspond to the \\(515\\mathrm{nm}\\) wavelength." + }, + { + "type": "text", + "bbox": [ + 0.204, + 0.599, + 0.834, + 0.643 + ], + "angle": 0, + "content": "The question specifically asks about the \\(1030\\mathrm{nm}\\) wavelength, so I'll focus on the red dots. Looking at the scatter plot, as the repetition rate increases from left to right on the x-axis, I need to see what happens to the y-axis values for the red dots." 
+ }, + { + "type": "text", + "bbox": [ + 0.204, + 0.659, + 0.844, + 0.718 + ], + "angle": 0, + "content": "From what I can see, as the repetition rate goes up, the red dots trend downward. That means the ratio of 9th to 11th harmonic intensity is decreasing. There's no indication of an increase or a constant ratio; instead, it's a clear downward trend. The red dots don't seem to fluctuate unpredictably either; they follow a decreasing pattern as repetition rate increases." + }, + { + "type": "text", + "bbox": [ + 0.205, + 0.733, + 0.816, + 0.762 + ], + "angle": 0, + "content": "So, putting it all together, the ratio decreases as the repetition rate increases for the \\(1030\\mathrm{nm}\\) wavelength. That would correspond to option B." + }, + { + "type": "text", + "bbox": [ + 0.207, + 0.763, + 0.262, + 0.776 + ], + "angle": 0, + "content": "" + }, + { + "type": "text", + "bbox": [ + 0.204, + 0.792, + 0.824, + 0.822 + ], + "angle": 0, + "content": "The image shows that the ratio of 9th to 11th harmonic intensity for the \\(1030\\mathrm{nm}\\) wavelength decreases as the repetition rate increases." + }, + { + "type": "text", + "bbox": [ + 0.205, + 0.837, + 0.599, + 0.851 + ], + "angle": 0, + "content": "Answer: B) The ratio decreases as the repetition rate increases." + }, + { + "type": "image_caption", + "bbox": [ + 0.318, + 0.877, + 0.679, + 0.895 + ], + "angle": 0, + "content": "Figure 17: A VL-Thinking sample from ArxivQA." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "24" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.347, + 0.049 + ], + "angle": 0, + "content": "Preprint. Under review." 
+ }, + { + "type": "title", + "bbox": [ + 0.399, + 0.133, + 0.58, + 0.153 + ], + "angle": 0, + "content": "ALLaVA- LAION" + }, + { + "type": "title", + "bbox": [ + 0.203, + 0.216, + 0.3, + 0.233 + ], + "angle": 0, + "content": "Input Image" + }, + { + "type": "image", + "bbox": [ + 0.373, + 0.165, + 0.603, + 0.281 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.19, + 0.293, + 0.744, + 0.322 + ], + "angle": 0, + "content": "- Question: Considering the proximity of Preston Store to other landmarks, what could be its significance in the local area?" + }, + { + "type": "text", + "bbox": [ + 0.191, + 0.328, + 0.813, + 0.422 + ], + "angle": 0, + "content": "- Original Answer: The Preston Store's significance in the local area, as suggested by the map, could be quite central given its prominent placement and the fact that it has a dedicated Facebook page marker. It appears to be situated near major roads and commercial entities like KFC, Harvey Norman, and Dan Murphy's, which indicates it is likely a well-known establishment, possibly a retail store or a social venue, that serves as a local landmark or gathering place. The specific type of store cannot be determined from the map alone, but its central location among various retailers and close to a park suggests it may cater to a wide range of customers and possibly holds a key place in the local community." + }, + { + "type": "list", + "bbox": [ + 0.19, + 0.293, + 0.813, + 0.422 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.204, + 0.442, + 0.325, + 0.461 + ], + "angle": 0, + "content": "VL-Thinking" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.474, + 0.261, + 0.485 + ], + "angle": 0, + "content": "" + }, + { + "type": "image", + "bbox": [ + 0.475, + 0.467, + 0.504, + 0.487 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.212, + 0.488, + 0.753, + 0.528 + ], + "angle": 0, + "content": "Okay, let's see. 
The user is asking about the significance of Preston Store in the local area based on its proximity to other landmarks. First, I need to visualize the map shown. The Preston Store is labeled prominently with a red marker and a Facebook 'f' logo, so it's probably a key location." + }, + { + "type": "text", + "bbox": [ + 0.212, + 0.542, + 0.777, + 0.596 + ], + "angle": 0, + "content": "Looking at the nearby landmarks, there's KFC, Harvey Norman, Dan Murphy's, and Max Kirwan Mazda. Those are all big-name retailers and services. Then there's Target, Myer Northland, and Anaconda Preston. These are major stores, which suggests the area is a commercial hub. The store being near these places means it's likely in a busy shopping district, making it a central spot for locals." + }, + { + "type": "text", + "bbox": [ + 0.212, + 0.609, + 0.77, + 0.703 + ], + "angle": 0, + "content": "Also, there's TW Blake Park and Darebin Creek Forest Park nearby. Green areas mean that Preston Store might be a place people visit before or after outdoor activities. Maybe it's a convenience store or a community spot where people gather. The mix of retail, dining, and parks indicates it's a multifunctional area, so Preston Store could serve both daily shopping needs and act as a meeting point. The Facebook marker suggests they have an online presence, maybe promoting local events or services, enhancing community engagement. Since it's near major roads like Bell St and Murray Rd, accessibility is good, increasing its importance as a central landmark." + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.705, + 0.264, + 0.715 + ], + "angle": 0, + "content": "" + }, + { + "type": "text", + "bbox": [ + 0.212, + 0.73, + 0.79, + 0.852 + ], + "angle": 0, + "content": "Preston Store appears centrally positioned within a bustling commercial and recreational hub, making it a likely focal point for local activity. 
Its prominent placement near major retailers like Target, Myer Northland, and Harvey Norman suggests it could serve as a practical anchor for shoppers—perhaps offering essentials or services that complement these larger stores. The proximity to fast food (KFC) and leisure destinations (Darebin Creek Forest Park, TW Blake Park) positions it as a potential pitstop for families or individuals transitioning between errands, dining, and outdoor activities. The red Facebook marker implies community engagement, possibly acting as an information node or local business with social media visibility. Its accessibility via intersecting roads like Bell St and Murray Rd further underscores its role as a convenient landmark in the area's daily flow." + }, + { + "type": "image_caption", + "bbox": [ + 0.29, + 0.877, + 0.707, + 0.895 + ], + "angle": 0, + "content": "Figure 18: A VL-Thinking sample from ALLaVA-LAION." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "25" + } + ] +] \ No newline at end of file diff --git a/data/2025/2504_11xxx/2504.11468/29f6f006-1646-44f2-b6fb-f930a57c3738_origin.pdf b/data/2025/2504_11xxx/2504.11468/29f6f006-1646-44f2-b6fb-f930a57c3738_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..afb7793b6abf7250f2f4c29d5a6561be7f39ede5 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/29f6f006-1646-44f2-b6fb-f930a57c3738_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ba74c9374cd57fc9eca3f2e9fc6a1162cdedbb3716a05e6b6d91a7f1a4b2454c +size 5821062 diff --git a/data/2025/2504_11xxx/2504.11468/full.md b/data/2025/2504_11xxx/2504.11468/full.md new file mode 100644 index 0000000000000000000000000000000000000000..76917af3d39d57f079f3b87567577bbecde3be2e --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/full.md @@ -0,0 +1,747 @@ +# SFT or RL? 
An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models + +Hardy Chen $^{2*}$ , Haoqin Tu $^{1*}$ , Fali Wang $^{3}$ , Hui Liu $^{4}$ , Xianfeng Tang $^{4}$ , Xinya Du $^{2}$ , Yuyin Zhou $^{1}$ , Cihang Xie $^{1}$ + +1 University of California, Santa Cruz 2 University of Texas at Dallas +3 The Pennsylvania State University 4 Amazon Research + +Project Page: https://ucsc-vlaa.github.io/VLAA-Thinking/ +7B Model: https://huggingface.co/UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-7B +3B Model: https://huggingface.co/UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B +Dataset: https://huggingface.co/datasets/UCSC-VLAA/VLAA-Thinkin + +# Abstract + +This work revisits the dominant supervised fine-tuning (SFT) then reinforcement learning (RL) paradigm for training Large Vision-Language Models (LVLMs), and reveals a key finding: SFT can significantly undermine subsequent RL by inducing "pseudo reasoning paths" imitated from expert models. While these paths may resemble the native reasoning paths of RL models, they often involve prolonged, hesitant, less informative steps, and incorrect reasoning. To systematically study this effect, we introduce VLAA-Thinking, a new multimodal dataset designed to support reasoning in LVLMs. Constructed via a six-step pipeline involving captioning, reasoning distillation, answer rewriting and verification, VLAA-Thinking comprises high-quality, step-by-step visual reasoning traces for SFT, along with a more challenging RL split from the same data source. Using this dataset, we conduct extensive experiments comparing SFT, RL and their combinations. Results show that while SFT helps models learn reasoning formats, it often locks aligned models into imitative, rigid reasoning modes that impede further learning. In contrast, building on the Group Relative Policy Optimization (GRPO) with a novel mixed reward module integrating both perception and cognition signals, our RL approach fosters more genuine, adaptive reasoning behavior.
Notably, our model VLAA-Thinker, based on Qwen2.5VL 3B, achieves top-1 performance on Open LMM Reasoning Leaderboard1 among 4B scale LVLMs, surpassing the previous state-of-the-art by $1.8\%$ . We hope our findings provide valuable insights in developing reasoning-capable LVLMs and can inform future research in this area. + +# 1 Introduction + +Large Language Models (LLMs) with strong reasoning capability have recently gained wide attention with the emergence of OpenAI's o1/o3 and Deepseek-R1 (Guo et al., 2025; Jaech et al., 2024). A common practice to empower models with reasoning abilities comprises two steps: supervised fine-tuning (SFT) on reasoning data, followed by reinforcement learning (RL) to further boost performance. This successful paradigm has inspired efforts to extend these strengths beyond textual domains to Large Vision-Language Models (LVLMs) (Peng et al., 2025; Chen et al., 2025a; Deng et al., 2025b; Shen et al., 2025; Yang et al., 2025b). + +![](images/26a82980fa36ddddb7bc9fae55f5e05aa8597f3be6cb682495bfb84ebe497bc9.jpg) +Figure 1: Examples from LVLMs trained with different strategies for reasoning. Left: response from a model trained with SFT, showing pseudo reasoning traces and a number of pseudo self-reflective cues (i.e., aha-moments) imitated from R1. Right: response from a model trained with RL, showing native reasoning ability and authentic aha-moments emerged from RL training. Wrong reasoning steps are colored red and aha-moments are highlighted. + +In this work, we take a step further and examine whether the widely adopted "SFT then RL" paradigm similarly benefits the development of reasoning-capable LVLMs. Specifically, we ask: 1) What are the distinct effects of SFT and RL in multimodal reasoning? and 2) Is this two-stage paradigm truly necessary for reasoning in LVLMs?
To systematically explore these questions, we curate VLAA-Thinking, the first comprehensive and high-quality image-text reasoning dataset explicitly designed to support both SFT and RL. Unlike prior datasets, VLAA-Thinking includes detailed, step-by-step reasoning traces derived from the R1-style "think-then-speak" intermediate reasoning. We construct a dedicated SFT split featuring multimodal chain-of-thought (CoT) examples suitable for visual instruction tuning, alongside a more challenging RL split curated from the same source to encourage deeper and more adaptive reasoning behaviors. To effectively transfer reasoning capabilities from text-only models to the multimodal domain, we construct our dataset through a six-stage pipeline: metadata collection, image captioning, R1-based distillation, answer rewriting, verification, and split curation. Specifically, we input image captions and visual questions into DeepSeek-R1 to generate initial reasoning traces. These outputs are then rewritten for improved fluency and verified for correctness using a GPT-based verifier, resulting in a high-quality multimodal reasoning dataset for SFT and RL.

Next, we carefully ablate the roles of SFT, RL and their combinations in multimodal reasoning using our VLAA-Thinking dataset. To better understand the role of SFT, we perform a detailed analysis, systematically examining the impact of SFT data type (e.g., with and without the self-reflective "aha moments"), dataset scale, and model capacity. To explore the potential of RL in the vision-language context, we design a novel mixed reward function within the Group Relative Policy Optimization (GRPO) (Shao et al., 2024) framework that involves both perception and cognition rewards to incentivize the model to produce well-reasoned answers. Specifically, our mixed reward signal blends 2 types of reward with 5 types of functions. For rule-based questions, there are functions for digit, multiple-choice, math and bounding box outputs.
For open-ended questions, we adopt a competent reward model, XComposer-2.5-RM (Zang et al., 2025), along with a reference-based reward method to score an answer. We then closely investigate the effects of different reward functions, base models, and the interaction between SFT and GRPO to further optimize reasoning capabilities.

Our extensive experiments comparing SFT and RL reveal several noteworthy insights. First, we probe the contribution of SFT and RL in multimodal reasoning: while SFT improves performance on standard tasks over the base model, it falls short in enhancing complex reasoning. Merely imitating an expert's thinking through SFT often induces "pseudo reasoning paths", a superficial reasoning pattern which may contain "pseudo aha moments" (superficial self-reflective cues), as illustrated in Figure 1. We show that these imitated reasoning patterns can hinder genuine reasoning advancement, e.g., causing a $47\%$ relative performance drop on 7B models. This observation is also in line with recent studies highlighting the need for feedback and exploration signals to drive advanced reasoning behaviors (Peng et al., 2025). Additionally, our ablations show that for rule-based rewards, math and multiple-choice rewards are more beneficial than others, and that a combination of both rule-based and open-ended rewards yields the best performance.

![](images/cbb72006da68d5320a45591753be2b14afba17e28ba4dac974806df48c4a4cbc.jpg)
Figure 2: Data generation pipeline. We first generate initial reasoning traces by feeding detailed captions and visual questions into DeepSeek-R1. These outputs are then rewritten for improved fluency and verified for correctness using a GPT-based verifier. The resulting data is split into VLAA-Thinking-SFT and VLAA-Thinking-RL.
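To make the reward design concrete, the sketch below illustrates how such a mixed reward module might route rule-based and open-ended answers. This is our own simplified illustration, not the authors' implementation: the function names (`rule_based_reward`, `open_ended_reward`, `iou`) and the `beta` default are assumptions, and the open-ended branch follows the reference-based formula $1 - \exp(-(S_\theta(\hat{y}) - S_\theta(y))\beta)$ given in Section 4.1.

```python
import math

def iou(box_a, box_b):
    """Intersection-over-Union of two [x1, y1, x2, y2] boxes; used as a reward in [0, 1]."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def open_ended_reward(score_pred, score_ref, beta=0.5):
    """Reference-based reward normalized into [0, 1]: 0 unless the sampled
    answer outscores the reference under the reward model."""
    if score_pred <= score_ref:
        return 0.0
    return 1.0 - math.exp(-(score_pred - score_ref) * beta)

def rule_based_reward(kind, pred, target):
    """Exact match for digit / multiple-choice answers, IoU for bounding boxes."""
    if kind in ("digit", "mcq"):
        return 1.0 if str(pred).strip().lower() == str(target).strip().lower() else 0.0
    if kind == "bbox":
        return iou(pred, target)
    raise ValueError(f"unknown reward type: {kind}")
```

A math-expression branch would additionally call a symbolic equivalence checker (the paper uses the Math Verify package), which we omit here for brevity.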
While prior work suggests that SFT followed by RL in LVLMs offers the best of both worlds (Guo et al., 2025; Yang et al., 2025b; Deng et al., 2025b), first mimicking a good reasoning format and then refining it via RL feedback, we find that applying SFT before GRPO hurts performance on aligned models, with an average $12.7\%$ drop; even smaller-scale SFT leads to a similar decline. Regarding model size, larger models are not immune to the degradation brought by SFT, as 7B models suffer almost the same performance drop as their smaller counterparts. Finally, examining the training procedure, we observe little correlation between response length, reward, and performance: SFT-ed models obtain higher initial rewards and produce longer responses yet underperform RL-trained ones, contrasting with the previous observation that better models usually produce longer answers with higher RL reward (Guo et al., 2025; Peng et al., 2025).

To summarize, while SFT helps unaligned models follow instructions, it limits exploration during RL by promoting imitative reasoning. In contrast, learning directly from reward signals yields more effective and adaptable thinking behavior. Empirically, direct RL proves superior. Our model, VLAA-Thinker-Qwen2.5VL-3B, achieves the top-1 performance on the Open LMM Reasoning Leaderboard among 4B-scale LVLMs, surpassing the previous state-of-the-art by $1.8\%$ . Our case study further highlights these gains, with more concise, effective reasoning traces in model answers.

# 2 The VLAA-Thinking Dataset

To systematically evaluate the "SFT then RL" paradigm for developing reasoning capabilities in LVLMs, we construct VLAA-Thinking, a dataset that consists of two parts: 1) VLAA-Thinking-SFT, which captures step-by-step reasoning grounded in visual inputs for SFT, and 2) VLAA-Thinking-RL, which contains challenging samples designed specifically for RL.
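Concretely, each sample in both splits follows the {image, question, reasoning, answer} format described later in this section. The record below is a purely hypothetical illustration of that schema (all field values are invented), with the reasoning trace wrapped in R1-style think tags:

```python
# Hypothetical VLAA-Thinking record; all field values below are invented
# for illustration and do not come from the released dataset.
sample = {
    "image": "images/example_question.jpg",
    "question": "How many red cubes are in the scene?",
    "reasoning": (
        "<think>The scene contains three cubes. Two are described as red "
        "and one as blue, so the number of red cubes is 2.</think>"
    ),
    "answer": "2",
}

# Every record carries the same four fields.
assert set(sample) == {"image", "question", "reasoning", "answer"}
```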
Our data generation pipeline is designed to transfer reasoning capabilities from a powerful text-only model to the multimodal domain through a structured, multi-stage process. The entire pipeline, as illustrated in Figure 2, consists of six key components: + +#1: Metadata Collection We collect metadata from 9 vision-language datasets featuring either closed- or open-ended questions. Specifically, we sample data containing unique images from CLEVR-Math (Lindström & Abraham, 2022), Math PUMA (Zhuang et al., 2024), ArxivQA (Li et al., 2024a), DocVQA (Mathew et al., 2021), VizWiz (Gurari et al., 2018), and ALLaVA (Chen et al., 2024a), and process them through our complete data pipeline. In addition, we directly adopt COCO and VisualGenome data from LLaVA-CoT (Xu et al., + +
| Name | Data Type | #Ori. | #Pipeline | #Final SFT | #Final RL |
| --- | --- | --- | --- | --- | --- |
| *Collected from Distilling R1* | | | | | |
| CLEVR-Math | Closed-end | 35,000 | 28,018 | 5,923 | 2,000 |
| GeoQA170K | Closed-end | - | - | - | 6,499 |
| Math PUMA | Closed-end | 30,000 | 26,672 | 19,258 | 6,696 |
| ArxivQA | Closed-end | 54,399 | 51,348 | 34,604 | 1,000 |
| DocVQA | Closed-end | 10,194 | 8,206 | 4,897 | 1,000 |
| VizWiz | Closed-end | 20,523 | 6,528 | 4,266 | 1,000 |
| ALLaVA-LAION | Open-end | 47,066 | 18,123 | 10,496 | 3,000 |
| *Collected from LLaVA-CoT* | | | | | |
| COCO | Closed-end | 3,000 | 3,000 | 8,727 | 2,000 |
| VisualGenome | Closed-end | 3,000 | 3,000 | 38,242 | 2,000 |
| Total | Closed- & Open-end | 203,182 | 144,895 | 126,413 | 25,195 |
Table 1: Data statistics of VLAA-Thinking. We present the original volume of metadata (#Ori.), the data size after the distillation pipeline (#Pipeline), and the size of sampled examples for SFT (#Final SFT) and RL (#Final RL), respectively. Note that we only use GeoQA170K with verifiable answers for the RL split.

2024). An exception is GeoQA170K (Gao et al., 2023), which we include only in the RL split due to persistent hallucination issues during captioning. Detailed statistics are in Table 1.

#2: Visual Input and Additional Information Each sample begins with an image, a question, and its corresponding answer. To bridge the gap between the visual modality and language reasoning, we resort to GPT-4o to generate a detailed image caption describing the content in structured and semantically rich language (detailed prompts in Appendix A.1). During this process, we take full advantage of the knowledge provided in the data beyond just the GPT captions. In detail, we provide the following dataset-specific information: (1) CLEVR-Math: instructions for synthesizing the image from CLEVR (Johnson et al., 2017); (2) Math PUMA: textual descriptions of the math problems in the image from the dataset itself; (3) ALLaVA-LAION: fine-grained and verified GPT-4V captions from the original dataset.

#3: Reasoning Answer Distillation We utilize a strong text-only reasoning model, DeepSeek-R1, to generate thinking rationales and final answers. The model is provided with the image caption, the visual question, and additional information from certain datasets. It responds in a structured reasoning format that wraps a sequence of logical steps leading to the final answer between <think> and </think> tags.

#4: Answer Rewriting To enhance consistency and eliminate modality-specific artifacts, the raw reasoning answers generated by R1 are passed through a rewriting module (i.e., GPT-3.5-turbo (Brown et al., 2020) in our experiment).
This module removes unnecessary phrases (e.g., references to the "caption") and ensures that the answer adheres to a clean, instruction-following format grounded in the image. We further filter out samples with a sentence-length gap larger than 15 words to ensure minimal modification in this process.

#5: Automated Verification To assess whether the generated reasoning answers are correct with respect to the ground-truth answers, we implement an automated verifier. This verifier compares the rewritten reasoning answer to the ground truth of the visual question, determining whether the output is correct or incorrect. Only the examples that are verified as correct are retained as the final training data.

#6: Curating Splits for SFT and RL The last step of our data generation pipeline is to curate two non-overlapping training sets for SFT and RL, respectively. Inspired by Chu et al. (2025), which finds that RL is particularly effective in encouraging deeper reasoning on challenging cases, we aim to select more challenging samples for the RL split. To achieve this, we propose using the presence of self-reflective cues (i.e., the "aha moments") in the distilled answers as an indicator of a sample's difficulty level (details are in Appendix A.2). For the SFT split, we exclude samples with "aha moments", as such samples may be too complex to fully imitate through fine-tuning. The harder examples with "aha moments" form the RL split, on which reward-driven learning may be better suited to elicit meaningful reflection.

Following these steps, our dataset adheres to the format {image, question, reasoning, answer}, with reasoning and answer generated by DeepSeek-R1. We construct a high-quality multimodal reasoning dataset with 126,413 samples for SFT and 25,195 samples for RL.

# 3 Investigating The Role of SFT for Multimodal Reasoning

SFT has become the de-facto approach for training LLMs.
Recent studies aim to extend the strengths of SFT to empower LVLMs with reasoning abilities by training on specially formatted data. Unlike prior methods that incorporate standalone textual descriptions of images (Xu et al., 2024), this direct strategy enables the model to develop grammatically coherent reasoning abilities, allowing it to "think before speaking." In recent vision-language reasoning systems, there is a notable trend of complementing or even replacing SFT with RL to enhance complex reasoning abilities (Peng et al., 2025; Deng et al., 2025b). We follow this line and take it further by probing the underlying cause of this shift. Our findings suggest that the self-reflective thinking ("aha moments") learned during SFT is overloaded with excessive and irrelevant reasoning, becomes what we call "pseudo aha moments", and ultimately hurts performance. In this section, we explore 1) how the model performs when SFT-ed on data with aha moments and 2) the effect of SFT data size on model performance.

# 3.1 Experiment Setup

To investigate the effect of SFT training with aha moments, we collect the distilled VQA pairs whose distilled answers contain aha moments, totaling 55K samples. To study the effect of SFT with different sizes of training sets, we use perplexity (PPL) filtering to obtain a smaller SFT dataset. Specifically, we compute the PPL score of each answer in VLAA-Thinking-SFT-126K using Qwen2-VL-2B and Qwen2.5-VL-3B, and sort all samples by their average PPL scores over the two models. We keep the samples with high PPL to obtain a total of 25K SFT samples, as these harder examples push models to learn more effectively and efficiently (Ankner et al., 2024; Li et al., 2024b).

We select four models for training: Qwen2VL (2B and 7B) and Qwen2.5VL (3B and 7B). Each model is trained with a batch size of 128 and its vision encoder frozen.
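The PPL-based selection above can be sketched as follows. This is a schematic of the described procedure rather than the authors' code; the `ppl_qwen2` and `ppl_qwen25` fields stand in for per-answer perplexities that would come from scoring with Qwen2-VL-2B and Qwen2.5-VL-3B.

```python
def select_high_ppl(samples, keep):
    """Keep the `keep` samples whose answers have the highest average PPL
    over the two scoring models (harder examples are assumed to teach more)."""
    def avg_ppl(s):
        return (s["ppl_qwen2"] + s["ppl_qwen25"]) / 2.0
    return sorted(samples, key=avg_ppl, reverse=True)[:keep]

# Toy usage: three scored samples, keep the two hardest.
scored = [
    {"id": "a", "ppl_qwen2": 8.0, "ppl_qwen25": 9.0},
    {"id": "b", "ppl_qwen2": 3.0, "ppl_qwen25": 4.0},
    {"id": "c", "ppl_qwen2": 6.0, "ppl_qwen25": 5.0},
]
hardest = select_high_ppl(scored, keep=2)
```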
We evaluate model performance with VLMEvalKit (Duan et al., 2024) on the 6 challenging math reasoning benchmarks hosted on the Open LMM Reasoning Leaderboard: MathVista (Lu et al., 2024), MathVision (Wang et al., 2024b), MathVerse (Zhang et al., 2024), DynaMath (Zou et al., 2024), WeMath (Qiao et al., 2024), and LogicVista (Xiao et al., 2024). We present the percentage of relative performance drop of different models in Figure 3. Detailed training and evaluation setups are in Appendix B.

# 3.2 Findings

SFT with Aha Moments Degrades Performance. We present results for the Qwen-2.5-VL-3B model trained under three different settings using our SFT data in Table 2. Somewhat unexpectedly, the model fine-tuned on the 55K examples containing aha moments performs significantly worse than the base model, with an average drop of $10.5\%$ . This suggests that chasing the aha moment through SFT is unreliable, as SFT merely teaches the model to mimic rather than to generalize genuine self-reflective reasoning. Additionally, the table shows evidence that straightforward SFT using multimodal reasoning data also degrades performance: we observe average drops of $10.2\%$ and $19.1\%$ when fine-tuning on 25K and 126K samples, respectively.
| Model | Avg. |
| --- | --- |
| Qwen2.5-VL-3B | 31.8 |
| w/ aha-55K | 21.3 |
| w/ 25K | 21.6 |
| w/ 126K | 12.7 |
Table 2: Average performance over 6 reasoning benchmarks of Qwen-2.5-VL-3B SFT-ed on different sizes of SFT data and on data containing only examples with aha moments (aha-55K).

![](images/50a178852959df63a79d3208b17d5f7213c71da1a6b922abea9170a8f72718f7.jpg)

![](images/bc8a97c604c33c58650f3b94e86f5862f0e2c5be2e00bca00f7fcedc71d6029f.jpg)

![](images/08f5d920a8030eb842974167174a31e4f6c23bca39edd9ec34586765b0d24251.jpg)
Figure 3: Delta percentage performance change of different models trained with supervised fine-tuning (SFT) only.

![](images/468ddfa3bda7713c6d236d74f4bd98d6d706433967e3328a69e6dbdfd153a29e.jpg)

More SFT Data, Worse Performance. Counterintuitively, even a five-fold increase in the supervised dataset (from 25K to 126K instances) often fails to improve performance and in most cases actually harms it. Models trained with 126K SFT samples suffer an average relative performance drop of over $14\%$ compared to their 25K-trained counterparts across all model and task settings (e.g., 25K: $32.2\%$ vs. 126K: $47.0\%$ ). This degradation is particularly evident on complex datasets such as WeMath and DynaMath, where the relative decrease reaches as high as $97.9\%$ on average over Qwen2.5-VL models. Even on mid-difficulty benchmarks like MathVision and MathVerse (i.e., where model performance is relatively higher), the 126K SFT models underperform, with an average drop of $28.6\%$ compared to the untrained model over 4 models. These results suggest that simply scaling up SFT data does not boost the generalizable reasoning skills of LVLMs, and may instead suppress the model's capacity on various reasoning tasks.

Larger Models Are Not Immune to SFT Degeneration. Contrary to expectations, scaling up model size does not mitigate the adverse effects of excessive SFT; under heavier SFT, larger models also exhibit pronounced drops on the most challenging evaluations.
Larger 7B models fine-tuned on 126K examples experience drops nearly identical in magnitude to those of their smaller 2B and 3B counterparts: $47.2\%$ for smaller models vs. $45.4\%$ for larger models, compared with base models. Notably, despite the strong performance of the Qwen2.5-VL-7B model (e.g., $68.1\%$ on MathVista), it also suffers an average decline of $52.5\%$ on these reasoning tasks when SFT-ed with 126K data.

These findings highlight the limitations of SFT as a tool for enhancing multimodal reasoning. While it may be suitable for learning reasoning formats, it falls short of expectations for fostering inherent self-reflection. Rather than simply scaling supervision data, our results suggest a shift toward more advanced training methods like RL.

# 4 Improving Multimodal Reasoning with Mixed Rewards

The previous section shows that SFT is insufficient to transfer R1's ability to LVLMs on vision-language tasks. It is therefore crucial to seek other post-training methods to elicit the reasoning ability of LVLMs. Since reinforcement learning (RL) is effective in enhancing reasoning ability (Yang et al., 2025a; Kirk et al., 2023), and GRPO has recently been proven more effective and efficient on textual math reasoning tasks (Shao et al., 2024; Jahn et al., 2025) than other methods like PPO (Schulman et al., 2017), we apply GRPO training to vision-language reasoning tasks.

![](images/e82da74faa97dc6987f3d1c29cb6286eb3c55cf5259359748465bbf3676e85b2.jpg)
Figure 4: The proposed Mixed Reward Module for GRPO training, comprising 2 reward formats (rule-based and open-ended) and 5 types of verifiable rewards (digit, MCQ, math, IoU and general reasoning).
Mathematically, let $q$ be a query and $\{o_i\}_{i=1}^G$ be a group of $G$ sampled outputs from the old policy model $\pi_{\theta_{\mathrm{old}}}$. GRPO maximizes the following objective:

$$
\mathcal{J}_{\mathrm{GRPO}}(\theta) = \mathbb{E}_{q,\,\{o_i\} \sim \pi_{\theta_{\mathrm{old}}}} \left[ \frac{1}{G} \sum_{i=1}^{G} \frac{1}{|o_i|} \sum_{t=1}^{|o_i|} \min\left( r_t(\theta)\, \hat{A}_{i,t},\; \mathrm{clip}\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right) \hat{A}_{i,t} \right) \right] - \beta\, D_{\mathrm{KL}}\left(\pi_{\theta} \,\|\, \pi_{\mathrm{ref}}\right)
$$

where $r_t(\theta) = \frac{\pi_{\theta}(o_{i,t} \mid q, o_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(o_{i,t} \mid q, o_{i,<t})}$ is the token-level importance ratio, $\hat{A}_{i,t}$ is the estimated advantage, $\beta$ is the KL penalty coefficient, and $\pi_{\theta}, \pi_{\theta_{\mathrm{old}}}, \pi_{\mathrm{ref}}$ are the current, old, and reference policies, respectively.

# 4.1 GRPO with Mixed Reward

To better adapt GRPO to multimodal reasoning, in addition to adopting rule-based rewards similar to textual GRPO training, it is necessary to account for the additional characteristics introduced by the vision modality. Inspired by Fu et al. (2024), which benchmarks LVLMs by perception and cognition (reasoning), we propose a mixed reward framework for GRPO training, as illustrated in Figure 4. The reward system comprises five types of verifiable rewards in two formats, encompassing both visual perception and visual reasoning tasks.

Rule-Based Reward There are 4 types of rule-based rewards: digit matching, option letter matching, math expression matching, and Intersection over Union (IoU) for bounding boxes. For digit matching, the model is asked to answer counting questions from CLEVR-Math whose ground truths are a single digit. For option letter matching, the model is required to answer an MCQ. For math expression matching, the model is asked to solve a math question, such as finding a function expression or the volume of a cone, and to output its answer in LaTeX format.
We use the Math Verify package to check for correctness. For bounding boxes, the model is prompted to output the bounding box coordinates of an object in the image, and an IoU score (ranging from 0 to 1) is computed as the reward.

Open-ended Reward We leverage InternLM-XComposer2.5-Reward (Zang et al., 2025) as the scorer, denoted as $S_{\theta}(\cdot)$ , which takes an image and a QA pair as input and outputs a reward score. Following Muhtar et al. (2025), the reward for a sampled response $\hat{y}$ is computed as $R_{open} = 1 - \exp(-\left(S_{\theta}(\hat{y}) - S_{\theta}(y)\right) \times \beta)$ if $S_{\theta}(\hat{y}) > S_{\theta}(y)$ else 0, where $S_{\theta}(y)$ is the score of the reference answer and $\beta$ is a smoothing hyperparameter. Note that the open-ended reward is normalized into [0,1], which is consistent with the scale of the rule-based rewards, partially avoiding reward hacking during training.

Implicit Format Reward Unlike Guo et al. (2025) and its subsequent works, which use a separate reward term for format correctness, we discard this format reward term and make the format reward supersede all other rewards: whenever we are unable to extract a valid response from the raw answer, the reward is 0. We empirically find that by specifying the output format in the system prompt, the model is able to generate answers with correct formats through trial and error. The implicit format reward design simplifies the reward computation. Further, it may yield better performance, since less restriction is imposed on the exploration process (Zeng et al., 2025).

# 4.2 Effect of SFT on GRPO Training
| GRPO Backbone | MathVista | MathVision | MathVerse (vision-only) | DynaMath (worst) | WeMath | LogicVista | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Qwen2VL-7B-Inst | 59.6 | 19.8 | 33.9 | 15.2 | 30.5 | 36.0 | 32.5 |
| Qwen2VL-7B-Inst+SFT | 43.7 | 14.7 | 19.0 | 3.2 | 11.1 | 27.3 | 19.8 (-39%) |
| Qwen2VL-7B-Base | 59.3 | 18.2 | 33.5 | 11.4 | 23.2 | 36.2 | 30.7 |
| Qwen2VL-7B-Base+SFT | 49.5 | 16.4 | 25.0 | 6.4 | 20.4 | 32.7 | 25.7 (-16%) |
Table 3: Benchmark results of models trained with GRPO on different backbones. SFT+GRPO yields performance degradation, indicating that SFT is NOT compatible with GRPO in multimodal reasoning.

SFT is NOT Compatible with GRPO in Multimodal Reasoning. Although we reveal in Section 3 that SFT alone leads to a performance drop in multimodal reasoning, it is still unclear whether SFT plays a crucial role in aiding GRPO, like the golden key in DeepSeek-R1. We experiment with different backbones for GRPO training. Specifically, we adopt Qwen2VL-7B-Base and Qwen2VL-7B-Inst, perform SFT on them with 25K samples, and then apply GRPO training.

From Table 3, we observe that models undergoing SFT before GRPO training perform worse than those trained with GRPO alone, presenting an average drop of $8.9\%$ across Qwen2VL-Base and Qwen2VL-Inst compared to their non-SFT counterparts. We also find that SFT introduces more degradation to instruction models than to base models without instruction-following capabilities. For instance, Qwen2VL-Inst suffers a $7.7\%$ larger drop in performance than Qwen2VL-Base post-SFT, suggesting that SFT can compromise the instruction-following ability crucial for effective GRPO training. Taken together, these results suggest that SFT is currently incompatible with GRPO in the context of multimodal reasoning, impairing both base and instruction-tuned LVLMs.

![](images/0a2902b9c361de315e237c783ff063178db359c0868a18abba7b7e6f8b5d3c04.jpg)
Figure 5: Impact of SFT with 5K and 10K samples before GRPO. Smaller SFT datasets still jeopardize GRPO performance.

Smaller SFT Datasets Still Jeopardize GRPO Performance. Since we reveal in Section 3.2 that more SFT data yields lower performance, we investigate the effect of downsizing the SFT training set. Following the PPL filtering method in Section 3, we select the top-10K and top-5K samples from VLAA-Thinking-SFT-126K to fine-tune Qwen2.5-VL-3B, followed by GRPO training.
For comparison, we also conduct GRPO training without SFT. + +We present the performance of Qwen2.5-VL-3B on each task in Figure 5. A clear observation is that applying SFT on 5K examples prior to GRPO significantly degrades performance compared to using GRPO alone, showing an average drop of $13.5\%$ . Moreover, scaling up SFT data to 10K yields only a marginal improvement of $0.8\%$ . These results further support that SFT before GRPO can hinder the model's learning capability. + +![](images/fe03487cf0983066d249faec0960558ec4d35cd0b8c40253ea78650b9c538dd3.jpg) +Figure 6: Response length (left) and reward (right) during training. Training with only GRPO yields the lowest response length and yet the highest final reward and best benchmark performance, indicating that response length, reward, and model performance are NOT necessarily related. + +![](images/3525a416ff60c0c03f616e180ccbfe5e048883553436ac72d26e02043a002f8b.jpg) + +Response Length, Reward, and Model Performance are NOT Necessarily Related. Prior work in RL suggests that longer responses often correlate with better reasoning and higher RL rewards (Guo et al., 2025; Zhou et al., 2025; Chen et al., 2025b). However, our findings in Figure 6 reveal that response length and reward in GRPO are not reliable indicators of reasoning ability. For instance, the 10K SFT+GRPO model produces the longest responses but ends up with lower rewards than the GRPO-only model ( $\sim 0.35$ vs. $\sim 0.5$ ) after training. Similarly, the 5K SFT+GRPO variant shows moderate length and reward but still underperforms on downstream tasks. + +Interestingly, both SFT-ed models start with higher initial rewards (e.g., $\sim 0.20$ for $10\mathrm{K}$ SFT+GRPO vs. $\sim 0.05$ for GRPO-only), which is likely due to their early learning experience with supervision since SFT and GRPO data share the same distribution. However, they exhibit limited reward improvement during training, whereas the GRPO-only model rapidly surpasses them. 
These trends further reveal that SFT merely provides a higher "lower bound" for RL training, yet may lower the "upper bound", since the reasoning SFT data constrains the model's exploration paths. Reasoning is thus a natively emerging ability that is more likely to be developed through RL than through SFT. While SFT-ed models may appear to reason, their behavior is closer to pattern imitation, a form of pseudo-reasoning that lacks generalizable reasoning skill.

# 4.3 GRPO Training without SFT

Following the findings in the previous section, we directly conduct GRPO training, which yields four models: VLAA-Thinker-Qwen2-VL-2B, VLAA-Thinker-Qwen2-VL-7B, VLAA-Thinker-Qwen2.5-VL-3B, and VLAA-Thinker-Qwen2.5-VL-7B. We also train the Qwen2-VL-7B base model, and the resulting model is named VLAA-Thinker-Qwen2-7B-Zero.

We sample 4 times for each query with temperature 0.8. Rollout and training batch sizes are set to 512 and 256, respectively. We train our models for 1 episode (outer loop) and 1 epoch per episode (inner loop) on 8 H100 GPUs for 49 steps. More details of the training setup are in Appendix C.1. We follow the identical evaluation setup described in Section 3.1. We present evaluation results in Table 4 and list our main findings below.

Direct GRPO Training Boosts Model Performance. Models trained directly with GRPO on the VLAA-Thinking-RL split consistently outperform their respective base models. For example,
| Model | MathVista | MathVision | MathVerse (vision-only) | DynaMath (worst) | WeMath | LogicVista | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| *4B-scale LVLMs* | | | | | | | |
| Qwen2-VL-2B | 48.0 | 16.1 | 17.5 | 3.8 | 10.8 | 26.6 | 20.5 |
| Qwen2.5-VL-3B | 61.2 | 21.9 | 31.2 | 13.2 | 22.9 | 40.3 | 31.8 |
| VLM-R1-Math-0305 | 62.7 | 21.9 | 32.2 | 13.0 | 30.0 | 40.5 | 33.4 |
| VLAA-Thinker-Qwen2-2B | 43.6 | 14.8 | 19.0 | 3.4 | 12.6 | 30.4 | 20.3 |
| VLAA-Thinker-Qwen2.5-3B | 61.0 | 24.4 | 36.4 | 18.2 | 33.8 | 38.5 | 35.4 |
| *7B-scale LVLMs* | | | | | | | |
| LLaVA-OneVision-7B | 58.6 | 18.3 | 19.3 | 9.0 | 20.9 | 33.3 | 26.6 |
| InternLM-XComposer2.5 | 64.0 | 17.8 | 16.2 | 8.2 | 14.1 | 34.7 | 25.8 |
| InternVL2.5-8B | 64.5 | 17.0 | 22.8 | 9.4 | 23.5 | 36.0 | 28.9 |
| InternVL2-8B | 58.3 | 20.0 | 20.4 | 9.2 | 20.2 | 33.6 | 26.9 |
| Qwen2-VL-7B | 61.6 | 19.2 | 25.4 | 11.0 | 22.3 | 33.3 | 28.8 |
| Qwen2.5-VL-7B | 68.1 | 25.4 | 41.1 | 21.8 | 36.2 | 47.9 | 40.1 |
| VLAA-Thinker-Qwen2-7B-Zero | 59.3 | 18.2 | 33.5 | 11.4 | 23.2 | 36.2 | 30.7 |
| VLAA-Thinker-Qwen2-7B | 59.6 | 19.8 | 33.9 | 15.2 | 30.5 | 36.0 | 32.5 |
| VLAA-Thinker-Qwen2.5-7B | 68.0 | 26.4 | 48.2 | 22.4 | 41.5 | 48.5 | 42.5 |
Table 4: Evaluation results on 6 math reasoning benchmarks from the Open LMM Reasoning Leaderboard. VLAA-Thinker models significantly outperform baselines and other models.

at the 7B scale, the two models trained on VLAA-Thinking achieve an average score of $36.5\%$ , marking a $2.0\%$ improvement over their base model average of $34.5\%$ . Moreover, our best-performing 7B model consistently outperforms other similarly sized LVLMs (e.g., InternVL2.5-8B, LLaVA-OneVision-7B), while our 3B model surpasses the recent reasoning-focused model, VLM-R1-Math, by $1.1\%$ on average. These results once again demonstrate that GRPO significantly enhances reasoning capabilities, even without additional SFT.

Stronger Instruction Model Leads to Better Post-GRPO Reasoning. An interesting observation is that models with better instruction tuning generally perform better. The instruction-aligned Qwen2-7B model, after GRPO, outperforms its unaligned counterpart VLAA-Thinker-Qwen2-7B-Zero by $1.8\%$ on average ( $31.3\%$ vs. $29.5\%$ ), with notable gains on harder tasks like DynaMath ( $5.0\%$ ) and WeMath ( $3.1\%$ ). Moreover, using a stronger instruction-tuned model for GRPO further improves results at both the 3B and 7B scales: VLAA-Thinker-Qwen2.5 surpasses VLAA-Thinker-Qwen2 by $12.6\%$ on average, confirming that higher-quality instruction tuning leads to more effective post-RL reasoning.

![](images/19f40235adb079c22c22a562dfb38d4a909739ff38f4bd7648ca83103cc54804.jpg)
Figure 7: Heatmap of different "aha" expressions generated by VLAA-Thinker models during training.

Emergence of Authentic Aha Moments. To show that our GRPO training can induce an authentic self-reflection process, we plot the frequency of four aha expressions ("alternatively", "double-check", "i should check", "wait") for each VLAA-Thinker model in Figure 7.
Since all models are trained using GRPO without being SFT-ed on distilled reasoning paths, all aha moments emerge from the GRPO process, demonstrating the models' self-developed reflective ability. Another finding is that the number of aha moments does not directly correlate with overall model performance, as more aha moments do not necessarily translate to higher reasoning scores.

# 4.4 Ablations
| Row | Method | Digit | Math | MCQ | IoU | Open-ended | MVi | MVs | WM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | Qwen2.5-VL-3B | | | | | | 21.9 | 31.2 | 22.9 |
| 1 | w/o Digit | | ✓ | ✓ | ✓ | ✓ | 23.5 | 34.6 | 28.8 |
| 2 | w/o Math | ✓ | | ✓ | ✓ | ✓ | 21.4 | 32.7 | 27.0 |
| 3 | w/o MCQ | ✓ | ✓ | | ✓ | ✓ | 21.5 | 33.9 | 18.4 |
| 4 | w/o IoU | ✓ | ✓ | ✓ | | ✓ | 22.8 | 35.3 | 30.0 |
| 5 | All Rule-Based | ✓ | ✓ | ✓ | ✓ | | 22.2 | 34.9 | 30.1 |
| 6 | Mixed Reward | ✓ | ✓ | ✓ | ✓ | ✓ | 24.4 | 36.4 | 33.8 |
Mixed Reward. To demonstrate the effectiveness of our mixed reward strategy, we perform an ablation study on Qwen2.5-VL-3B by selectively disabling individual reward components and evaluating performance across three math reasoning benchmarks, as shown in Table 5. The model trained with the Mixed Reward achieves the best overall performance, with an average improvement of $6.2\%$ over the baseline, demonstrating the effectiveness of our reward design. Using only rule-based rewards (All Rule-Based) also yields consistent gains (e.g., $29.1\%$ vs. the $25.3\%$ baseline), while removing specific components, especially MCQ (w/o MCQ), leads to substantial drops. These results highlight the critical role of rule-based rewards in GRPO for multimodal reasoning tasks.

Hyperparameters. To search for better hyperparameters, we experiment with different learning rate (LR) and KL divergence settings on Qwen2.5-VL-3B. We start with a basic setting where the LR anneals to zero following a cosine scheduler with no KL constraint. Results are shown in Table 6. LR1 uses a minimum learning rate of $8e^{-7}$ with a warmup ratio of $10\%$ , whereas LR2 uses a minimum learning rate of $5e^{-7}$ with a warmup ratio of $3\%$ . Since LR2 performs slightly better than LR1, we compare two KL settings on top of LR2. KL1 uses an initial KL coefficient of $1e^{-2}$ and a target KL of $5e^{-3}$ , whereas KL2 uses an initial KL coefficient of $1e^{-3}$ and a target KL of $5e^{-4}$ . We find that introducing KL constraints significantly improves the performance on MathVerse and DynaMath by $1.1\%$ and $3.2\%$ , respectively, and that using a smaller KL coefficient encourages the model to explore.

Table 5: Ablation of the Mixed Reward on MVi: MathVision, MVs: MathVerse and WM: WeMath. A combination of rule-based and open-ended rewards yields a significant boost in performance.
| Settings | MVs | DM | LV |
| --- | --- | --- | --- |
| Basic | 31.7 | 15.0 | 38.5 |
| *Learning Rate* | | | |
| + LR1 | 33.0 | 16.0 | 38.1 |
| + LR2 | 33.5 | 15.6 | 38.3 |
| *KL Coef.* | | | |
| + KL1 | 34.4 | 18.8 | 37.8 |
| + KL2 | 35.8 | 18.6 | 39.2 |
Table 6: Ablation on LR and KL Coef. on MVs: MathVerse, DM: DynaMath and LV: LogicVista.

# 4.5 Case Study

We provide an example showcasing the improvement of VLAA-Thinker over the original model in Appendix C.3. Qwen2.5VL-7B generates a lengthy response with incorrect reasoning traces. Although it outputs some self-reflective patterns such as "re-evaluate", the final answer remains wrong. In contrast, VLAA-Thinker-Qwen2.5VL-7B reasons on the right track, with only a minor mistake near the end of its thinking process. Nevertheless, the high-level idea and reasoning process are overall correct, demonstrating a strong capability for solving complex reasoning tasks.

# 5 Related Work

Vision-Language Reasoning Models. Recent advances in vision-language (VL) reasoning models build on the success of text-only reasoning systems such as OpenAI's o1 (Jaech et al., 2024) and DeepSeek-R1 (Guo et al., 2025). Earlier VL methods, such as few-shot prompting and chain-of-thought (CoT), offered limited visual reasoning (Brown et al., 2020; Wei et al., 2022). Recently, LLaVA-CoT (Xu et al., 2024) adopted an SFT approach with 4-step structured outputs to enhance the model's reasoning, yet it lacks flexibility due to its rigid output format. More recently, newer models incorporate more natural reasoning traces and reinforcement learning. VLM-R1 (Shen et al., 2025) and R1-V (Chen et al., 2025a) align multimodal LLMs using step-by-step reasoning and policy optimization. VisualThinker-R1-Zero (Zhou et al., 2025) goes further by training a 2B model via pure RL from scratch, achieving emergent inner reasoning. LMM-R1 (Peng et al., 2025) transfers CoT skills from language to vision through staged RL. Vision-R1 (Huang et al., 2025) combines reasoning trace supervision and RL with correctness and format rewards to train a strong 7B VL reasoner.
Different from these concurrent works, we propose a high-quality multimodal reasoning dataset with R1-like reasoning traces for both SFT and RL, and provide a comprehensive study on training paradigms.

Reward Modeling in Reinforcement Learning. Reward design plays a central role in reasoning-oriented RL. While model-based rewards offer flexibility (Kwon et al., 2023; Wang et al., 2024a; Gao et al., 2024), they are prone to reward hacking (Eisenstein et al., 2023; Chen et al., 2024b; Fu et al., 2025), making them risky for reasoning tasks. Recent VL models prefer binary correctness rewards (Huang et al., 2025; Zhou et al., 2025) for math or QA tasks, directly reinforcing accurate outputs. Others apply rule-based rewards, enforcing structured formats or logic chains (Liu et al., 2025; Deng et al., 2025a). While recent studies deploy strong reward models to enhance LVLM reasoning, they are limited to specific domains or simpler tasks (Muhtar et al., 2025; Tu et al., 2025). GRPO-style methods use relative ranking within output batches to guide optimization without value critics (Shao et al., 2024; Guo et al., 2025). Our Mixed Reward objective combines model-based and rule-based rewards across four complex rewarding scenarios, yielding better performance than existing approaches.

# 6 Conclusion

This work provides a comparative analysis of the effectiveness of leveraging SFT or RL (more specifically, GRPO) to build LVLMs with strong reasoning ability. We show through extensive experiments that distilling reasoning data and performing SFT is a deficient way to transfer reasoning ability across modalities. We then extend our dataset to GRPO training with a proposed mixed reward objective, which yields substantial improvements over the baseline models. We present several findings on combining SFT and GRPO and on the correlation between reward, response length, and final performance.
These results indicate that reasoning is a natively emergent ability acquired through RL rather than SFT, which merely equips the model with a 'pseudo-reasoning' ability.

# Acknowledgement

We thank the Microsoft Accelerate Foundation Models Research Program for supporting our computing needs.

# References

Zachary Ankner, Cody Blakeney, Kartik Sreenivasan, Max Marion, Matthew L Leavitt, and Mansheej Paul. Perplexed by perplexity: Perplexity-based data pruning with small reference models. arXiv preprint arXiv:2405.20541, 2024.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. NeurIPS, 2020.
Guiming Hardy Chen, Shunian Chen, Ruifei Zhang, Junying Chen, Xiangbo Wu, Zhiyi Zhang, Zhihong Chen, Jianquan Li, Xiang Wan, and Benyou Wang. Allava: Harnessing gpt4v-synthesized data for lite vision-language models. arXiv preprint arXiv:2402.11684, 2024a.
Liang Chen, Lei Li, Haozhe Zhao, Yifan Song, and Vinci. R1-v: Reinforcing super generalization ability in vision-language models with less than $3. https://github.com/Deep-Agent/R1-V, 2025a. Accessed: 2025-02-02.
Lichang Chen, Chen Zhu, Davit Soselia, Jiuhai Chen, Tianyi Zhou, Tom Goldstein, Heng Huang, Mohammad Shoeybi, and Bryan Catanzaro. Odin: Disentangled reward mitigates hacking in rlhf. arXiv preprint arXiv:2402.07319, 2024b.
Zhipeng Chen, Yingqian Min, Beichen Zhang, Jie Chen, Jinhao Jiang, Daixuan Cheng, Wayne Xin Zhao, Zheng Liu, Xu Miao, Yang Lu, et al. An empirical study on eliciting and improving r1-like reasoning models. arXiv preprint arXiv:2503.04548, 2025b.
Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V Le, Sergey Levine, and Yi Ma. Sft memorizes, rl generalizes: A comparative study of foundation model post-training. arXiv preprint arXiv:2501.17161, 2025.
Huilin Deng, Ding Zou, Rui Ma, Hongchen Luo, Yang Cao, and Yu Kang. Boosting the generalization and reasoning of vision language models with curriculum reinforcement learning. arXiv preprint arXiv:2503.07065, 2025a.
Yihe Deng, Hritik Bansal, Fan Yin, Nanyun Peng, Wei Wang, and Kai-Wei Chang. Openvlthinker: An early exploration to complex vision-language reasoning via iterative self-improvement. arXiv preprint arXiv:2503.17352, 2025b.
Haodong Duan, Junming Yang, Yuxuan Qiao, Xinyu Fang, Lin Chen, Yuan Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Jiaqi Wang, et al. Vlmevalkit: An open-source toolkit for evaluating large multi-modality models. In Proceedings of the 32nd ACM International Conference on Multimedia, pp. 11198-11201, 2024.
Jacob Eisenstein, Chirag Nagpal, Alekh Agarwal, Ahmad Beirami, Alex D'Amour, DJ Dvijotham, Adam Fisch, Katherine Heller, Stephen Pfohl, Deepak Ramachandran, et al. Helping or herding? reward model ensembles mitigate but do not eliminate reward hacking. arXiv preprint arXiv:2312.09244, 2023.
Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, and Rongrong Ji. Mme: A comprehensive evaluation benchmark for multimodal large language models, 2024. URL https://arxiv.org/abs/2306.13394.
Jiayi Fu, Xuandong Zhao, Chengyuan Yao, Heng Wang, Qi Han, and Yanghua Xiao. Reward shaping to mitigate reward hacking in rlhf. arXiv preprint arXiv:2502.18770, 2025.
Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, Jianhua Han, Hang Xu, Zhenguo Li, et al. G-llava: Solving geometric problem with multi-modal large language model. arXiv preprint arXiv:2312.11370, 2023.
Jiaxuan Gao, Shusheng Xu, Wenjie Ye, Weilin Liu, Chuyi He, Wei Fu, Zhiyu Mei, Guangju Wang, and Yi Wu. On designing effective rl reward at training time for llm reasoning. arXiv preprint arXiv:2410.15115, 2024.
+ +Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025. +Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. Vizwiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3608-3617, 2018. +Jian Hu, Xibin Wu, Zilin Zhu, Xianyu, Weixun Wang, Dehao Zhang, and Yu Cao. Openrlhf: An easy-to-use, scalable and high-performance rlhf framework. arXiv preprint arXiv:2405.11143, 2024. +Wenxuan Huang, Bohan Jia, Zijie Zhai, Shaosheng Cao, Zheyu Ye, Fei Zhao, Yao Hu, and Shaohui Lin. Vision-r1: Incentivizing reasoning capability in multimodal large language models. arXiv preprint arXiv:2503.06749, 2025. +Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024. +Afrar Jahin, Arif Hassan Zidan, Yu Bao, Shizhe Liang, Tianming Liu, and Wei Zhang. Unveiling the mathematical reasoning in deepseek models: A comparative study of large language models. arXiv preprint arXiv:2503.10573, 2025. +Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2901-2910, 2017. +Robert Kirk, Ishita Mediratta, Christoforos Nalmpantis, Jelena Luketina, Eric Hambro, Edward Grefenstette, and Roberta Raileanu. Understanding the effects of rlhf on llm generalisation and diversity. arXiv preprint arXiv:2310.06452, 2023. +Minae Kwon, Sang Michael Xie, Kalesha Bullard, and Dorsa Sadigh. 
Reward design with language models. arXiv preprint arXiv:2303.00001, 2023. +Lei Li, Yuqi Wang, Runxin Xu, Peiyi Wang, Xiachong Feng, Lingpeng Kong, and Qi Liu. Multimodal arxiv: A dataset for improving scientific comprehension of large vision-language models. arXiv preprint arXiv:2403.00231, 2024a. +Ming Li, Yong Zhang, Shwai He, Zhitao Li, Hongyu Zhao, Jianzong Wang, Ning Cheng, and Tianyi Zhou. Superfiltering: Weak-to-strong data filtering for fast instruction-tuning. arXiv preprint arXiv:2402.00530, 2024b. +Adam Dahlgren Lindström and Savitha Sam Abraham. Clevr-math: A dataset for compositional language, visual and mathematical reasoning. arXiv preprint arXiv:2208.05358, 2022. +Ziyu Liu, Zeyi Sun, Yuhang Zang, Xiaoyi Dong, Yuhang Cao, Haodong Duan, Dahua Lin, and Jiaqi Wang. Visual-rft: Visual reinforcement fine-tuning. arXiv preprint arXiv:2503.01785, 2025. +Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. In International Conference on Learning Representations (ICLR), 2024. +Minesh Mathew, Dimosthenis Karatzas, and CV Jawahar. Docvqa: A dataset for vqa on document images. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pp. 2200-2209, 2021. + +Dilxat Muhtar, Enzhuo Zhang, Zhenshi Li, Feng Gu, Yanglangxing He, Pengfeng Xiao, and Xueliang Zhang. Quality-driven curation of remote sensing vision-language data via learned scoring models. arXiv preprint arXiv:2503.00743, 2025. +Yingzhe Peng, Gongrui Zhang, Miaosen Zhang, Zhiyuan You, Jie Liu, Qipeng Zhu, Kai Yang, Xingzhong Xu, Xin Geng, and Xu Yang. Lmm-r1: Empowering 3b lmms with strong reasoning abilities through two-stage rule-based rl. arXiv preprint arXiv:2503.07536, 2025. 
+Runqi Qiao, Qiuna Tan, Guanting Dong, Minhui Wu, Chong Sun, Xiaoshuai Song, Zhuoma GongQue, Shanglin Lei, Zhe Wei, Miaoxuan Zhang, et al. We-math: Does your large multimodal model achieve human-like mathematical reasoning? arXiv preprint arXiv:2407.01284, 2024. +John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. +Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024. +Haozhan Shen, Zilun Zhang, Kangjia Zhao, Qianqian Zhang, Ruochen Xu, and Tiancheng Zhao. Vlm-r1: A stable and generalizable r1-style large vision-language model. https://github.com/om-ai-lab/VLM-R1, 2025. Accessed: 2025-02-15. +Haoqin Tu, Weitao Feng, Hardy Chen, Hui Liu, Xianfeng Tang, and Cihang Xie. Vilbench: A suite for vision-language process reward modeling. arXiv preprint arXiv:2503.20271, 2025. +Binghai Wang, Rui Zheng, Lu Chen, Yan Liu, Shihan Dou, Caishuang Huang, Wei Shen, Senjie Jin, Enyu Zhou, Chenyu Shi, et al. Secrets of rlhf in large language models part ii: Reward modeling. arXiv preprint arXiv:2401.06080, 2024a. +Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Houxing Ren, Aojun Zhou, Mingjie Zhan, and Hongsheng Li. Measuring multimodal mathematical reasoning with math-vision dataset. In The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024b. URL https://openreview.net/forum?id=QWTCxMpPA. +Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. NeurIPS, 2022. +Yijia Xiao, Edward Sun, Tianyu Liu, and Wei Wang. Logicvista: Multimodal llm logical reasoning benchmark in visual contexts. arXiv preprint arXiv:2407.04973, 2024. 
Guowei Xu, Peng Jin, Hao Li, Yibing Song, Lichao Sun, and Li Yuan. Llava-cot: Let vision language models reason step-by-step, 2024. URL https://arxiv.org/abs/2411.10440.
Haoyan Yang, Ting Hua, Shangqian Gao, Binfeng Xu, Zheng Tang, Jie Xu, Hongxia Jin, and Vijay Srinivasan. Dynamic noise preference optimization for llm self-improvement via synthetic data. arXiv preprint arXiv:2502.05400, 2025a.
Yi Yang, Xiaoxuan He, Hongkun Pan, Xiyan Jiang, Yan Deng, Xingtao Yang, Haoyu Lu, Dacheng Yin, Fengyun Rao, Minfeng Zhu, et al. R1-onevision: Advancing generalized multimodal reasoning through cross-modal formalization. arXiv preprint arXiv:2503.10615, 2025b.
Yuhang Zang, Xiaoyi Dong, Pan Zhang, Yuhang Cao, Ziyu Liu, Shengyuan Ding, Shenxi Wu, Yubo Ma, Haodong Duan, Wenwei Zhang, et al. Internlm-xcomposer2.5-reward: A simple yet effective multi-modal reward model. arXiv preprint arXiv:2501.12368, 2025.
Weihao Zeng, Yuzhen Huang, Qian Liu, Wei Liu, Keqing He, Zejun Ma, and Junxian He. Simplerl-zoo: Investigating and taming zero reinforcement learning for open base models in the wild. arXiv preprint arXiv:2503.18892, 2025.
Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Yu Qiao, et al. Mathverse: Does your multi-modal llm truly see the diagrams in visual math problems? In European Conference on Computer Vision, pp. 169-186. Springer, 2024.
Hengguang Zhou, Xirui Li, Ruochen Wang, Minhao Cheng, Tianyi Zhou, and Cho-Jui Hsieh. R1-zero's "aha moment" in visual reasoning on a 2b non-sft model. arXiv preprint arXiv:2503.05132, 2025.
Wenwen Zhuang, Xin Huang, Xiantao Zhang, and Jin Zeng. Math-puma: Progressive upward multimodal alignment to enhance mathematical reasoning. arXiv preprint arXiv:2408.08640, 2024.
Chengke Zou, Xingang Guo, Rui Yang, Junyu Zhang, Bin Hu, and Huan Zhang. Dynamath: A dynamic visual benchmark for evaluating mathematical reasoning robustness of vision language models.
arXiv preprint arXiv:2411.00836, 2024. + +# A Data Generation + +# A.1 Prompt + +We show the prompts for captioning (Figure 8), R1 answer distillation (Figure 9), rewriting (Figure 10) and verification (Figure 11). + +# Prompt for Captioning + +You are a vision-language model generating a highly detailed caption of an image. +Summarize the environment or setting (indoor/outdoor, surroundings). +Describe visible objects, people, or structures (colors, shapes, textures, positions). +Transcribe all text verbatim. For equations, use LaTeX when appropriate but do not solve or interpret them. +If structured data (tables, charts) appears, use Markdown formatting for clarity. +Include labels, annotations, brand names, or logos, if any, otherwise don't mention them. +Note any visible expressions or emotional tone factually, without speculation. +## Maintain a logical order: from overall context to finer details. +## Provide only the caption without extra context or commentary. +## Be unbiased and faithful in your description, using natural language and Markdown only where relevant. + +Figure 8: Prompt for captioning with GPT-4-Turbo. + +# Prompt for Distillation + +You have advanced visual perception abilities and can directly analyze images as if you are looking at them. You will be provided with detailed visual descriptions, but you should interpret them as if they represent your actual visual understanding rather than text-based captions. + +Answer questions as if you are visually perceiving the scene, not reading a caption. Provide natural and confident responses about objects, relationships, and numerical or spatial reasoning. Use a descriptive, visually grounded tone, avoiding mention of text. + +Never mention that you are reading text or captions. Infer spatial relationships, numerical properties, and logical conclusions based on the perceived "image." If information is unclear, respond naturally as if there are visual limitations (e.g., 'It appears that...'). 
Caption: {caption}

Question: {question}

Figure 9: Prompt for distillation with Deepseek-R1.

# A.2 Aha-Moment Filtering

We use the following list of keywords to identify aha moments: wait, again, double-check, hmm, mistake, alternatively, check, i should confirm. All answers are matched with the logic: has_aha = any([aha in text.lower() for aha in ahas]).

# A.3 Sample Demonstration for VLAA-Thinking-SFT-126K

We show several examples from VLAA-Thinking-SFT-126K in Figure 14, Figure 15, Figure 16, Figure 17 and Figure 18.

# Prompt for Rewriting

You will receive a snippet of text that references a "description" or "caption" of an image. Your task is to produce a **nearly identical** version of that text with **minimal** changes, focusing on the following:

1. **Replace references to "description", "caption" and "rationale" with wording that references "the image".**
- For example, "The description says..." could become "The image shows..."
- "The caption suggests..." could become "The image suggests..."
- "Based on the rationale..." could become "Based on the image..."
- Make sure the replacement sounds natural but does **not** otherwise change the meaning.

2. **Preserve all line breaks, punctuation, and spacing** as much as possible, and make **no additional edits** outside of these replacements.
3. You should only output the rewritten content.

Here is the input: {input}

Figure 10: Prompt for answer rewriting with GPT-4-Turbo.

# Prompt for Verification

You are a fair evaluator.

You will be given a groundtruth and an answer from a model.

If the answer aligns with the groundtruth, output "Yes". Otherwise, output "No".

Your output should only be "Yes" or "No".

groundtruth: {gold}

answer: {pred}

Figure 11: Prompt for verification with GPT-3.5-Turbo.
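The keyword matching described in Appendix A.2 can be packaged as a small helper; the function name is our own, but the keyword list and matching logic are taken verbatim from A.2:

```python
# Aha-moment keyword filter from Appendix A.2: an answer is flagged
# if any self-reflection keyword appears in its lowercased text.
AHAS = [
    "wait", "again", "double-check", "hmm",
    "mistake", "alternatively", "check", "i should confirm",
]

def has_aha(text: str) -> bool:
    """Return True if the reasoning trace contains an aha-moment keyword."""
    lowered = text.lower()
    return any(aha in lowered for aha in AHAS)
```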
# B Details of SFT Experiments

# B.1 Training

To enhance the instruction-following ability, we append task-specific instructions (i.e., MCQ, short answer) to questions. The system prompt shown in Figure 12 is used. We use a global batch size of 128. Models are trained for 190 steps on 25K samples and 985 steps on 126K samples. All experiments are run on 8 H100 GPUs.

Interestingly, we observe loss spikes for 25K SFT training on Qwen2-VL-7B, which cause model collapse. Therefore, we rerun this setting multiple times until we obtain a normal loss curve, and use that checkpoint for evaluation.

You are VL-Thinking, a helpful assistant with excellent reasoning ability. A user asks you a question, and you should try to solve it. You should first think about the reasoning process in the mind and then provide the user with the answer. The reasoning process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., <think> reasoning process here </think> <answer> answer here </answer>.

Figure 12: System Prompt used for training and evaluation.

# B.2 Evaluation

We adopt VLMEvalKit (Duan et al., 2024) for all evaluation experiments. We set use_custom_prompt to False, following the settings of most models in the toolkit. For higher efficiency, we set max_pixels to $256 \times 32 \times 32$ and max_new_tokens to 800. We also set the system prompt to the one used in training for consistent train-test behavior. The other hyperparameters are kept at the toolkit's defaults.

We specify the split of datasets and metrics reported:

1. MathVista: The Test Mini split of MathVista; overall accuracy.
2. MathVision: The full test set of MathVision; overall accuracy.
3. MathVerse: The Test Mini split of MathVerse; accuracy of "Vision Only".
4. DynaMath: The full test set of DynaMath; overall accuracy.
5. WeMath: The Test Mini split of WeMath; "Score (Strict)".
6. LogicVista: The full test set of LogicVista; overall accuracy.
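For reference, the splits and metrics listed above can be collected into a single mapping; the dictionary below is our own bookkeeping illustration, not a VLMEvalKit configuration object:

```python
# Benchmark splits and reported metrics from Appendix B.2
# (illustrative mapping; names follow the numbered list above).
EVAL_SETUP = {
    "MathVista": {"split": "testmini", "metric": "overall accuracy"},
    "MathVision": {"split": "full test set", "metric": "overall accuracy"},
    "MathVerse": {"split": "testmini", "metric": "accuracy (Vision Only)"},
    "DynaMath": {"split": "full test set", "metric": "overall accuracy"},
    "WeMath": {"split": "testmini", "metric": "Score (Strict)"},
    "LogicVista": {"split": "full test set", "metric": "overall accuracy"},
}
```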
# C Details of GRPO Experiments

# C.1 Training

We adapt our code from the OpenRLHF framework (Hu et al., 2024). To suit our need of deploying a reward model on the same machine, we offload the reward model to CPU and move it to GPU only when performing rollouts and scoring. This design saves valuable GPU memory, which accelerates the training process.

We also perform dataset-specific inspection and find issues in several datasets. For example, although ArxivQA contains only MCQs, the answer formats include "A", "A)", "(a)", etc. In the Synthesis subset of Math PUMA, we find that some solutions contain only the value of the solved unknown variables when the questions ask for the entire function expression. We fix these issues by rule-based filtering and GPT-assisted rewriting, aiming to improve the quality of the VL-Thinking dataset.

# C.2 Evaluation

We evaluate our models with the identical setting described in Appendix B.2.

# C.3 Case Study

We present a case demonstrating the improvement of VLAA-Thinker-Qwen2.5VL-7B over its backbone in Figure 13.

Figure 13: A case from MathVerse testmini (sample index 20). Markdown is rendered for illustration purposes. Wrong reasoning paths are colored red.
![](images/c2b0dc50ac4d0abcbe175f3c8f580538ff626eb4b3b8cd2705a27b16e943f8bd.jpg)

# Question in Image

As shown in the figure, if angle $A = 36.0$ , then angle $C$ is equal to ()
Choices: A: $36^{\circ}$ B: $54^{\circ}$ C: $60^{\circ}$ D: $27^{\circ}$

# Qwen2.5VL-7B

To solve the problem, we need to use some properties of circles and tangents.

# 1. Identify the given information:

- AB is tangent to circle O at point B.
- The angle $\angle AOB = 36^{\circ}$

# 2.
Understand the properties:

- Since $AB$ is tangent to the circle at $B$ , the radius $OB$ is perpendicular to the tangent $AB$ . Therefore, $\angle OBA = 90^{\circ}$ .
- In triangle $OAB$ , the sum of the angles is $180^{\circ}$ . So, we can find $\angle OAB$ :

$\angle AOB + \angle OBA + \angle OAB = 180^{\circ}$

$36^{\circ} + 90^{\circ} + \angle OAB = 180^{\circ}$

$\angle OAB = 54^{\circ}$

# 3. Determine the measure of $\angle C$

- Since $\angle OAB$ is an inscribed angle that intercepts the arc BC, the measure of the central angle $\angle BOC$ that intercepts the same arc BC is twice the measure of the inscribed angle $\angle OAB$ :

$$
\angle BOC = 2 \times \angle OAB = 2 \times 54^{\circ} = 108^{\circ}
$$

- Now, in triangle OBC, the sum of the angles is $180^{\circ}$ . So, we can find $\angle C$ :

$\angle OBC + \angle OCB + \angle BOC = 180^{\circ}$

$90^{\circ} + \angle OCB + 108^{\circ} = 180^{\circ}$

$\angle OCB = 180^{\circ} - 198^{\circ} = 18^{\circ}$

However, since $\angle OCB$ is not directly given as an option, we need to re-evaluate the problem with the correct interpretation of the angle $\angle C$ , which is actually the angle subtended by the arc BC at the circumference, which is half the central angle:

$$
\angle C = \frac{1}{2} \times \angle BOC = \frac{1}{2} \times 108^{\circ} = 54^{\circ}
$$

Therefore, the measure of $\angle C$ is $54^{\circ}$ .

The answer is: B

# VLAA-Thinker-Qwen2.5VL-7B

#

To solve this problem, we need to use the properties of tangents and circles. Since AB is tangent to circle O at point B, the radius OB is perpendicular to the tangent AB at point B. This means that angle OBA is $90^{\circ}$ .
Given that angle A is $36^{\circ}$ , we can find angle AOB by subtracting angle A from $180^{\circ}$ (since the sum of angles in triangle AOB must be $180^{\circ}$ ):

$$
\angle AOB = 180^{\circ} - 36^{\circ} - 90^{\circ} = 54^{\circ}
$$

Since angle AOB is a central angle subtending arc BC, and angle C is an angle subtending the same arc BC, the measure of angle C is half the measure of angle AOB:

$$
\angle C = \frac{1}{2} \times 54^{\circ} = 27^{\circ}
$$

Therefore, the correct answer is D: $27^{\circ}$

#

#

# D: 27°

#

# GeoQA

Figure 14: A VL-Thinking sample from GeoQA170K.
![](images/03caa4e5e601bbd06afd1b13c53f542b2ed4f600500abbfbdc6f3c9df128a2bf.jpg)
Question: Given a semicircular figure with the diameter represented by AB and intersecting chords AD and BC at point P, where angle DPB measures 60.0 degrees, calculate the value of tan(angle ADC) under the condition that D denotes the midpoint of the arc BC. Choices: A $\frac{1}{2}$ , B $\frac{1}{3}$ , C $\frac{\sqrt{3}}{3}$ , D $\sqrt{3}$ .
- Original Answer: C

# VL-Thinking

Okay. So I'm trying to solve this geometry problem. Let me see if I can visualize it correctly. There's a semicircle with diameter AB. Points C and D are on the curved edge, and chords AC and BD intersect at point P. Below AB, there's the center O. The angle DPB is 60 degrees, and D is the midpoint of arc BC. I need to find the value of $\tan(\angle ADC)$ .

First, I should probably draw this out to get a better sense. I imagine a semicircle with $AB$ as the diameter. $O$ is the center, so $AO = OB$ . Points C and D are on the semicircle, with $D$ being the midpoint of arc BC. That means arc $BD$ is equal to arc DC. Since it's a semicircle, the total arc from $B$ to $C$ is 180 degrees, so each arc is equal to one angle. The sum of these two angles is $2\pi$ , so $AB$ is equal to $CD$ . This is the exact measure of the semicircle. So if BC is some arc, $D$ is halfway along it.
But I'm not sure about the exact measure yet.

Chords $\mathrm{AC}$ and $\mathrm{DB}$ intersect at $P$ , and angle DPB is 60 degrees. I remember that when two chords intersect, the products of the segments are equal. So $AP \cdot PC = BP \cdot PD$ . But I'm not sure if that's directly useful.

Since $D$ is the midpoint of arc BC, it might have some special properties. In a semicircle, the midpoint of an arc would be equidistant from B and C. Also, angles subtended by the same arc are equal. Maybe I can use some properties of cyclic quadrilaterals here, but since it's a semicircle, all points lie on the circumference, so ABCD is a cyclic quadrilateral.

Wait, but ABCD is a quadrilateral inscribed in a semicircle, which makes it a cyclic quadrilateral. In such cases, opposite angles sum to 180 degrees. But I'm not sure if that's applicable here because we're dealing with a circle.

Let me think about the angles. Angle DPB is 60 degrees. Since $\mathrm{P}$ is the intersection of chords AC and BD, maybe I can use the property that the angle between two chords is equal to half the sum of the measures of the intercepted arcs. So angle DPB would be half the sum of arcs DB and AC. But I'm not sure about the exact arcs involved.

Alternatively, maybe I can use triangle properties. Since $D$ is the midpoint of arc $BC$ , triangle $DBC$ might be a right triangle, since a triangle inscribed in a semicircle with the diameter as one side is right-angled. So angle $BDC$ would be $90^{\circ}$ .

Wait, that's an important point. Since D is on the semicircle, triangle BDC is right-angled at D. So angle BDC is 90 degrees. That might help in finding angle ADC.

I need to find $\tan(\angle ADC)$ . Let's denote angle ADC as $\theta$ . So $\tan(\theta) =$ opposite/adjacent. To find this, I might need to find the lengths of the sides of triangle ADC or use some trigonometric identities.

Let me consider the coordinates approach.
Let's place the semicircle on a coordinate system with center $O$ at $(0,0)$ , and $AB$ as the diameter along the $x$ -axis from $(-r,0)$ to $(r,0)$ , where $r$ is the radius. Then point $D$ , being the midpoint of arc BC, would be somewhere on the semicircle. But without knowing the exact position of $C$ , this might be tricky.

Alternatively, maybe I can use power of a point. Since $P$ is the intersection of chords $\mathrm{AC}$ and $\mathrm{BD}$ , the power of point $P$ with respect to the circle is equal for both chords. So $PA \cdot PC = PB \cdot PD$ . But without a specific choice of $P$ , that might not help directly.

Wait, maybe I can assign some variables. Let's let the radius of the semicircle be $r$ . Then $AB = 2r$ . Let me assume $r = 1$ for simplicity, so $AB = 2$ . Then $\mathrm{O}$ is at $(0,0)$ , $\mathrm{A}$ is at $(-1,0)$ , $\mathrm{B}$ is at $(1,0)$ . Let me place point C somewhere on the semicircle, say at an angle $\theta$ from the $x$ -axis. Then point D, being the midpoint of arc BC, would be at an angle $\frac{\theta}{2} + 180^{\circ}$ from the $x$ -axis, but I'm not sure.

Alternatively, since $D$ is the midpoint of arc BC, arc BD equals arc DC. So if I consider the central angles, angle BOD equals angle DOC. Since the total arc BC is some measure, say $2x$ , then arcs BD and DC are each equal to $x$ .

Wait, maybe I can use the fact that angle DPB is 60 degrees. Using the intersecting chords theorem, angle DPB is equal to half the sum of the measures of arcs DB and AC. So angle DPB $= \frac{1}{2}(\text{arc } DB + \text{arc } AC) = 60$ degrees. Therefore, arc $DB$ + arc $AC = 120^{\circ}$ .

But I'm not sure about the measure of arc AC. Alternatively, maybe I can express arc AC in terms of other arcs. Since the total semicircle is 180 degrees, arc AB is 180 degrees, and arc AC is part of that.
Alternatively, since D is the midpoint of arc BC, arc $BD$ = arc $DC$ . Let's denote arc $BD$ = arc $DC = x$ . Then arc $BC = 2x$ . Since the total semicircle is 180 degrees, arc $AB$ = 180 degrees, so arc $AC = 180 - 2x$ degrees.

Now, from the intersecting chords theorem, angle $\mathrm{DPB} = \frac{1}{2}(\text{arc } DB + \text{arc } AC) = 60$ degrees. So $\frac{1}{2} (x + (180 - 2x)) = 60$ . Simplifying, $\frac{1}{2} (180 - x) = 60$ . Multiply both sides by 2: $180 - x = 120$ . Therefore, $x = 60$ degrees.

So arc BD = arc DC = 60 degrees, and arc BC = 120 degrees. Then arc AC = 180 - 120 = 60 degrees. Now, I need to find $\tan(\angle ADC)$ . Let's consider triangle ADC. Since D is on the semicircle, angle ADC is an inscribed angle. The measure of angle ADC is half the measure of arc AC. Since arc AC is 60 degrees, angle ADC = 30 degrees.

Therefore, $\tan (\text{angle ADC}) = \tan (30\text{ degrees}) = \frac{1}{\sqrt{3}} = \frac{\sqrt{3}}{3}$ .
Wait, but let me double-check. If angle ADC is 30 degrees, then $\tan (30^{\circ})$ is indeed $\frac{\sqrt{3}}{3}$ . So the answer should be option C: $\frac{\sqrt{3}}{3}$ .

The value of tan(angle ADC) is $\frac{\sqrt{3}}{3}$

# Step-by-Step Explanation:

# 1. Understanding the Problem:

- We have a semicircle with diameter AB and center O.
- Points C and D are on the semicircle, with D being the midpoint of arc BC.
- Chords AC and BD intersect at point P, forming an angle DPB of 60 degrees.
- We need to find $\tan(\angle ADC)$ .

2. Key Properties and Theorems:
- Intersecting Chords Theorem: The measure of an angle formed by two intersecting chords is half the sum of the measures of the intercepted arcs.
- Midpoint of Arc: Since D is the midpoint of arc BC, arcs BD and DC are equal.
- Inscribed Angle Theorem: An angle inscribed in a circle is half the measure of its intercepted arc.

3.
Applying the Intersecting Chords Theorem: +- Angle DPB is formed by chords BD and AC, so \(\angle DPB = \frac{1}{2}(\text{arc } DB + \text{arc } AC)\) +- Given \(\angle DPB = 60^{\circ}\), we have +\[ 60^{\circ} = \frac{1}{2} (\text{arc } DB + \text{arc } AC) \] +\[ \text{arc } DB + \text{arc } AC = 120^{\circ} \] + +4. Determining Arc Measures: Since D is the midpoint of arc BC, arcs BD and DC are equal. Let each be x. Thus, arc BC = 2x. The total semicircle is $180^{\circ}$ , so: arc AC = 180° - arc BC = 180° - 2x. Substituting into the earlier equation: $x + (180^{\circ} - 2x) = 120^{\circ}$ , so $x = 60^{\circ}$ . Therefore, arcs BD and DC are each 60° and arc AC is 60°. + +5. Finding Angle ADC: +- Angle ADC is an inscribed angle intercepting arc AC. +- By the Inscribed Angle Theorem: + $\angle ADC = \frac{1}{2} \times \text{arc AC} = \frac{1}{2} \times 60^{\circ} = 30^{\circ}$ + +6. Calculating the Tangent: $\tan (30^{\circ}) = \frac{\sqrt{3}}{3}$ Answer: $\frac{\sqrt{3}}{3}$ (Option C) + +# Synthesis + +# Input Image + +![](images/a64478cfc978bb899db5a954ad49cef18edbb6ecc305169a7883515a6c0c57af.jpg) +Figure 15: A VL-Thinking sample from Math PUMA (subset Synthesis). + +- Question: What is the function's equation? + +- Original Answer: Step 1: Substitute the point $(\frac{\pi}{10},5)$ into the function: $5 = a\cdot \sin (-5\cdot \frac{\pi}{10})$ . Step 2: Simplify the equation: $5 = a\cdot \sin (\frac{-\pi}{2})$ . Step 3: Since $\sin (\frac{-\pi}{2}) = -1$ , we have: $5 = -a$ . Step 4: Solve for $a$ : $a = -5$ . Step 5: Substitute the value of $a$ back into the original function: $y = -5\cdot \sin (-5\cdot x)$ . Answer: $y = -5\cdot \sin (-5\cdot x)$ + +# VL-Thinking + + First, I recognize that the function given is $y = a\cdot \sin (-5x)$ , and it passes through the point $\left(\frac{\pi}{10},5\right)$ + +To find the value of $a$ , I'll substitute the coordinates of the point into the equation. 
+ +Substituting $x = \frac{\pi}{10}$ and $y = 5$ , the equation becomes: $5 = a \cdot \sin \left(-5 \cdot \frac{\pi}{10}\right)$ + +Simplifying the argument of the sine function: + +$$ +-5 \cdot \frac{\pi}{10} = -\frac{\pi}{2} +$$ + +So, the equation now is: + +$$ +5 = a \cdot \sin \left(-\frac{\pi}{2}\right) +$$ + +I know that $\sin \left(-\frac{\pi}{2}\right) = -1$ , so substituting this value in: $5 = a\cdot (-1)$ + +Solving for $a$ : + +$$ +a = -5 +$$ + +Finally, substituting $a = -5$ back into the original function gives: + +$$ +y = -5 \cdot \sin (-5x) +$$ + +To determine the equation of the function $y = a \cdot \sin(-5x)$ that passes through the point $\left( \frac{\pi}{10}, 5 \right)$ , follow these steps: + +1. Substitute the Point into the Equation: + +Plug $x = \frac{\pi}{10}$ and $y = 5$ into the equation: + +$$ +5 = a \cdot \sin \left(-5 \cdot \frac{\pi}{10}\right) +$$ + +2. Simplify the Argument of the Sine Function: + +$$ +-5 \cdot \frac{\pi}{10} = -\frac{\pi}{2} +$$ + +So, the equation becomes: + +$$ +5 = a \cdot \sin \left(-\frac{\pi}{2}\right) +$$ + +3. Evaluate the Sine Function: + +$$ +\sin \left(-\frac{\pi}{2}\right) = -1 +$$ + +Substituting this back into the equation: + +$$ +5 = a \cdot (-1) +$$ + +4. Solve for $a$ : + +$$ +a = -5 +$$ + +5. Write the Final Equation: + +Substitute $a = -5$ back into the original function: + +$$ +y = -5 \cdot \sin (-5x) +$$ + +6. Simplify the Negative Sign (Optional): + +Since $\sin (-\theta) = -\sin (\theta)$ , the equation can also be written as: + +$$ +y = 5 \cdot \sin (5x) +$$ + +Final Answer: + +$$ +y = -5 \sin (-5x) +$$ + +# CLEVR-MATH + +# Input Image + +![](images/340653a937dc0ead4560d11a4ab4f1710147c29cbf57843a3411075515c269a9.jpg) + +- Question: Subtract all balls. How many objects are left? +- Original Answer: 3 + +# VL-Thinking + +![](images/99b557fcb93935764a42d2dfd1acfee3d1750c7453441b571a838cd1874752cb.jpg) + + Okay, let's see. 
The user is asking to subtract all balls and determine how many objects are left. First, I need to figure out which objects are considered "balls" in the image. The image shows a large blue metal sphere. A sphere is a three-dimensional circle, which is essentially a ball shape. So the blue metal sphere is definitely a ball. + +Next, the surrounding objects are two small red metal cylinders and a single small brown metal cylinder in the foreground. Cylinders are not balls; they have a different shape—circular bases and straight sides. So those would not be subtracted. The question says to subtract all balls, so only the sphere (ball) needs to be removed. + +Originally, there's the sphere (1 object), two red cylinders (2 objects), and one brown cylinder (1 object). Total objects before subtraction: $1 + 2 + 1 = 4$ . Subtracting the one ball leaves 3 objects: the two red cylinders and the brown cylinder. + +Wait, but I should make sure there are no other balls mentioned. The image only refers to the sphere as a ball-like object. The cylinders are clearly not balls. So subtracting the sphere leaves three cylinders. The answer should be 3. + + + +In the scene, there is one large blue sphere (a ball-shaped object) centrally positioned. The remaining objects consist of two small red cylinders flanking it symmetrically and a single brown cylinder in the foreground. Removing the sphere leaves only the three cylindrical objects visible. + +Answer: There are 3 objects remaining. + +Figure 16: A VL-Thinking sample from CLEVR-Math. + +# ArxivQA + +# Input Image + +![](images/ac27de6313219f66011f604cca1fbb37b6ebe2995f7f8f762b75a65cfd3adbd7.jpg) +Figure 17: A VL-Thinking sample from ArxivQA. 
+ +![](images/a080b9abdd9635b279c2f9958d9e719318132e3687db69f648d63d71dfbedcce.jpg) + +![](images/7fc5a1a9301c5fb6736f949201bff0949fb47c17a570ba2e917f554066c12df9.jpg) + +![](images/9e55ad6c0189d171fe14cb8f0afa04c5b85841a7ad13a105b4a7d1bd364279e7.jpg) + +- Question: In figure (b), how does the repetition rate affect the ratio of 9th to 11th harmonic intensity for the $1030\mathrm{nm}$ wavelength? Choose from the following options A) The ratio increases as the repetition rate increases. B) The ratio decreases as the repetition rate increases. C) The ratio remains constant as the repetition rate increases. D) The ratio fluctuates unpredictably with changes in repetition rate. +- Original Answer: B + +# VL-Thinking + + + +![](images/dfb1811e4962ba180b1a579130806e4d2ebb33f66e4e12967512ea3d7b4a7610.jpg) + +Okay, so I'm looking at figure (b) here, and I need to figure out how the repetition rate affects the ratio of 9th to 11th harmonic intensity for the $1030\mathrm{nm}$ wavelength. Let me start by understanding what's in the image. + +First, the image is a scatter plot with the x-axis being the repetition rate in kHz and the y-axis being the ratio of 9th to 7th and 11th harmonic intensity on a logarithmic scale. There are two sets of data points: red dots and blue squares. The red dots are probably representing the $1030\mathrm{nm}$ wavelength since in panel (a), the red curve was labeled as $1030\mathrm{nm}$ . The blue squares would then correspond to the $515\mathrm{nm}$ wavelength. + +The question specifically asks about the $1030\mathrm{nm}$ wavelength, so I'll focus on the red dots. Looking at the scatter plot, as the repetition rate increases from left to right on the x-axis, I need to see what happens to the y-axis values for the red dots. + +From what I can see, as the repetition rate goes up, the red dots trend downward. That means the ratio of 9th to 11th harmonic intensity is decreasing. 
There's no indication of an increase or a constant ratio; instead, it's a clear downward trend. The red dots don't seem to fluctuate unpredictably either; they follow a decreasing pattern as repetition rate increases. + +So, putting it all together, the ratio decreases as the repetition rate increases for the $1030\mathrm{nm}$ wavelength. That would correspond to option B. + + + +The image shows that the ratio of 9th to 11th harmonic intensity for the $1030\mathrm{nm}$ wavelength decreases as the repetition rate increases. + +Answer: B) The ratio decreases as the repetition rate increases. + +# ALLaVA- LAION + +# Input Image + +![](images/88060b6adab1bfd2500dd401ffcb4c030d7316e98a9a3c6dcf73ab1d950d173b.jpg) +Figure 18: A VL-Thinking sample from ALLaVA-LAION. + +- Question: Considering the proximity of Preston Store to other landmarks, what could be its significance in the local area? +- Original Answer: The Preston Store's significance in the local area, as suggested by the map, could be quite central given its prominent placement and the fact that it has a dedicated Facebook page marker. It appears to be situated near major roads and commercial entities like KFC, Harvey Norman, and Dan Murphy's, which indicates it is likely a well-known establishment, possibly a retail store or a social venue, that serves as a local landmark or gathering place. The specific type of store cannot be determined from the map alone, but its central location among various retailers and close to a park suggests it may cater to a wide range of customers and possibly holds a key place in the local community. + +# VL-Thinking + + + +![](images/e394db943aede4eb0b172aeb2ddd198d9ca1fc5f239d8884dd1b66af83aecad0.jpg) + +Okay, let's see. The user is asking about the significance of Preston Store in the local area based on its proximity to other landmarks. First, I need to visualize the map shown. 
The Preston Store is labeled prominently with a red marker and a Facebook 'f' logo, so it's probably a key location. + +Looking at the nearby landmarks, there's KFC, Harvey Norman, Dan Murphy's, and Max Kirwan Mazda. Those are all big-name retailers and services. Then there's Target, Myer Northland, and Anaconda Preston. These are major stores, which suggests the area is a commercial hub. The store being near these places means it's likely in a busy shopping district, making it a central spot for locals. + +Also, there's TW Blake Park and Darebin Creek Forest Park nearby. Green areas mean that Preston Store might be a place people visit before or after outdoor activities. Maybe it's a convenience store or a community spot where people gather. The mix of retail, dining, and parks indicates it's a multifunctional area, so Preston Store could serve both daily shopping needs and act as a meeting point. The Facebook marker suggests they have an online presence, maybe promoting local events or services, enhancing community engagement. Since it's near major roads like Bell St and Murray Rd, accessibility is good, increasing its importance as a central landmark. + + + +Preston Store appears centrally positioned within a bustling commercial and recreational hub, making it a likely focal point for local activity. Its prominent placement near major retailers like Target, Myer Northland, and Harvey Norman suggests it could serve as a practical anchor for shoppers—perhaps offering essentials or services that complement these larger stores. The proximity to fast food (KFC) and leisure destinations (Darebin Creek Forest Park, TW Blake Park) positions it as a potential pitstop for families or individuals transitioning between errands, dining, and outdoor activities. The red Facebook marker implies community engagement, possibly acting as an information node or local business with social media visibility. 
Its accessibility via intersecting roads like Bell St and Murray Rd further underscores its role as a convenient landmark in the area's daily flow. \ No newline at end of file diff --git a/data/2025/2504_11xxx/2504.11468/images/03caa4e5e601bbd06afd1b13c53f542b2ed4f600500abbfbdc6f3c9df128a2bf.jpg b/data/2025/2504_11xxx/2504.11468/images/03caa4e5e601bbd06afd1b13c53f542b2ed4f600500abbfbdc6f3c9df128a2bf.jpg new file mode 100644 index 0000000000000000000000000000000000000000..c4edbc9aa791af99a9ec5a16e89102cdb99e8ed7 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/03caa4e5e601bbd06afd1b13c53f542b2ed4f600500abbfbdc6f3c9df128a2bf.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:383e9ad4ba6eff6159d7c4e0751dd3f08ab07b1a4e53348c942ab518060b5b68 +size 5092 diff --git a/data/2025/2504_11xxx/2504.11468/images/08f5d920a8030eb842974167174a31e4f6c23bca39edd9ec34586765b0d24251.jpg b/data/2025/2504_11xxx/2504.11468/images/08f5d920a8030eb842974167174a31e4f6c23bca39edd9ec34586765b0d24251.jpg new file mode 100644 index 0000000000000000000000000000000000000000..79334873dcea1db4ba980d543f62550bfbb50fa4 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/08f5d920a8030eb842974167174a31e4f6c23bca39edd9ec34586765b0d24251.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4f1c5337b5320ce12af72731578d0180c599fce1dd468a4b0042ac59fc34f9bb +size 24775 diff --git a/data/2025/2504_11xxx/2504.11468/images/0a2902b9c361de315e237c783ff063178db359c0868a18abba7b7e6f8b5d3c04.jpg b/data/2025/2504_11xxx/2504.11468/images/0a2902b9c361de315e237c783ff063178db359c0868a18abba7b7e6f8b5d3c04.jpg new file mode 100644 index 0000000000000000000000000000000000000000..1b662e6254de0521c89cb821bfa3cc071497f60e --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/0a2902b9c361de315e237c783ff063178db359c0868a18abba7b7e6f8b5d3c04.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:4846a99aa765e3b19959740b2582aeccae7acf80d24ff868116b97d7843916c7 +size 32659 diff --git a/data/2025/2504_11xxx/2504.11468/images/0f4e09494c318c7359cf1332a20efdadb11939e69093d589d0bbff0f4cfe23dd.jpg b/data/2025/2504_11xxx/2504.11468/images/0f4e09494c318c7359cf1332a20efdadb11939e69093d589d0bbff0f4cfe23dd.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a8c340fef63e471c0f2d9f46cbed096d9e6d967d --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/0f4e09494c318c7359cf1332a20efdadb11939e69093d589d0bbff0f4cfe23dd.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ab8e981050a1688d5096db78522e82cdeda51df970483181a6697c38ee096957 +size 981 diff --git a/data/2025/2504_11xxx/2504.11468/images/19f40235adb079c22c22a562dfb38d4a909739ff38f4bd7648ca83103cc54804.jpg b/data/2025/2504_11xxx/2504.11468/images/19f40235adb079c22c22a562dfb38d4a909739ff38f4bd7648ca83103cc54804.jpg new file mode 100644 index 0000000000000000000000000000000000000000..11a07957bbd89eb0608b2794de6680eb64a0898d --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/19f40235adb079c22c22a562dfb38d4a909739ff38f4bd7648ca83103cc54804.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d9fd0d1b590f77540d6c48070d6996ae4e5e4d7099246e6b6dd9e23e4823dfd1 +size 24143 diff --git a/data/2025/2504_11xxx/2504.11468/images/26a82980fa36ddddb7bc9fae55f5e05aa8597f3be6cb682495bfb84ebe497bc9.jpg b/data/2025/2504_11xxx/2504.11468/images/26a82980fa36ddddb7bc9fae55f5e05aa8597f3be6cb682495bfb84ebe497bc9.jpg new file mode 100644 index 0000000000000000000000000000000000000000..13d3742281df2b250ff86a09ba89a68494490eb8 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/26a82980fa36ddddb7bc9fae55f5e05aa8597f3be6cb682495bfb84ebe497bc9.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:02db599484ae9eae9065626033422852ff4f92d37f228bd869bf5f3b2b01f89c +size 94265 diff --git 
a/data/2025/2504_11xxx/2504.11468/images/2c445a281743847d237a6e154962aaadda8a7ceed193d6a6525739f75619501f.jpg b/data/2025/2504_11xxx/2504.11468/images/2c445a281743847d237a6e154962aaadda8a7ceed193d6a6525739f75619501f.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4785b5cdd00f290e2a771509658b63ce8573cec2 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/2c445a281743847d237a6e154962aaadda8a7ceed193d6a6525739f75619501f.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fafee6b8f95064b8e338db1edea608226accecb98ab087d1e9366bb8b1c5d5e8 +size 1835 diff --git a/data/2025/2504_11xxx/2504.11468/images/340653a937dc0ead4560d11a4ab4f1710147c29cbf57843a3411075515c269a9.jpg b/data/2025/2504_11xxx/2504.11468/images/340653a937dc0ead4560d11a4ab4f1710147c29cbf57843a3411075515c269a9.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4d7ee1a42afb25dc547eed815afe050e8c188fc1 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/340653a937dc0ead4560d11a4ab4f1710147c29cbf57843a3411075515c269a9.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8e5b67b980ba66decd7f751356da7c619f4d2dbb9ffcef1ebfbd2a030677f2d +size 6255 diff --git a/data/2025/2504_11xxx/2504.11468/images/3525a416ff60c0c03f616e180ccbfe5e048883553436ac72d26e02043a002f8b.jpg b/data/2025/2504_11xxx/2504.11468/images/3525a416ff60c0c03f616e180ccbfe5e048883553436ac72d26e02043a002f8b.jpg new file mode 100644 index 0000000000000000000000000000000000000000..17d4ce478e0f2a6b6fed5ada96d18ca42a6665fb --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/3525a416ff60c0c03f616e180ccbfe5e048883553436ac72d26e02043a002f8b.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:01f802a0cb597ce24b876473c1ab03c54a0b1dcc708ea2455daa16d18e7e636d +size 21341 diff --git a/data/2025/2504_11xxx/2504.11468/images/3608349c31ea56b6157a1e78973700e1497cf8d0f0aedbd6f16e5b2f03790f07.jpg 
b/data/2025/2504_11xxx/2504.11468/images/3608349c31ea56b6157a1e78973700e1497cf8d0f0aedbd6f16e5b2f03790f07.jpg new file mode 100644 index 0000000000000000000000000000000000000000..034a9fa721a3611153ea740fdbee59fbc37097dd --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/3608349c31ea56b6157a1e78973700e1497cf8d0f0aedbd6f16e5b2f03790f07.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:081aa1dc9e549b2148e5e207dcbe53e9cfd09cf968d3436408822e60585e82c3 +size 1426 diff --git a/data/2025/2504_11xxx/2504.11468/images/468ddfa3bda7713c6d236d74f4bd98d6d706433967e3328a69e6dbdfd153a29e.jpg b/data/2025/2504_11xxx/2504.11468/images/468ddfa3bda7713c6d236d74f4bd98d6d706433967e3328a69e6dbdfd153a29e.jpg new file mode 100644 index 0000000000000000000000000000000000000000..9ad1d51215df2b62ecd7961b8932a12044bff6e6 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/468ddfa3bda7713c6d236d74f4bd98d6d706433967e3328a69e6dbdfd153a29e.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:496ee068123cd3c28ab8fb273f0b70ab3f75d47eac9e6624bc15a42e1eb777a7 +size 21167 diff --git a/data/2025/2504_11xxx/2504.11468/images/494e2a32b86da5c55a1e0d1d8d99d6176e929684cf3b7d5cc327bbe413e37432.jpg b/data/2025/2504_11xxx/2504.11468/images/494e2a32b86da5c55a1e0d1d8d99d6176e929684cf3b7d5cc327bbe413e37432.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f15cd26c0747f0911819ba3833b0dca15e46bb84 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/494e2a32b86da5c55a1e0d1d8d99d6176e929684cf3b7d5cc327bbe413e37432.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b3b2e7de5e92b6ac60df2b6cd8bd768558f3103d24a0a8930169dca049cb35a0 +size 44569 diff --git a/data/2025/2504_11xxx/2504.11468/images/50a178852959df63a79d3208b17d5f7213c71da1a6b922abea9170a8f72718f7.jpg b/data/2025/2504_11xxx/2504.11468/images/50a178852959df63a79d3208b17d5f7213c71da1a6b922abea9170a8f72718f7.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..47f45b85299569f42b92b215472fcd4553f1ea57 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/50a178852959df63a79d3208b17d5f7213c71da1a6b922abea9170a8f72718f7.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4da0928939c99587cf76f32ab99d580c59b35978a21182144d94ab2a37ec97ab +size 23609 diff --git a/data/2025/2504_11xxx/2504.11468/images/5503476800119a465e6e9370d9dd1e8bbc73b614402e2743dda0bbac66d59b33.jpg b/data/2025/2504_11xxx/2504.11468/images/5503476800119a465e6e9370d9dd1e8bbc73b614402e2743dda0bbac66d59b33.jpg new file mode 100644 index 0000000000000000000000000000000000000000..ab3c24d8fe790b7ae5fc6ca48a06e431a8b34ba5 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/5503476800119a465e6e9370d9dd1e8bbc73b614402e2743dda0bbac66d59b33.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:42dc90b68234657e6567c6e1f70f499c50eb465b1e131dba13f26d92bf932064 +size 933 diff --git a/data/2025/2504_11xxx/2504.11468/images/6c35054fe7662d6a569d6d46a85b0f8f0c70def9f4551853bc47d457230725cb.jpg b/data/2025/2504_11xxx/2504.11468/images/6c35054fe7662d6a569d6d46a85b0f8f0c70def9f4551853bc47d457230725cb.jpg new file mode 100644 index 0000000000000000000000000000000000000000..39492b33291fb88dc086b69b5414a1cdec836af9 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/6c35054fe7662d6a569d6d46a85b0f8f0c70def9f4551853bc47d457230725cb.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2373663797961042f8ee2947df482deb87466436020eaa5c803ef55d70280762 +size 3213 diff --git a/data/2025/2504_11xxx/2504.11468/images/77f89a6f81276be573df3fa7e88e1d19bfb7df7e4db8a9fe4dfd25d29931bcc3.jpg b/data/2025/2504_11xxx/2504.11468/images/77f89a6f81276be573df3fa7e88e1d19bfb7df7e4db8a9fe4dfd25d29931bcc3.jpg new file mode 100644 index 0000000000000000000000000000000000000000..0b123b35d63bf8f00c595967e3670c5752fa3089 --- /dev/null +++ 
b/data/2025/2504_11xxx/2504.11468/images/77f89a6f81276be573df3fa7e88e1d19bfb7df7e4db8a9fe4dfd25d29931bcc3.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b99b4b0d9446a09587808d524c921538c5ef344c7f6c3803daddab7b1ea413f0 +size 55657 diff --git a/data/2025/2504_11xxx/2504.11468/images/789451f86f759b63ebf7a1797214718dee7381be2657500ce0418d0db1dc11ca.jpg b/data/2025/2504_11xxx/2504.11468/images/789451f86f759b63ebf7a1797214718dee7381be2657500ce0418d0db1dc11ca.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b9683b66978e9974d24c76345660e4aa4b0c1ecd --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/789451f86f759b63ebf7a1797214718dee7381be2657500ce0418d0db1dc11ca.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f27d369c74937d06448b59aa7c7c3b686bf86db97cbb5d6ae5bc2f06666cd268 +size 12877 diff --git a/data/2025/2504_11xxx/2504.11468/images/7f4781094f3dc898eb5b70b1c22e810b63b3bd3e9ed406195b5fbd4f8a682819.jpg b/data/2025/2504_11xxx/2504.11468/images/7f4781094f3dc898eb5b70b1c22e810b63b3bd3e9ed406195b5fbd4f8a682819.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f9249c87794f7cd333d2477b5987fd7b9f8661aa --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/7f4781094f3dc898eb5b70b1c22e810b63b3bd3e9ed406195b5fbd4f8a682819.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:19851c53c695637f43b21317631d7dd711cf20211e27b035f0cca7137c95b140 +size 1553 diff --git a/data/2025/2504_11xxx/2504.11468/images/7fc5a1a9301c5fb6736f949201bff0949fb47c17a570ba2e917f554066c12df9.jpg b/data/2025/2504_11xxx/2504.11468/images/7fc5a1a9301c5fb6736f949201bff0949fb47c17a570ba2e917f554066c12df9.jpg new file mode 100644 index 0000000000000000000000000000000000000000..88c89eee752f64fe4d15493a3f5a8b756a190e4c --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/7fc5a1a9301c5fb6736f949201bff0949fb47c17a570ba2e917f554066c12df9.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:1ed937722aa38f892f678a8005ef8e206d420a0b86bcbb4df5563937bfc03a74 +size 4426 diff --git a/data/2025/2504_11xxx/2504.11468/images/85e2de31cb56e15b9925e1ed8bad0a7db2adbbbbd97433446a6f6d123f2f4fb1.jpg b/data/2025/2504_11xxx/2504.11468/images/85e2de31cb56e15b9925e1ed8bad0a7db2adbbbbd97433446a6f6d123f2f4fb1.jpg new file mode 100644 index 0000000000000000000000000000000000000000..1b9ac10b7b4c44917e4e624f0a35924869188a14 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/85e2de31cb56e15b9925e1ed8bad0a7db2adbbbbd97433446a6f6d123f2f4fb1.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:96fb553d22d698b22c27ee7ab26349bb22c5d9f5985736a0cdba63fc0df25842 +size 23201 diff --git a/data/2025/2504_11xxx/2504.11468/images/865fe11a6d170b6bdc7b87a78b3772ab11eaf74bd327c719dd571b6e291090a2.jpg b/data/2025/2504_11xxx/2504.11468/images/865fe11a6d170b6bdc7b87a78b3772ab11eaf74bd327c719dd571b6e291090a2.jpg new file mode 100644 index 0000000000000000000000000000000000000000..febee4013bd39f8eaf191a2a3b98a9554d14b7f8 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/865fe11a6d170b6bdc7b87a78b3772ab11eaf74bd327c719dd571b6e291090a2.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:34daac884a05f39eefc4f0b7831d1200ece8a8778a5eb37f18468f8d51fc44aa +size 1092 diff --git a/data/2025/2504_11xxx/2504.11468/images/88060b6adab1bfd2500dd401ffcb4c030d7316e98a9a3c6dcf73ab1d950d173b.jpg b/data/2025/2504_11xxx/2504.11468/images/88060b6adab1bfd2500dd401ffcb4c030d7316e98a9a3c6dcf73ab1d950d173b.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b087af7df08766bb425747d909ea6816d53be9e3 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/88060b6adab1bfd2500dd401ffcb4c030d7316e98a9a3c6dcf73ab1d950d173b.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1b236ba46b0c773211bd33e2769d7344d9b70f605bd6b29df527a30533f9bcd3 +size 15310 diff --git 
a/data/2025/2504_11xxx/2504.11468/images/8fd549e979e11ed75a0743f4e7e9932ff3105f22c1a70ae7985141d2a4fca457.jpg b/data/2025/2504_11xxx/2504.11468/images/8fd549e979e11ed75a0743f4e7e9932ff3105f22c1a70ae7985141d2a4fca457.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a43de5f0bb3cc844df610bb20048b11d8cf4218b --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/8fd549e979e11ed75a0743f4e7e9932ff3105f22c1a70ae7985141d2a4fca457.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:76cad0718ad2cc361c755d5ccc96fc9f0fe1c205251b32efbfebec025442f1da +size 1254 diff --git a/data/2025/2504_11xxx/2504.11468/images/97da3b5e32451d061d6ec7d700ce9e3f4085b4ad25cedfee4a67dc020cc4a9f7.jpg b/data/2025/2504_11xxx/2504.11468/images/97da3b5e32451d061d6ec7d700ce9e3f4085b4ad25cedfee4a67dc020cc4a9f7.jpg new file mode 100644 index 0000000000000000000000000000000000000000..ac76cc72bd2f4c274e5962a4a65e016b0f6bafa6 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/97da3b5e32451d061d6ec7d700ce9e3f4085b4ad25cedfee4a67dc020cc4a9f7.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b189a9f741b55958bd3810b21eec21ec6fca52f6e4ba0420a538c00865f4874 +size 2358 diff --git a/data/2025/2504_11xxx/2504.11468/images/99b557fcb93935764a42d2dfd1acfee3d1750c7453441b571a838cd1874752cb.jpg b/data/2025/2504_11xxx/2504.11468/images/99b557fcb93935764a42d2dfd1acfee3d1750c7453441b571a838cd1874752cb.jpg new file mode 100644 index 0000000000000000000000000000000000000000..8f0f61525d640a42113dced29ea8c1740b9ef7a8 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/99b557fcb93935764a42d2dfd1acfee3d1750c7453441b571a838cd1874752cb.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d73613c8fd6b8657743babbb48eddd5f3a35c88326f10f06ce387fc2603acf2d +size 1485 diff --git a/data/2025/2504_11xxx/2504.11468/images/9e55ad6c0189d171fe14cb8f0afa04c5b85841a7ad13a105b4a7d1bd364279e7.jpg 
b/data/2025/2504_11xxx/2504.11468/images/9e55ad6c0189d171fe14cb8f0afa04c5b85841a7ad13a105b4a7d1bd364279e7.jpg new file mode 100644 index 0000000000000000000000000000000000000000..108318b6a4c66831acd64acb4e5745a5c3f7355c --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/9e55ad6c0189d171fe14cb8f0afa04c5b85841a7ad13a105b4a7d1bd364279e7.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cbf1032b185bfe7fa787ac841a8d8d165415338e948713a2c6cbf78fda8994b8 +size 5108 diff --git a/data/2025/2504_11xxx/2504.11468/images/a080b9abdd9635b279c2f9958d9e719318132e3687db69f648d63d71dfbedcce.jpg b/data/2025/2504_11xxx/2504.11468/images/a080b9abdd9635b279c2f9958d9e719318132e3687db69f648d63d71dfbedcce.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a086f3702bb671889c142f2e62b780f70de37a32 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/a080b9abdd9635b279c2f9958d9e719318132e3687db69f648d63d71dfbedcce.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ed94ce0420b95873a7cb17eed4218b312e0444a667a25bf997e9e5c106fb89fa +size 4925 diff --git a/data/2025/2504_11xxx/2504.11468/images/a64478cfc978bb899db5a954ad49cef18edbb6ecc305169a7883515a6c0c57af.jpg b/data/2025/2504_11xxx/2504.11468/images/a64478cfc978bb899db5a954ad49cef18edbb6ecc305169a7883515a6c0c57af.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f84756cabbcbbb52f5fde9fe57eeda1e20a0da66 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/a64478cfc978bb899db5a954ad49cef18edbb6ecc305169a7883515a6c0c57af.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0ef8f4aeb5863d07a5202a65b96c92d4f068603b0fdafed3eb35689433d14271 +size 5647 diff --git a/data/2025/2504_11xxx/2504.11468/images/a82b4589b35ec28c0cd172ec5f2c60dd54325fe2f1ac6d0a605b6a3550ea17dc.jpg b/data/2025/2504_11xxx/2504.11468/images/a82b4589b35ec28c0cd172ec5f2c60dd54325fe2f1ac6d0a605b6a3550ea17dc.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..20d6f61499fd3620c8cac2a2547524bbcdfce4b3 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/a82b4589b35ec28c0cd172ec5f2c60dd54325fe2f1ac6d0a605b6a3550ea17dc.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f897776db7e20173a55b9e6611d079b637c8972730140a0d0af4c79bc626179 +size 1416 diff --git a/data/2025/2504_11xxx/2504.11468/images/ac27de6313219f66011f604cca1fbb37b6ebe2995f7f8f762b75a65cfd3adbd7.jpg b/data/2025/2504_11xxx/2504.11468/images/ac27de6313219f66011f604cca1fbb37b6ebe2995f7f8f762b75a65cfd3adbd7.jpg new file mode 100644 index 0000000000000000000000000000000000000000..1ee289c14e532907f5607bb318060af9081b6f04 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/ac27de6313219f66011f604cca1fbb37b6ebe2995f7f8f762b75a65cfd3adbd7.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e1f2938116e0b46d98d9aac7cfe13ff71925970ec894222cf5410d549643cd50 +size 6373 diff --git a/data/2025/2504_11xxx/2504.11468/images/b3620181fc95c9bc02765935a760c35c491f17445af297296fb814b062cdd344.jpg b/data/2025/2504_11xxx/2504.11468/images/b3620181fc95c9bc02765935a760c35c491f17445af297296fb814b062cdd344.jpg new file mode 100644 index 0000000000000000000000000000000000000000..7642df025191bb6a1dcd66a65c6c58808cb0d18a --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/b3620181fc95c9bc02765935a760c35c491f17445af297296fb814b062cdd344.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e3e762af06f2cd91d43d014a18cc9ff3182da01bdacf5d071dc281e1ab789b98 +size 91298 diff --git a/data/2025/2504_11xxx/2504.11468/images/bc8a97c604c33c58650f3b94e86f5862f0e2c5be2e00bca00f7fcedc71d6029f.jpg b/data/2025/2504_11xxx/2504.11468/images/bc8a97c604c33c58650f3b94e86f5862f0e2c5be2e00bca00f7fcedc71d6029f.jpg new file mode 100644 index 0000000000000000000000000000000000000000..6505ad3531f9d9031cc5dd9b35640731319cd000 --- /dev/null +++ 
b/data/2025/2504_11xxx/2504.11468/images/bc8a97c604c33c58650f3b94e86f5862f0e2c5be2e00bca00f7fcedc71d6029f.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:789ce98a85faabb36426208677065ecd609c22a7f92c3672941bd99396440067 +size 21302 diff --git a/data/2025/2504_11xxx/2504.11468/images/c08d20935207087b106a4ad318993bac1745d63af6ef4cd6f9ba0f41a4bfcef2.jpg b/data/2025/2504_11xxx/2504.11468/images/c08d20935207087b106a4ad318993bac1745d63af6ef4cd6f9ba0f41a4bfcef2.jpg new file mode 100644 index 0000000000000000000000000000000000000000..82ad19c783d8b02af8df8d3d27bac4a1a35556a3 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/c08d20935207087b106a4ad318993bac1745d63af6ef4cd6f9ba0f41a4bfcef2.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8d42eb649338fd17c37cf8057debe51ebde1e52385a7515a3d9f8de2cef293f4 +size 106819 diff --git a/data/2025/2504_11xxx/2504.11468/images/c2b0dc50ac4d0abcbe175f3c8f580538ff626eb4b3b8cd2705a27b16e943f8bd.jpg b/data/2025/2504_11xxx/2504.11468/images/c2b0dc50ac4d0abcbe175f3c8f580538ff626eb4b3b8cd2705a27b16e943f8bd.jpg new file mode 100644 index 0000000000000000000000000000000000000000..b9a076ee2809eef5d16143d025850a721d81827a --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/c2b0dc50ac4d0abcbe175f3c8f580538ff626eb4b3b8cd2705a27b16e943f8bd.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:87cd258f537d2a194c16e13608f6878aaa182cc7f22ccae03bf854fa4d786a46 +size 4316 diff --git a/data/2025/2504_11xxx/2504.11468/images/c4de4059c8484b4b947ff0b61d8e8130da558a160cdccf18e7c38e18ad35751b.jpg b/data/2025/2504_11xxx/2504.11468/images/c4de4059c8484b4b947ff0b61d8e8130da558a160cdccf18e7c38e18ad35751b.jpg new file mode 100644 index 0000000000000000000000000000000000000000..104fa1babcebbb2f183e631ce862daa2d4b34b63 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/c4de4059c8484b4b947ff0b61d8e8130da558a160cdccf18e7c38e18ad35751b.jpg @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:d4bac579d6338a18cf84c533c4b7142096accebfa7123e643330e70e5207fb69 +size 1022 diff --git a/data/2025/2504_11xxx/2504.11468/images/cbb72006da68d5320a45591753be2b14afba17e28ba4dac974806df48c4a4cbc.jpg b/data/2025/2504_11xxx/2504.11468/images/cbb72006da68d5320a45591753be2b14afba17e28ba4dac974806df48c4a4cbc.jpg new file mode 100644 index 0000000000000000000000000000000000000000..8726cfd46bb809f82b3f2ba342cc3232fcbbacec --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/cbb72006da68d5320a45591753be2b14afba17e28ba4dac974806df48c4a4cbc.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3c3072013f78eb97f82083980af7c4e40110c662cb28ffc9e7d3e31f6a02eec1 +size 88265 diff --git a/data/2025/2504_11xxx/2504.11468/images/cca602ec43b18134f3175daecd42d1ae16cc43133bac303a8f4fa0ebe55ecdc5.jpg b/data/2025/2504_11xxx/2504.11468/images/cca602ec43b18134f3175daecd42d1ae16cc43133bac303a8f4fa0ebe55ecdc5.jpg new file mode 100644 index 0000000000000000000000000000000000000000..3a923e1bb7272bf0ecfc0fe75638af85459d9e27 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/cca602ec43b18134f3175daecd42d1ae16cc43133bac303a8f4fa0ebe55ecdc5.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0c19f56fcbe61ceba1db8cd3e6e15c35af08ef63634dfdc02cd4c96acb0a6442 +size 909 diff --git a/data/2025/2504_11xxx/2504.11468/images/dd59ae7b4185026c2c834d34c2cf03718420107fd48be491a7e1210a462f3cb7.jpg b/data/2025/2504_11xxx/2504.11468/images/dd59ae7b4185026c2c834d34c2cf03718420107fd48be491a7e1210a462f3cb7.jpg new file mode 100644 index 0000000000000000000000000000000000000000..20c4648fe9fdaf8c90950f311a8bf1c673ca7ed8 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/dd59ae7b4185026c2c834d34c2cf03718420107fd48be491a7e1210a462f3cb7.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fde4aa857f93d315fd40b71651385da0dddef27322742a28814f9c7379eaab2b +size 1776 diff --git 
a/data/2025/2504_11xxx/2504.11468/images/dfb1811e4962ba180b1a579130806e4d2ebb33f66e4e12967512ea3d7b4a7610.jpg b/data/2025/2504_11xxx/2504.11468/images/dfb1811e4962ba180b1a579130806e4d2ebb33f66e4e12967512ea3d7b4a7610.jpg new file mode 100644 index 0000000000000000000000000000000000000000..791a5f8c38e0d78f25f898d91605d459769ff574 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/dfb1811e4962ba180b1a579130806e4d2ebb33f66e4e12967512ea3d7b4a7610.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6527261b8e2dbd5a51322c9ecac5e9be9c3fadc66a4d2ace7965c4699519a40 +size 1484 diff --git a/data/2025/2504_11xxx/2504.11468/images/e394db943aede4eb0b172aeb2ddd198d9ca1fc5f239d8884dd1b66af83aecad0.jpg b/data/2025/2504_11xxx/2504.11468/images/e394db943aede4eb0b172aeb2ddd198d9ca1fc5f239d8884dd1b66af83aecad0.jpg new file mode 100644 index 0000000000000000000000000000000000000000..4788774370fcc3cdf51276e1b2f8ce6bd88cc300 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/e394db943aede4eb0b172aeb2ddd198d9ca1fc5f239d8884dd1b66af83aecad0.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f9684a49bfc101e2ad52e09e158a2150d880e4152f53ce23c0237ec86be2ab2f +size 1311 diff --git a/data/2025/2504_11xxx/2504.11468/images/e82da74faa97dc6987f3d1c29cb6286eb3c55cf5259359748465bbf3676e85b2.jpg b/data/2025/2504_11xxx/2504.11468/images/e82da74faa97dc6987f3d1c29cb6286eb3c55cf5259359748465bbf3676e85b2.jpg new file mode 100644 index 0000000000000000000000000000000000000000..0fa2d64e2bf8b09a54dff01cc88d028cfc52f363 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/e82da74faa97dc6987f3d1c29cb6286eb3c55cf5259359748465bbf3676e85b2.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd846482f2424e28ef02c37f22e62f56a0fb1c3f42dadec5f0453b9fc3e2f0bb +size 69443 diff --git a/data/2025/2504_11xxx/2504.11468/images/e8530a9c6da5af3e988a414742cd8587bc85f843c30bd1bceca356014eb617c9.jpg 
b/data/2025/2504_11xxx/2504.11468/images/e8530a9c6da5af3e988a414742cd8587bc85f843c30bd1bceca356014eb617c9.jpg new file mode 100644 index 0000000000000000000000000000000000000000..784f870a2bf6e15ff6c22a5fac6a76756e23b155 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/e8530a9c6da5af3e988a414742cd8587bc85f843c30bd1bceca356014eb617c9.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:834a29245724b22e3bdac9d0fd2169ed8b7a2fd25665f5f9be173a565f66d791 +size 1164 diff --git a/data/2025/2504_11xxx/2504.11468/images/e8e1403eae989cbf4a0219529fa431718b17726f5ba7bd3629a233bacd5aeb95.jpg b/data/2025/2504_11xxx/2504.11468/images/e8e1403eae989cbf4a0219529fa431718b17726f5ba7bd3629a233bacd5aeb95.jpg new file mode 100644 index 0000000000000000000000000000000000000000..a170d0b6a0f3e696bfa6f8f9b42ec7569c3c61c5 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/e8e1403eae989cbf4a0219529fa431718b17726f5ba7bd3629a233bacd5aeb95.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4ccab6a5f1f8c8ae05ae883036c6af23946f21b34b69d143655e79381aeadf1c +size 2993 diff --git a/data/2025/2504_11xxx/2504.11468/images/ee231119f8f93c50666fd2cd9ed1b81e1491b70da62821b8ecbdfbada8a0ed75.jpg b/data/2025/2504_11xxx/2504.11468/images/ee231119f8f93c50666fd2cd9ed1b81e1491b70da62821b8ecbdfbada8a0ed75.jpg new file mode 100644 index 0000000000000000000000000000000000000000..3ffcdd96f3458561caa28854a04b2915e7aecab2 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/ee231119f8f93c50666fd2cd9ed1b81e1491b70da62821b8ecbdfbada8a0ed75.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c76033a0a8658f58de931896b26a947ddc6f2c97e3767bd89c8071967fbd26f4 +size 14140 diff --git a/data/2025/2504_11xxx/2504.11468/images/f2d83561694c053e82b4b21f084a0f4b302530cd97be792b59857fb009114238.jpg b/data/2025/2504_11xxx/2504.11468/images/f2d83561694c053e82b4b21f084a0f4b302530cd97be792b59857fb009114238.jpg new file mode 100644 index 
0000000000000000000000000000000000000000..866fa74e0e8413bf4dec00d32c03ae592b2970f7 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/f2d83561694c053e82b4b21f084a0f4b302530cd97be792b59857fb009114238.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d1d9d7710858c1fcd8769d1d52b688014dc850dbb0bc07e79dffb30d84a11e8d +size 1333 diff --git a/data/2025/2504_11xxx/2504.11468/images/f30db0d64e4d6ced1aa0a277fc0c8af60ae67502949f09a48fdfe89e2eef9340.jpg b/data/2025/2504_11xxx/2504.11468/images/f30db0d64e4d6ced1aa0a277fc0c8af60ae67502949f09a48fdfe89e2eef9340.jpg new file mode 100644 index 0000000000000000000000000000000000000000..eb235be4d37f7b15d0e14d2c6b16dc23369388e0 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/f30db0d64e4d6ced1aa0a277fc0c8af60ae67502949f09a48fdfe89e2eef9340.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6194ee087d0ed986addb4f76aac25a3745861ca95bdc532adf5ce28f3efe6f83 +size 1481 diff --git a/data/2025/2504_11xxx/2504.11468/images/fbcd3c1d1d768a476f90da661ac38e3a5c8b1c548ddc87d42e4ac32f3fdba4d1.jpg b/data/2025/2504_11xxx/2504.11468/images/fbcd3c1d1d768a476f90da661ac38e3a5c8b1c548ddc87d42e4ac32f3fdba4d1.jpg new file mode 100644 index 0000000000000000000000000000000000000000..09336613727eb644295b7055cfcf72f99abf7f28 --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/images/fbcd3c1d1d768a476f90da661ac38e3a5c8b1c548ddc87d42e4ac32f3fdba4d1.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a560230194062a8d06533cbdfd52b0bb9a9096cb82b1af7b141abdd8fdbd7a02 +size 1358 diff --git a/data/2025/2504_11xxx/2504.11468/images/fe03487cf0983066d249faec0960558ec4d35cd0b8c40253ea78650b9c538dd3.jpg b/data/2025/2504_11xxx/2504.11468/images/fe03487cf0983066d249faec0960558ec4d35cd0b8c40253ea78650b9c538dd3.jpg new file mode 100644 index 0000000000000000000000000000000000000000..e8d94efba9daacb03b600951b3e2126eecb926ca --- /dev/null +++ 
b/data/2025/2504_11xxx/2504.11468/images/fe03487cf0983066d249faec0960558ec4d35cd0b8c40253ea78650b9c538dd3.jpg @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:566c68cd143a2e24d9bb92504ed1e03fe4878d7c5adb045d6596d5ec425cc0dd +size 21721 diff --git a/data/2025/2504_11xxx/2504.11468/layout.json b/data/2025/2504_11xxx/2504.11468/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..fc92a6688047c79d3ea4616615df7a5d8b7094bc --- /dev/null +++ b/data/2025/2504_11xxx/2504.11468/layout.json @@ -0,0 +1,20045 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 105, + 78, + 479, + 113 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 78, + 479, + 113 + ], + "spans": [ + { + "bbox": [ + 105, + 78, + 479, + 113 + ], + "type": "text", + "content": "SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 110, + 131, + 483, + 160 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 131, + 483, + 160 + ], + "spans": [ + { + "bbox": [ + 110, + 131, + 483, + 160 + ], + "type": "text", + "content": "Hardy Chen" + }, + { + "bbox": [ + 110, + 131, + 483, + 160 + ], + "type": "inline_equation", + "content": "^{2*}" + }, + { + "bbox": [ + 110, + 131, + 483, + 160 + ], + "type": "text", + "content": ", Haoqin Tu" + }, + { + "bbox": [ + 110, + 131, + 483, + 160 + ], + "type": "inline_equation", + "content": "^{1*}" + }, + { + "bbox": [ + 110, + 131, + 483, + 160 + ], + "type": "text", + "content": ", Fali Wang" + }, + { + "bbox": [ + 110, + 131, + 483, + 160 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 110, + 131, + 483, + 160 + ], + "type": "text", + "content": ", Hui Liu" + }, + { + "bbox": [ + 110, + 131, + 483, + 160 + ], + "type": "inline_equation", + "content": "^{4}" + }, + { + "bbox": [ + 110, + 131, + 483, + 160 + ], + "type": "text", + "content": ", 
Xianfeng Tang" + }, + { + "bbox": [ + 110, + 131, + 483, + 160 + ], + "type": "inline_equation", + "content": "^{4}" + }, + { + "bbox": [ + 110, + 131, + 483, + 160 + ], + "type": "text", + "content": ", Xinya Du" + }, + { + "bbox": [ + 110, + 131, + 483, + 160 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 110, + 131, + 483, + 160 + ], + "type": "text", + "content": ", Yuyin Zhou" + }, + { + "bbox": [ + 110, + 131, + 483, + 160 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 110, + 131, + 483, + 160 + ], + "type": "text", + "content": ", Cihang Xie" + }, + { + "bbox": [ + 110, + 131, + 483, + 160 + ], + "type": "inline_equation", + "content": "^{1}" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 111, + 161, + 425, + 187 + ], + "type": "list", + "angle": 0, + "index": 6, + "blocks": [ + { + "bbox": [ + 111, + 161, + 425, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 161, + 425, + 175 + ], + "spans": [ + { + "bbox": [ + 111, + 161, + 425, + 175 + ], + "type": "text", + "content": "1 University of California, Santa Cruz 2 University of Texas at Dallas" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 111, + 175, + 372, + 187 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 175, + 372, + 187 + ], + "spans": [ + { + "bbox": [ + 111, + 175, + 372, + 187 + ], + "type": "text", + "content": "3 The Pennsylvania State University 4 Amazon Research" + } + ] + } + ], + "index": 5 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 139, + 197, + 461, + 247 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 139, + 197, + 399, + 210 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 139, + 197, + 399, + 210 + ], + "spans": [ + { + "bbox": [ + 139, + 197, + 399, + 210 + ], + "type": "text", + "content": "Project Page: https://ucsc-vlaa.github.io/VLAA-Thinking/" + } + ] + } + ], + "index": 7 + }, + { + 
"bbox": [ + 141, + 211, + 461, + 222 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 141, + 211, + 461, + 222 + ], + "spans": [ + { + "bbox": [ + 141, + 211, + 461, + 222 + ], + "type": "text", + "content": "7B Model: https://huggingface.co/UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-7B" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 141, + 224, + 461, + 234 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 141, + 224, + 461, + 234 + ], + "spans": [ + { + "bbox": [ + 141, + 224, + 461, + 234 + ], + "type": "text", + "content": "3B Model: https://huggingface.co/UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 141, + 236, + 438, + 247 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 141, + 236, + 438, + 247 + ], + "spans": [ + { + "bbox": [ + 141, + 236, + 438, + 247 + ], + "type": "text", + "content": "Dataset: https://huggingface.co/datasets/UCSC-VLAA/VLAA-Thinkin" + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 281, + 274, + 330, + 288 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 281, + 274, + 330, + 288 + ], + "spans": [ + { + "bbox": [ + 281, + 274, + 330, + 288 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 140, + 301, + 471, + 558 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 140, + 301, + 471, + 558 + ], + "spans": [ + { + "bbox": [ + 140, + 301, + 471, + 558 + ], + "type": "text", + "content": "This work revisits the dominant supervised fine-tuning (SFT) then reinforcement learning (RL) paradigm for training Large Vision-Language Models (LVLMs), and reveals a key finding: SFT can significantly undermine subsequent RL by inducing \"pseudo reasoning paths\" imitated from expert models. 
While these paths may resemble the native reasoning paths of RL models, they often involve prolonged, hesitant, less informative steps, and incorrect reasoning. To systematically study this effect, we introduce VLAA-Thinking, a new multimodal dataset designed to support reasoning in LVLMs. Constructed via a six-step pipeline involving captioning, reasoning distillation, answer rewriting and verification, VLAA-Thinking comprises high-quality, step-by-step visual reasoning traces for SFT, along with a more challenging RL split from the same data source. Using this dataset, we conduct extensive experiments comparing SFT, RL and their combinations. Results show that while SFT helps models learn reasoning formats, it often locks aligned models into imitative, rigid reasoning modes that impede further learning. In contrast, building on the Group Relative Policy Optimization (GRPO) with a novel mixed reward module integrating both perception and cognition signals, our RL approach fosters more genuine, adaptive reasoning behavior. Notably, our model VLAA-Thinker, based on Qwen2.5VL 3B, achieves top-1 performance on Open LMM Reasoning Leaderboard1 among 4B scale LVLMs, surpassing the previous state-of-the-art by " + }, + { + "bbox": [ + 140, + 301, + 471, + 558 + ], + "type": "inline_equation", + "content": "1.8\\%" + }, + { + "bbox": [ + 140, + 301, + 471, + 558 + ], + "type": "text", + "content": ". We hope our findings provide valuable insights into developing reasoning-capable LVLMs and can inform future research in this area."
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 590, + 212, + 604 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 590, + 212, + 604 + ], + "spans": [ + { + "bbox": [ + 105, + 590, + 212, + 604 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 104, + 620, + 506, + 700 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 620, + 506, + 700 + ], + "spans": [ + { + "bbox": [ + 104, + 620, + 506, + 700 + ], + "type": "text", + "content": "Large Language Models (LLMs) with strong reasoning capability have recently gained wide attention with the emergence of OpenAI's o1/o3 and Deepseek-R1 (Guo et al., 2025; Jaech et al., 2024). A common practice to empower models with reasoning abilities comprises two steps: supervised fine-tuning (SFT) on reasoning data, followed by reinforcement learning (RL) to further boost performance. This successful paradigm has inspired efforts to extend these strengths beyond textual domains to Large Vision-Language Models (LVLMs) (Peng et al., 2025; Chen et al., 2025a; Deng et al., 2025b; Shen et al., 2025; Yang et al., 2025b)." + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 14, + 210, + 37, + 560 + ], + "type": "aside_text", + "angle": 270, + "lines": [ + { + "bbox": [ + 14, + 210, + 37, + 560 + ], + "spans": [ + { + "bbox": [ + 14, + 210, + 37, + 560 + ], + "type": "text", + "content": "arXiv:2504.11468v1 [cs.CL] 10 Apr 2025" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 119, + 710, + 203, + 721 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 119, + 710, + 203, + 721 + ], + "spans": [ + { + "bbox": [ + 119, + 710, + 203, + 721 + ], + "type": "text", + "content": "*Equal contribution." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 119, + 721, + 446, + 731 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 119, + 721, + 446, + 731 + ], + "spans": [ + { + "bbox": [ + 119, + 721, + 446, + 731 + ], + "type": "text", + "content": "1https://huggingface.co/spaces/opencompass/Open_LMM_Reasoning_Leaderboard" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "text", + "content": "1" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 106, + 61, + 504, + 208 + ], + "blocks": [ + { + "bbox": [ + 106, + 61, + 504, + 208 + ], + "lines": [ + { + "bbox": [ + 106, + 61, + 504, + 208 + ], + "spans": [ + { + "bbox": [ + 106, + 61, + 504, + 208 + ], + "type": "image", + "image_path": "26a82980fa36ddddb7bc9fae55f5e05aa8597f3be6cb682495bfb84ebe497bc9.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 219, + 506, + 280 + ], + "lines": [ + { + "bbox": [ + 104, + 219, + 506, + 280 + ], + "spans": [ + { + "bbox": [ + 104, + 219, + 506, + 280 + ], + "type": "text", + "content": "Figure 
1: Examples from LVLMs trained with different strategies for reasoning. Left: response from a model trained with SFT, showing pseudo reasoning traces and a number of pseudo self-reflective cues (i.e., aha-moments) imitated from R1. Right: response from a model trained with RL, showing native reasoning ability and authentic aha-moments that emerged from RL training. Wrong reasoning steps are colored red and aha-moments are highlighted." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 302, + 506, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 302, + 506, + 491 + ], + "spans": [ + { + "bbox": [ + 104, + 302, + 506, + 491 + ], + "type": "text", + "content": "In this work, we take a step further and examine whether the widely adopted \"SFT then RL\" paradigm similarly benefits the development of reasoning-capable LVLMs. Specifically, we ask: 1) What are the distinct effects of SFT and RL in multimodal reasoning? and 2) Is this two-stage paradigm truly necessary for reasoning in LVLMs? To systematically explore these questions, we curate VLAA-Thinking, the first comprehensive and high-quality image-text reasoning dataset explicitly designed to support both SFT and RL. Unlike prior datasets, VLAA-Thinking includes detailed, step-by-step reasoning traces derived from the R1-style \"think-then-speak\" intermediate reasoning. We construct a dedicated SFT split featuring multimodal chain-of-thought (CoT) examples suitable for visual instruction tuning, alongside a more challenging RL split curated from the same source to encourage deeper and more adaptive reasoning behaviors. To effectively transfer reasoning capabilities from text-only models to the multimodal domain, we construct our dataset through a six-stage pipeline: metadata collection, image captioning, R1-based distillation, answer rewriting, verification, and split curation.
Specifically, we input image captions and visual questions into DeepSeek-R1 to generate initial reasoning traces. These outputs are then rewritten for improved fluency and verified for correctness using a GPT-based verifier, resulting in a high-quality multimodal reasoning dataset for SFT and RL." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 495, + 506, + 639 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 495, + 506, + 639 + ], + "spans": [ + { + "bbox": [ + 104, + 495, + 506, + 639 + ], + "type": "text", + "content": "Next, we carefully ablate the role of SFT, RL and their combinations in multimodal reasoning using our VLAA-Thinking dataset. To better understand the role of SFT, we perform a detailed analysis, systematically examining the impact of SFT data type (e.g., with and without the self-reflective \"aha moments\"), dataset scale, and model capacity. To explore the potential of RL in the vision-language context, we design a novel mixed reward function within the Group Relative Policy Optimization (GRPO) (Shao et al., 2024) framework that involves both perception and cognition rewards to incentivize the model to produce well-reasoned answers. Specifically, our mixed reward signal blends 2 types of rewards with 5 types of reward functions. For rule-based questions, there are functions for digit, multiple-choice, math and bounding box outputs. For open-ended questions, we adopt a competent reward model, XComposer-2.5-RM (Zang et al., 2025), along with a reference-based reward method to score an answer. We then closely investigate the effects of different reward functions, base models, and the interaction between SFT and GRPO to further optimize reasoning capabilities."
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "content": "Our extensive experiments comparing SFT and RL reveal several noteworthy insights. First, we probe the contribution of SFT and RL in multimodal reasoning: while SFT improves performance on standard tasks over the base model, it falls short in enhancing complex reasoning. Merely imitating an expert's thinking through SFT often induces \"pseudo reasoning paths\", a superficial reasoning pattern which may contain \"pseudo aha moments\" (superficial self-reflective cues), as illustrated in Figure 1. We show that these imitated reasoning patterns can hinder genuine reasoning advancement, i.e., " + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "inline_equation", + "content": "47\\%" + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "content": " relative performance drop on 7B models. This observation is also in line with recent studies highlighting the need for" + } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 107, + 60, + 504, + 196 + ], + "blocks": [ + { + "bbox": [ + 107, + 60, + 504, + 196 + ], + "lines": [ + { + "bbox": [ + 107, + 60, + 504, + 196 + ], + "spans": [ + { + "bbox": [ + 107, + 60, + 504, + 196 + ], + "type": "image", + "image_path": "cbb72006da68d5320a45591753be2b14afba17e28ba4dac974806df48c4a4cbc.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 204, + 504, + 251 + ], + "lines": [ + { + "bbox": [ + 104, + 204, + 504, + 251 + ], + "spans": [ + { + "bbox": [ + 104, + 204, + 504, + 251 + ], + "type": "text", + "content": "Figure 2: Data generation pipeline. We first generate initial reasoning traces by feeding detailed captions and visual questions into DeepSeek-R1. These outputs are then rewritten for improved fluency and verified for correctness using a GPT-based verifier. The resulting data is split into VLAA-Thinking-SFT and VLAA-Thinking-RL." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 271, + 504, + 317 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 271, + 504, + 317 + ], + "spans": [ + { + "bbox": [ + 104, + 271, + 504, + 317 + ], + "type": "text", + "content": "feedback and exploration signals to drive advanced reasoning behaviors (Peng et al., 2025). Additionally, our ablations show that for rule-based rewards, math and multiple-choice are more beneficial than others, and that a combination of both rule-based and open-ended rewards yields the best performance." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 320, + 506, + 443 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 320, + 506, + 443 + ], + "spans": [ + { + "bbox": [ + 104, + 320, + 506, + 443 + ], + "type": "text", + "content": "While prior work suggests that SFT followed by RL in LVLMs offers the best of both worlds (Guo et al., 2025; Yang et al., 2025b; Deng et al., 2025b)—first mimicking good reasoning format, then refining via RL feedback, we find that applying SFT before GRPO hurts performance on aligned models, with an average " + }, + { + "bbox": [ + 104, + 320, + 506, + 443 + ], + "type": "inline_equation", + "content": "12.7\\%" + }, + { + "bbox": [ + 104, + 320, + 506, + 443 + ], + "type": "text", + "content": " drop, and even a smaller scale SFT leads to a similar decline. Regarding model size, larger models are not immune to the degeneration brought by SFT, as 7B models share almost the same performance drop with their smaller counterparts. Finally, examining the training procedure, we observe little correlation between response length, reward, and performance—SFT-ed models get higher initial rewards and longer responses yet underperform RL-trained ones, contrasting with the previous observation that better models usually produce longer answers with higher RL reward (Guo et al., 2025; Peng et al., 2025)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 447, + 506, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 447, + 506, + 525 + ], + "spans": [ + { + "bbox": [ + 104, + 447, + 506, + 525 + ], + "type": "text", + "content": "To summarize, while SFT helps unaligned models follow instructions, it limits exploration during RL by promoting imitative reasoning. In contrast, learning directly from reward signals yields more effective and adaptable thinking behavior. Empirically, direct RL proves superior. 
Our model, VLAA-Thinker-Qwen2.5VL-3B, achieves the top-1 performance on the Open LMM Reasoning Leaderboard among 4B-scale LVLMs, surpassing the previous state-of-the-art by " + }, + { + "bbox": [ + 104, + 447, + 506, + 525 + ], + "type": "inline_equation", + "content": "1.8\\%" + }, + { + "bbox": [ + 104, + 447, + 506, + 525 + ], + "type": "text", + "content": ". Our case study further emphasizes these gains with more concise, effective reasoning traces presented in model answers." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 550, + 306, + 565 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 550, + 306, + 565 + ], + "spans": [ + { + "bbox": [ + 105, + 550, + 306, + 565 + ], + "type": "text", + "content": "2 The VLAA-Thinking Dataset" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 581, + 506, + 660 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 581, + 506, + 660 + ], + "spans": [ + { + "bbox": [ + 104, + 581, + 506, + 660 + ], + "type": "text", + "content": "To systematically evaluate the \"SFT then RL\" paradigm for developing reasoning capabilities in LVLMs, we construct VLAA-Thinking, a dataset that consists of two parts: 1) VLAA-Thinking-SFT which captures step-by-step reasoning grounded in visual inputs for SFT, and 2) VLAA-Thinking-RL which contains challenging samples designed specifically for RL. Our data generation pipeline is designed to transfer reasoning capabilities from a powerful text-only model to the multimodal domain through a structured, multi-stage process. 
The entire pipeline, as illustrated in Figure 2, consists of six key components:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "content": "#1: Metadata Collection We collect metadata from 9 vision-language datasets featuring either closed- or open-ended questions. Specifically, we sample data containing unique images from CLEVR-Math (Lindström & Abraham, 2022), Math PUMA (Zhuang et al., 2024), ArxivQA (Li et al., 2024a), DocVQA (Mathew et al., 2021), VizWiz (Gurari et al., 2018), and ALLaVA (Chen et al., 2024a), and process them through our complete data pipeline. In addition, we directly adopt COCO and VisualGenome data from LLaVA-CoT (Xu et al.," + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 750, + 308, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 750, + 308, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 750, + 308, + 760 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 108, + 60, + 502, + 224 + ], + "blocks": [ + { + "bbox": [ + 108, + 60, + 502, + 224 + ], + "lines": [ + { + "bbox": [ + 108, + 60, + 502, + 224 + ], + "spans": [ + { + "bbox": [ + 108, + 60, + 502, + 224 + ], + "type": "table", + "html": "
NameData Type#Ori.#Pipeline#Final SFT#Final RL
Collected from Distilling R1
CLEVR-MathClosed-end35,00028,0185,9232,000
GeoQA170KClosed-end---6,499
Math PUMAClosed-end30,00026,67219,2586,696
ArxivQAClosed-end54,39951,34834,6041,000
DocVQAClosed-end10,1948,2064,8971,000
VizWizClosed-end20,5236,5284,2661,000
ALLaVA-LAIONOpen-end47,06618,12310,4963,000
Collected from LLaVA-CoT
COCOClosed-end3,0003,0008,7272,000
VisualGenomeClosed-end3,0003,00038,2422,000
TotalClosed- & Open-end203,182144,895126,41325,195
", + "image_path": "b3620181fc95c9bc02765935a760c35c491f17445af297296fb814b062cdd344.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 232, + 504, + 277 + ], + "lines": [ + { + "bbox": [ + 104, + 232, + 504, + 277 + ], + "spans": [ + { + "bbox": [ + 104, + 232, + 504, + 277 + ], + "type": "text", + "content": "Table 1: Data statistics of VLAA-Thinking. We present the original volume of metadata (#Ori.), the data size after the distillation pipeline (#Pipeline), the size of sampled examples for SFT (#Final SFT) and RL (#Final RL), respectively. Note that we only use GeoQA170K with verifiable answers for the RL split." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 104, + 296, + 504, + 319 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 296, + 504, + 319 + ], + "spans": [ + { + "bbox": [ + 104, + 296, + 504, + 319 + ], + "type": "text", + "content": "2024). An exception is GeoQA170K (Gao et al., 2023), which we include only in the RL split due to persistent hallucination issues during captioning. Detailed statistics are in Table 1." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 324, + 506, + 425 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 324, + 506, + 425 + ], + "spans": [ + { + "bbox": [ + 104, + 324, + 506, + 425 + ], + "type": "text", + "content": "#2: Visual Input and Additional Information Each sample begins with an image, question, and its corresponding answer. To bridge the gap between the visual modality and language reasoning, we resort to GPT-4o to generate a detailed image caption describing the content in structured and semantically rich language (detailed prompts in Appendix A.1). During this process, we take full advantage of the provided knowledge in the data beyond just the GPT captions. 
In detail, we provide the following dataset-specific information: (1) CLEVR-Math: Instructions for synthesizing the image from CLEVR (Johnson et al., 2017); (2) Math PUMA: Textual descriptions of the math problems in the image from the dataset itself; (3) ALLaVA-LAION: Fine-grained and verified GPT-4V captions from the original dataset." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 428, + 506, + 485 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 428, + 506, + 485 + ], + "spans": [ + { + "bbox": [ + 104, + 428, + 506, + 485 + ], + "type": "text", + "content": "#3: Reasoning Answer Distillation We utilize a strong text-only reasoning model, DeepSeek-R1, to generate thinking rationales and final answers. The model is provided with the image caption, the visual question, and additional information from certain datasets. It responds in a structured reasoning format, enclosed between <think> and </think> tags, that contains a sequence of logical steps leading to the final answer." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 489, + 506, + 556 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 489, + 506, + 556 + ], + "spans": [ + { + "bbox": [ + 104, + 489, + 506, + 556 + ], + "type": "text", + "content": "#4: Answer and Rewriting To enhance consistency and eliminate modality-specific artifacts, the raw reasoning answers generated by R1 are passed through a rewriting module (i.e., GPT-3.5-turbo (Brown et al., 2020) in our experiment). This module removes unnecessary phrases (e.g., references to \"caption\") and ensures the answer adheres to a clean, instruction-following format based on the image. We further filter out samples with a sentence length gap larger than 15 words to ensure minimal modifications in this process."
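The length-gap filter in step #4 can be sketched as below. This is a minimal illustration under one reading of the text (the gap taken as a word-count difference between the original and rewritten answers); the 15-word threshold comes from the paper, while the function names are ours:

```python
def length_gap_ok(original: str, rewritten: str, max_gap: int = 15) -> bool:
    """Keep a sample only if rewriting changed its length by at most
    `max_gap` words, so the rewriting step stays a light-touch edit."""
    gap = abs(len(original.split()) - len(rewritten.split()))
    return gap <= max_gap

def filter_rewritten(pairs):
    """`pairs` is an iterable of (original_answer, rewritten_answer) tuples;
    returns only the pairs whose rewrite passed the length-gap check."""
    return [(o, r) for o, r in pairs if length_gap_ok(o, r)]
```

With this rule, a rewrite that adds or removes more than 15 words is treated as an over-aggressive modification and discarded.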
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 561, + 506, + 617 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 561, + 506, + 617 + ], + "spans": [ + { + "bbox": [ + 104, + 561, + 506, + 617 + ], + "type": "text", + "content": "#5: Automated Verification To assess whether the generated reasoning answers are correct with respect to the groundtruth answer, we implement an automated verifier. This verifier compares the rewritten reasoning answer to the groundtruth of the visual question, determining whether the outputs are correct or incorrect. Only the examples that are verified as correct are retained as the final training data." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 621, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 621, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 621, + 506, + 733 + ], + "type": "text", + "content": "#6: Curating Splits for SFT and RL The last step of our data generation pipeline is to curate two non-overlapping training sets for SFT and RL, respectively. Inspired by Chu et al. (2025), which finds that RL is particularly effective in encouraging deeper reasoning on challenging cases, we aim to select more challenging samples for the RL split. To achieve this, we propose using the presence of self-reflective cues (i.e., the \"aha moments\") in the distilled answers as an indicator of a sample's difficulty level (details are in Appendix A.2). For the SFT split, we exclude samples with \"aha moments\", as such samples may be too complex to fully imitate through finetuning. On the other hand, the harder examples with \"aha moments\" form the RL split, on which reward-driven learning may be better suited to elicit meaningful reflection."
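The split-curation rule in step #6 can be sketched as follows. The cue list here is purely illustrative (the paper's actual aha-moment detection procedure is in its Appendix A.2); only the routing logic, aha-moment samples to RL and the rest to SFT, reflects the text:

```python
# Illustrative self-reflective cue phrases; NOT the paper's actual detector.
AHA_CUES = ("wait,", "let me double-check", "on second thought", "let me reconsider")

def has_aha_moment(reasoning: str) -> bool:
    """Heuristic check for self-reflective cues in a distilled rationale."""
    text = reasoning.lower()
    return any(cue in text for cue in AHA_CUES)

def curate_splits(samples):
    """Route each {'reasoning': ...} sample into two non-overlapping sets:
    aha-moment (harder) examples go to RL, the rest to SFT."""
    sft, rl = [], []
    for s in samples:
        (rl if has_aha_moment(s["reasoning"]) else sft).append(s)
    return sft, rl
```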
+ } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 506, + 118 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 506, + 118 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 506, + 118 + ], + "type": "text", + "content": "Following these steps, our dataset adheres to the format {image, question, reasoning, answer}, with reasoning and answer generated by DeepSeek-R1. We construct a high-quality multimodal reasoning dataset with 126,413 samples for SFT and 25,195 samples for RL." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 140, + 505, + 158 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 140, + 505, + 158 + ], + "spans": [ + { + "bbox": [ + 104, + 140, + 505, + 158 + ], + "type": "text", + "content": "3 Investigating The Role of SFT for Multimodal Reasoning" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 163, + 506, + 297 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 163, + 506, + 297 + ], + "spans": [ + { + "bbox": [ + 104, + 163, + 506, + 297 + ], + "type": "text", + "content": "SFT has become the de-facto approach for training LLMs. Recent studies aim to extend the strengths of SFT to empower LVLMs with reasoning abilities by training on specially formatted data. 
Unlike prior methods that incorporate standalone textual descriptions of images (Xu et al., 2024), this direct strategy enables the model to develop grammatically coherent reasoning abilities, allowing it to \"think before speaking.\" In recent vision-language reasoning systems, there is a notable trend of complementing or even replacing SFT with RL to enhance complex reasoning abilities (Peng et al., 2025; Deng et al., 2025b). We follow this line and take it further by probing the underlying cause of this shift. Our findings suggest that the self-reflective thinking (\"aha moments\") produced by the SFT process is overloaded with excessive and irrelevant reasoning, becoming what we call \"pseudo aha moments\", which ultimately hurts performance. In this section, we explore 1) how models perform when SFT-ed on data with aha moments and 2) the effect of SFT data size on model performance." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 308, + 234, + 323 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 308, + 234, + 323 + ], + "spans": [ + { + "bbox": [ + 105, + 308, + 234, + 323 + ], + "type": "text", + "content": "3.1 Experiment Setup" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 334, + 506, + 422 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 334, + 506, + 422 + ], + "spans": [ + { + "bbox": [ + 104, + 334, + 506, + 422 + ], + "type": "text", + "content": "To investigate the effect of SFT training with aha-moments, we collect the distilled VQA pairs whose distilled answers contain aha-moments, totaling 55K samples. To study the effect of SFT with different sizes of training sets, we use perplexity (PPL) filtering to obtain a smaller SFT dataset. Specifically, we compute the PPL score of each answer in VLAA-Thinking-SFT-126K using Qwen2-VL-2B and Qwen2.5-VL-3B, and sort all samples by their average PPL scores over the two models.
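The PPL-based selection can be sketched as below, with each scoring model abstracted as a callable returning the answer's mean negative log-likelihood per token. This interface is a hypothetical stand-in for the actual LVLM forward passes, not the paper's code:

```python
import math

def avg_ppl(sample, models):
    """Average perplexity of a sample's answer under several scorers.
    Each scorer maps text -> mean negative log-likelihood per token
    (PPL = exp(NLL)); `models` stands in for the two Qwen scorers."""
    return sum(math.exp(m(sample["answer"])) for m in models) / len(models)

def keep_hardest(samples, models, k):
    """Sort by average PPL and keep the k highest-PPL (hardest) samples."""
    return sorted(samples, key=lambda s: avg_ppl(s, models), reverse=True)[:k]
```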
We keep the samples with high PPLs to obtain a total of 25K SFT samples, as these harder examples push models to learn more effectively and efficiently (Ankner et al., 2024; Li et al., 2024b)." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 429, + 507, + 520 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 429, + 507, + 520 + ], + "spans": [ + { + "bbox": [ + 104, + 429, + 507, + 520 + ], + "type": "text", + "content": "We select four models for training: Qwen2VL (2B and 7B)2, Qwen2.5VL (3B and 7B). Each model is trained with a batch size of 128 and its vision encoder frozen. We evaluate model performance with VLMEvalKit (Duan et al., 2024) on the Open LMM Reasoning Leaderboard, which comprises 6 challenging math reasoning benchmarks: MathVista (Lu et al., 2024), MathVision (Wang et al., 2024b), MathVerse (Zhang et al., 2024), DynaMath (Zou et al., 2024), WeMath (Qiao et al., 2024), and LogicVista (Xiao et al., 2024). We present the relative performance drop (%) of different models in Figure 3. Detailed training and evaluation setups are in Appendix B." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 531, + 185, + 547 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 531, + 185, + 547 + ], + "spans": [ + { + "bbox": [ + 105, + 531, + 185, + 547 + ], + "type": "text", + "content": "3.2 Findings" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 556, + 355, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 556, + 355, + 712 + ], + "spans": [ + { + "bbox": [ + 104, + 556, + 355, + 712 + ], + "type": "text", + "content": "SFT with Aha Moments Degrades Performance. We present results for the Qwen-2.5-VL-3B model trained under three different settings using our SFT data in Table 2.
Somewhat unexpectedly, the model fine-tuned on 55K examples containing the aha moment performs significantly worse than the base model, with an average drop of " + }, + { + "bbox": [ + 104, + 556, + 355, + 712 + ], + "type": "inline_equation", + "content": "10.5\\%" + }, + { + "bbox": [ + 104, + 556, + 355, + 712 + ], + "type": "text", + "content": ". This suggests that chasing the aha moment through SFT is unreliable, as SFT merely teaches the model to mimic rather than to generalize genuine self-reflective reasoning. Additionally, the table shows evidence that straightforward SFT using multimodal reasoning data also degrades performance, e.g., we observe an average drop of " + }, + { + "bbox": [ + 104, + 556, + 355, + 712 + ], + "type": "inline_equation", + "content": "10.2\\%" + }, + { + "bbox": [ + 104, + 556, + 355, + 712 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 556, + 355, + 712 + ], + "type": "inline_equation", + "content": "19.1\\%" + }, + { + "bbox": [ + 104, + 556, + 355, + 712 + ], + "type": "text", + "content": " when fine-tuning on 25K and 126K samples, respectively." + } + ] + } + ], + "index": 8 + }, + { + "type": "table", + "bbox": [ + 374, + 561, + 493, + 632 + ], + "blocks": [ + { + "bbox": [ + 374, + 561, + 493, + 632 + ], + "lines": [ + { + "bbox": [ + 374, + 561, + 493, + 632 + ], + "spans": [ + { + "bbox": [ + 374, + 561, + 493, + 632 + ], + "type": "table", + "html": "
<tr><td>Model</td><td>Avg.</td></tr>
<tr><td>Qwen2.5-VL-3B</td><td>31.8</td></tr>
<tr><td>w/ aha-55K</td><td>21.3</td></tr>
<tr><td>w/ 25K</td><td>21.6</td></tr>
<tr><td>w/ 126K</td><td>12.7</td></tr>
", + "image_path": "ee231119f8f93c50666fd2cd9ed1b81e1491b70da62821b8ecbdfbada8a0ed75.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 359, + 639, + 506, + 700 + ], + "lines": [ + { + "bbox": [ + 359, + 639, + 506, + 700 + ], + "spans": [ + { + "bbox": [ + 359, + 639, + 506, + 700 + ], + "type": "text", + "content": "Table 2: Average performance over 6 reasoning benchmarks of Qwen-2.5-VL-3B SFT-ed on different sizes of SFT data and on data containing only examples with aha moment (aha-55K)." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 118, + 719, + 459, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 118, + 719, + 459, + 732 + ], + "spans": [ + { + "bbox": [ + 118, + 719, + 459, + 732 + ], + "type": "text", + "content": "2In this work, Qwen2VL-2B and Qwen2VL-7B refer to the instruction-tuned versions." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 108, + 53, + 309, + 162 + ], + "blocks": [ + { + "bbox": [ + 108, + 53, + 309, + 162 + ], + "lines": [ + { + "bbox": [ + 108, + 53, + 309, + 162 + ], + "spans": [ + { + "bbox": [ + 108, + 53, + 309, + 162 + ], + "type": "image", + "image_path": "50a178852959df63a79d3208b17d5f7213c71da1a6b922abea9170a8f72718f7.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 309, + 53, + 501, + 162 + ], + "blocks": [ + { + "bbox": [ + 309, + 53, + 501, + 162 + ], + "lines": [ + { + "bbox": [ + 309, + 53, + 501, + 162 + ], + "spans": [ + { + "bbox": [ + 309, + 53, + 501, + 162 + ], + "type": "image", + "image_path": "bc8a97c604c33c58650f3b94e86f5862f0e2c5be2e00bca00f7fcedc71d6029f.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 108, + 165, + 309, + 274 + ], + "blocks": [ + { + "bbox": [ + 108, + 165, + 309, + 274 + ], + "lines": [ + { + "bbox": [ + 108, + 165, + 309, + 274 + ], + "spans": [ + { + "bbox": [ + 108, + 165, + 309, + 274 + ], + "type": "image", + "image_path": "08f5d920a8030eb842974167174a31e4f6c23bca39edd9ec34586765b0d24251.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 285, + 504, + 310 + ], + "lines": [ + { + "bbox": [ + 104, + 285, + 504, + 310 + ], + "spans": [ + { + "bbox": [ + 104, + 285, + 504, + 310 + ], + "type": "text", + "content": "Figure 3: Delta percentage performance change of different models 
trained with supervised fine-tuning (SFT) only." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 310, + 165, + 501, + 274 + ], + "blocks": [ + { + "bbox": [ + 310, + 165, + 501, + 274 + ], + "lines": [ + { + "bbox": [ + 310, + 165, + 501, + 274 + ], + "spans": [ + { + "bbox": [ + 310, + 165, + 501, + 274 + ], + "type": "image", + "image_path": "468ddfa3bda7713c6d236d74f4bd98d6d706433967e3328a69e6dbdfd153a29e.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 331, + 506, + 465 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 331, + 506, + 465 + ], + "spans": [ + { + "bbox": [ + 104, + 331, + 506, + 465 + ], + "type": "text", + "content": "More SFT Data, Worse Performance. Counterintuitively, even a five-fold increase in the supervised dataset (from 25K to 126K instances) often fails to improve performance and in most cases actually harms it. Models trained with 126K SFT samples suffer an average relative performance drop of over " + }, + { + "bbox": [ + 104, + 331, + 506, + 465 + ], + "type": "inline_equation", + "content": "14\\%" + }, + { + "bbox": [ + 104, + 331, + 506, + 465 + ], + "type": "text", + "content": " compared to their 25K-trained counterparts across all model and task settings (e.g., 25K: " + }, + { + "bbox": [ + 104, + 331, + 506, + 465 + ], + "type": "inline_equation", + "content": "32.2\\%" + }, + { + "bbox": [ + 104, + 331, + 506, + 465 + ], + "type": "text", + "content": " vs. 126K: " + }, + { + "bbox": [ + 104, + 331, + 506, + 465 + ], + "type": "inline_equation", + "content": "47.0\\%" + }, + { + "bbox": [ + 104, + 331, + 506, + 465 + ], + "type": "text", + "content": ").
This degradation is particularly evident on complex datasets such as WeMath and DynaMath, where the relative decrease reaches as high as " + }, + { + "bbox": [ + 104, + 331, + 506, + 465 + ], + "type": "inline_equation", + "content": "97.9\\%" + }, + { + "bbox": [ + 104, + 331, + 506, + 465 + ], + "type": "text", + "content": " over Qwen2.5-VL models on average. Even on mid-difficulty benchmarks like MathVision and MathVerse (i.e., where model performance is relatively higher), the 126K SFT models underperform, with an average drop of " + }, + { + "bbox": [ + 104, + 331, + 506, + 465 + ], + "type": "inline_equation", + "content": "28.6\\%" + }, + { + "bbox": [ + 104, + 331, + 506, + 465 + ], + "type": "text", + "content": " compared to the untrained model, averaged over 4 models. These results suggest that simply scaling up SFT data does not boost generalizable reasoning skills of LVLMs, and may instead suppress the model's capacity on various reasoning tasks." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 488, + 506, + 577 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 488, + 506, + 577 + ], + "spans": [ + { + "bbox": [ + 104, + 488, + 506, + 577 + ], + "type": "text", + "content": "Larger Models Are Not Immune to SFT Degeneration. Contrary to expectations, scaling up model size does not mitigate the adverse effects of excessive SFT: under heavier SFT, larger models exhibit pronounced drops on the most challenging evaluations. The larger 7B models fine-tuned on 126K examples experience drops nearly identical in magnitude to those of their smaller 2B and 3B counterparts: " + }, + { + "bbox": [ + 104, + 488, + 506, + 577 + ], + "type": "inline_equation", + "content": "47.2\\%" + }, + { + "bbox": [ + 104, + 488, + 506, + 577 + ], + "type": "text", + "content": " for smaller models vs. 
" + }, + { + "bbox": [ + 104, + 488, + 506, + 577 + ], + "type": "inline_equation", + "content": "45.4\\%" + }, + { + "bbox": [ + 104, + 488, + 506, + 577 + ], + "type": "text", + "content": " for larger models compared with base models. Notably, despite the strong performance of Qwen2.5-VL-7B model (e.g., " + }, + { + "bbox": [ + 104, + 488, + 506, + 577 + ], + "type": "inline_equation", + "content": "68.1\\%" + }, + { + "bbox": [ + 104, + 488, + 506, + 577 + ], + "type": "text", + "content": " on MathVista), it also suffers an average decline of " + }, + { + "bbox": [ + 104, + 488, + 506, + 577 + ], + "type": "inline_equation", + "content": "52.5\\%" + }, + { + "bbox": [ + 104, + 488, + 506, + 577 + ], + "type": "text", + "content": " on these reasoning tasks when SFT-ed with 126K data." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 581, + 506, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 581, + 506, + 628 + ], + "spans": [ + { + "bbox": [ + 104, + 581, + 506, + 628 + ], + "type": "text", + "content": "These findings highlight the limitations of SFT as a tool for enhancing multimodal reasoning. While it may be suitable for learning reasoning formats, it falls short of the expectations for fostering inherent self-reflection. Rather than simply scaling supervision data, our results suggest for a shift toward more advanced training methods like RL." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 643, + 496, + 661 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 643, + 496, + 661 + ], + "spans": [ + { + "bbox": [ + 104, + 643, + 496, + 661 + ], + "type": "text", + "content": "4 Improving Multimodal Reasoning with Mixed Rewards" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 676, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 676, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 676, + 506, + 733 + ], + "type": "text", + "content": "The previous section shows that SFT is insufficient to transfer R1's ability to LVLMs on vision-language tasks. Therefore, it is crucial to seek other post-training methods to elicit the reasoning ability of LVLMs. Since reinforcement learning (RL) is effective in enhancing reasoning ability (Yang et al., 2025a; Kirk et al., 2023), and GRPO has recently been proven more effective and efficient on textual math reasoning tasks (Shao et al., 2024; Jahn et al.,
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 108, + 83, + 500, + 208 + ], + "blocks": [ + { + "bbox": [ + 108, + 83, + 500, + 208 + ], + "lines": [ + { + "bbox": [ + 108, + 83, + 500, + 208 + ], + "spans": [ + { + "bbox": [ + 108, + 83, + 500, + 208 + ], + "type": "image", + "image_path": "e82da74faa97dc6987f3d1c29cb6286eb3c55cf5259359748465bbf3676e85b2.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 219, + 504, + 253 + ], + "lines": [ + { + "bbox": [ + 104, + 219, + 504, + 253 + ], + "spans": [ + { + "bbox": [ + 104, + 219, + 504, + 253 + ], + "type": "text", + "content": "Figure 4: The proposed Mixed Reward Module for GRPO training, comprising 2 reward formats (rule-based and open-ended) and 5 types of verifiable rewards (digit, MCQ, math, IoU and general reasoning)." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 276, + 504, + 300 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 276, + 504, + 300 + ], + "spans": [ + { + "bbox": [ + 104, + 276, + 504, + 300 + ], + "type": "text", + "content": "2025) than other methods like PPO (Schulman et al., 2017), we are motivated to apply GRPO training for vision-language reasoning tasks."
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 305, + 504, + 331 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 305, + 504, + 331 + ], + "spans": [ + { + "bbox": [ + 104, + 305, + 504, + 331 + ], + "type": "text", + "content": "Mathematically, let " + }, + { + "bbox": [ + 104, + 305, + 504, + 331 + ], + "type": "inline_equation", + "content": "q" + }, + { + "bbox": [ + 104, + 305, + 504, + 331 + ], + "type": "text", + "content": " be a query and " + }, + { + "bbox": [ + 104, + 305, + 504, + 331 + ], + "type": "inline_equation", + "content": "\\{o_i\\}_{i=1}^G" + }, + { + "bbox": [ + 104, + 305, + 504, + 331 + ], + "type": "text", + "content": " be a group of " + }, + { + "bbox": [ + 104, + 305, + 504, + 331 + ], + "type": "inline_equation", + "content": "G" + }, + { + "bbox": [ + 104, + 305, + 504, + 331 + ], + "type": "text", + "content": " sampled outputs from the old policy model " + }, + { + "bbox": [ + 104, + 305, + 504, + 331 + ], + "type": "inline_equation", + "content": "\\pi_{old}" + }, + { + "bbox": [ + 104, + 305, + 504, + 331 + ], + "type": "text", + "content": ", GRPO maximizes the following objective:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 335, + 501, + 363 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 335, + 501, + 363 + ], + "spans": [ + { + "bbox": [ + 104, + 335, + 501, + 363 + ], + "type": "interline_equation", + "content": "\\mathcal{J}_{\\mathrm{GRPO}}(\\theta) = \\mathbb{E}_{q, \\{o_i\\} \\sim \\pi_{\\theta_{\\mathrm{old}}}} \\left[ \\frac{1}{G} \\sum_{i=1}^{G} \\frac{1}{|o_i|} \\sum_{t=1}^{|o_i|} \\min \\left( r_t(\\theta) \\hat{A}_{i,t}, \\operatorname{clip}\\left(r_t(\\theta), 1-\\epsilon, 1+\\epsilon\\right) \\hat{A}_{i,t} \\right) \\right] - \\beta D_{\\mathrm{KL}}\\left(\\pi_{\\theta} \\| \\pi_{\\mathrm{ref}}\\right)", + "image_path": 
"789451f86f759b63ebf7a1797214718dee7381be2657500ce0418d0db1dc11ca.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 363, + 504, + 388 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 363, + 504, + 388 + ], + "spans": [ + { + "bbox": [ + 104, + 363, + 504, + 388 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 104, + 363, + 504, + 388 + ], + "type": "inline_equation", + "content": "\\hat{A}_{i,t}" + }, + { + "bbox": [ + 104, + 363, + 504, + 388 + ], + "type": "text", + "content": " is the estimated advantage, " + }, + { + "bbox": [ + 104, + 363, + 504, + 388 + ], + "type": "inline_equation", + "content": "\\beta" + }, + { + "bbox": [ + 104, + 363, + 504, + 388 + ], + "type": "text", + "content": " is the KL penalty coefficient and " + }, + { + "bbox": [ + 104, + 363, + 504, + 388 + ], + "type": "inline_equation", + "content": "\\pi_{\\theta}, \\pi_{\\theta_{\\mathrm{old}}}, \\pi_{\\mathrm{ref}}" + }, + { + "bbox": [ + 104, + 363, + 504, + 388 + ], + "type": "text", + "content": " are current, old, and reference policies, respectively." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 415, + 282, + 427 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 415, + 282, + 427 + ], + "spans": [ + { + "bbox": [ + 104, + 415, + 282, + 427 + ], + "type": "text", + "content": "4.1 GRPO with Mixed Reward" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 441, + 505, + 519 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 441, + 505, + 519 + ], + "spans": [ + { + "bbox": [ + 104, + 441, + 505, + 519 + ], + "type": "text", + "content": "To better adapt GRPO to multimodal reasoning, in addition to adopting the rule-based reward similar to the textual GRPO training, it is necessary to consider additional characteristics introduced by the vision modality. 
Inspired by Fu et al. (2024), which benchmarks LVLMs by perception and cognition (reasoning), we propose a mixed reward framework for GRPO training, as illustrated in Figure 4. The reward system comprises five types of verifiable rewards in two formats, encompassing both visual perception and visual reasoning tasks." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 523, + 505, + 626 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 523, + 505, + 626 + ], + "spans": [ + { + "bbox": [ + 104, + 523, + 505, + 626 + ], + "type": "text", + "content": "Rule-Based Reward There are 4 types of rule-based rewards: digit matching, option letter matching, math expression matching, and Intersection over Union (IoU) for bounding boxes. For digit matching, the model is asked to answer counting questions from CLEVR-Math whose groundtruth is a single digit. For option letter matching, the model is required to answer an MCQ. For math expression matching, the model is asked to solve a math question, such as finding a function expression or the volume of a cone, and output its answer in LaTeX format. We use the Math Verify3 package to check for correctness. For bounding boxes, the model is prompted to output the bounding box coordinates of an object in the image, and an IoU score (ranging from 0 to 1) is computed as the reward."
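The rule-based rewards above (except math-expression matching, which the text delegates to the Math-Verify package) can be sketched as follows; the function names and exact matching details are ours, for illustration only:

```python
def digit_reward(pred: str, gt: str) -> float:
    """Counting questions: exact match against the single-digit groundtruth."""
    return 1.0 if pred.strip() == gt.strip() else 0.0

def mcq_reward(pred: str, gt: str) -> float:
    """MCQ: compare the predicted option letter, case-insensitively."""
    return 1.0 if pred.strip().upper()[:1] == gt.strip().upper() else 0.0

def iou_reward(box_a, box_b) -> float:
    """Grounding: Intersection over Union of two (x1, y1, x2, y2) boxes,
    used directly as a reward in [0, 1]."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```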
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 630, + 505, + 708 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 630, + 505, + 708 + ], + "spans": [ + { + "bbox": [ + 104, + 630, + 505, + 708 + ], + "type": "text", + "content": "Open-ended Reward We leverage InternLM-XComposer2.5-Reward (Zang et al., 2025) as the scorer, denoted as " + }, + { + "bbox": [ + 104, + 630, + 505, + 708 + ], + "type": "inline_equation", + "content": "S_{\\theta}(\\cdot)" + }, + { + "bbox": [ + 104, + 630, + 505, + 708 + ], + "type": "text", + "content": ", which takes an image and a QA pair as input, and outputs a reward score. Following Muhtar et al. (2025), the reward for a sampled response " + }, + { + "bbox": [ + 104, + 630, + 505, + 708 + ], + "type": "inline_equation", + "content": "\\hat{y}" + }, + { + "bbox": [ + 104, + 630, + 505, + 708 + ], + "type": "text", + "content": " is computed as " + }, + { + "bbox": [ + 104, + 630, + 505, + 708 + ], + "type": "inline_equation", + "content": "R_{open} = 1 - \\exp(-\\left(S_{\\theta}(\\hat{y}) - S_{\\theta}(y)\\right) \\times \\beta)" + }, + { + "bbox": [ + 104, + 630, + 505, + 708 + ], + "type": "text", + "content": " if " + }, + { + "bbox": [ + 104, + 630, + 505, + 708 + ], + "type": "inline_equation", + "content": "S_{\\theta}(\\hat{y}) > S_{\\theta}(y)" + }, + { + "bbox": [ + 104, + 630, + 505, + 708 + ], + "type": "text", + "content": " else 0, where " + }, + { + "bbox": [ + 104, + 630, + 505, + 708 + ], + "type": "inline_equation", + "content": "S_{\\theta}(y)" + }, + { + "bbox": [ + 104, + 630, + 505, + 708 + ], + "type": "text", + "content": " is the score of the reference answer, and " + }, + { + "bbox": [ + 104, + 630, + 505, + 708 + ], + "type": "inline_equation", + "content": "\\beta" + }, + { + "bbox": [ + 104, + 630, + 505, + 708 + ], + "type": "text", + "content": " is a smoothing hyperparameter. 
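The open-ended reward formula can be sketched directly. The default beta below is an arbitrary placeholder, since the text treats it as a tunable smoothing hyperparameter:

```python
import math

def open_ended_reward(s_hat: float, s_ref: float, beta: float = 0.5) -> float:
    """R_open = 1 - exp(-(S(y_hat) - S(y_ref)) * beta) when the sampled
    response outscores the reference answer, else 0. The result lies in
    [0, 1), matching the scale of the rule-based rewards."""
    if s_hat > s_ref:
        return 1.0 - math.exp(-(s_hat - s_ref) * beta)
    return 0.0
```

A larger score margin over the reference answer saturates toward 1, while any response that fails to beat the reference earns nothing.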
Note that the open-ended reward is normalized into [0,1], consistent with the scale of the rule-based rewards, which partially mitigates reward hacking during training." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 118, + 719, + 315, + 731 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 118, + 719, + 315, + 731 + ], + "spans": [ + { + "bbox": [ + 118, + 719, + 315, + 731 + ], + "type": "text", + "content": "3https://github.com/huggingface/Math-Verify" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 750, + 308, + 759 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 750, + 308, + 759 + ], + "spans": [ + { + "bbox": [ + 302, + 750, + 308, + 759 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 81, + 506, + 172 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 81, + 506, + 172 + ], + "spans": [ + { + "bbox": [ + 104, + 81, + 506, + 172 + ], + "type": "text", + "content": "Implicit Format Reward Unlike Guo et al. (2025) and its subsequent works, which use a separate reward term for format correctness, we discard this separate term and instead make format correctness supersede all other rewards. Namely, whenever we are unable to extract a valid response from the raw answer, the reward is 0. We empirically find that by specifying the output format in the system prompt, the model is able to generate answers with correct formats through trial and error. The implicit format reward design simplifies the reward computation. 
Further, it may yield better performance since less restriction is imposed on the exploration process (Zeng et al., 2025)." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 192, + 312, + 207 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 192, + 312, + 207 + ], + "spans": [ + { + "bbox": [ + 104, + 192, + 312, + 207 + ], + "type": "text", + "content": "4.2 Effect of SFT on GRPO Training" + } + ] + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 110, + 223, + 511, + 294 + ], + "blocks": [ + { + "bbox": [ + 110, + 223, + 511, + 294 + ], + "lines": [ + { + "bbox": [ + 110, + 223, + 511, + 294 + ], + "spans": [ + { + "bbox": [ + 110, + 223, + 511, + 294 + ], + "type": "table", + "html": "
GRPO BackboneMathVistaMathVisionMathVerse (vision-only)DynaMath (worst)WeMathLogicVistaAvg.
Qwen2VL-7B-Inst59.619.833.915.230.536.032.5
Qwen2VL-7B-Inst+SFT43.714.719.03.211.127.319.8(-39%)
Qwen2VL-7B-Base59.318.233.511.423.236.230.7
Qwen2VL-7B-Base+SFT49.516.425.06.420.432.725.7(-16%)
", + "image_path": "494e2a32b86da5c55a1e0d1d8d99d6176e929684cf3b7d5cc327bbe413e37432.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 302, + 504, + 335 + ], + "lines": [ + { + "bbox": [ + 104, + 302, + 504, + 335 + ], + "spans": [ + { + "bbox": [ + 104, + 302, + 504, + 335 + ], + "type": "text", + "content": "Table 3: Benchmark results of models trained with GRPO on different backbones. SFT+GRPO yields performance degradation, indicating that SFT is NOT compatible with GRPO in multimodal reasoning." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 104, + 358, + 506, + 427 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 358, + 506, + 427 + ], + "spans": [ + { + "bbox": [ + 104, + 358, + 506, + 427 + ], + "type": "text", + "content": "SFT is NOT Compatible with GRPO in Multimodal Reasoning. Although we reveal in Section 3 that SFT alone leads to a performance drop in multimodal reasoning, it is still unclear whether SFT plays a crucial role in aiding GRPO, like the golden key in DeepSeek-R1. We experiment with different backbones for GRPO training. Specifically, we adopt Qwen2VL-7B-Base and Qwen2VL-7B-Inst, and perform SFT on them with 25K samples, followed by GRPO training." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 430, + 506, + 532 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 430, + 506, + 532 + ], + "spans": [ + { + "bbox": [ + 104, + 430, + 506, + 532 + ], + "type": "text", + "content": "From Table 3, we observe that models undergoing SFT before GRPO training perform worse than those trained with GRPO alone, presenting an average drop of " + }, + { + "bbox": [ + 104, + 430, + 506, + 532 + ], + "type": "inline_equation", + "content": "8.9\\%" + }, + { + "bbox": [ + 104, + 430, + 506, + 532 + ], + "type": "text", + "content": " across Qwen2VL-Base and Qwen2VL-Inst compared to their non-SFT counterparts. We also find that SFT introduces more degradation to instruction models than to base models without instruction-following capabilities. For instance, Qwen2VL-Inst suffers a " + }, + { + "bbox": [ + 104, + 430, + 506, + 532 + ], + "type": "inline_equation", + "content": "7.7\\%" + }, + { + "bbox": [ + 104, + 430, + 506, + 532 + ], + "type": "text", + "content": " larger performance drop than Qwen2VL-Base post-SFT, suggesting that SFT can compromise the instruction-following ability crucial for effective GRPO training. Taken together, these results suggest that SFT is currently incompatible with GRPO in the context of multimodal reasoning, impairing both base and instruction-tuned LVLMs." 
+ } + ] + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 147, + 542, + 462, + 651 + ], + "blocks": [ + { + "bbox": [ + 147, + 542, + 462, + 651 + ], + "lines": [ + { + "bbox": [ + 147, + 542, + 462, + 651 + ], + "spans": [ + { + "bbox": [ + 147, + 542, + 462, + 651 + ], + "type": "image", + "image_path": "0a2902b9c361de315e237c783ff063178db359c0868a18abba7b7e6f8b5d3c04.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 660, + 504, + 685 + ], + "lines": [ + { + "bbox": [ + 104, + 660, + 504, + 685 + ], + "spans": [ + { + "bbox": [ + 104, + 660, + 504, + 685 + ], + "type": "text", + "content": "Figure 5: Impact of SFT with 5K and 10K samples before GRPO. Smaller SFT datasets still jeopardize GRPO performance." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 708, + 504, + 734 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 708, + 504, + 734 + ], + "spans": [ + { + "bbox": [ + 104, + 708, + 504, + 734 + ], + "type": "text", + "content": "Smaller SFT Dataset Still Jeopardizes GRPO Performance. Since we reveal in Section 3.2 that more SFT data yields lower performance, we investigate the effect of downsizing" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 504, + 117 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 504, + 117 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 504, + 117 + ], + "type": "text", + "content": "the SFT training set. Following the PPL filtering method in Section 3, we select top-10K and top-5K samples from VLAA-Thinking-SFT-126K to finetune Qwen2.5-VL-3B, followed by GRPO training. For comparison, we also conduct GRPO training without SFT." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 121, + 504, + 178 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 121, + 504, + 178 + ], + "spans": [ + { + "bbox": [ + 104, + 121, + 504, + 178 + ], + "type": "text", + "content": "We present the performance of Qwen2.5-VL-3B on each task in Figure 5. A clear observation is that applying SFT on 5K examples prior to GRPO significantly degrades performance compared to using GRPO alone, showing an average drop of " + }, + { + "bbox": [ + 104, + 121, + 504, + 178 + ], + "type": "inline_equation", + "content": "13.5\\%" + }, + { + "bbox": [ + 104, + 121, + 504, + 178 + ], + "type": "text", + "content": ". Moreover, scaling up SFT data to 10K yields only a marginal improvement of " + }, + { + "bbox": [ + 104, + 121, + 504, + 178 + ], + "type": "inline_equation", + "content": "0.8\\%" + }, + { + "bbox": [ + 104, + 121, + 504, + 178 + ], + "type": "text", + "content": ". These results further support that SFT before GRPO can hinder the model's learning capability." 
+ } + ] + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 116, + 187, + 304, + 278 + ], + "blocks": [ + { + "bbox": [ + 116, + 187, + 304, + 278 + ], + "lines": [ + { + "bbox": [ + 116, + 187, + 304, + 278 + ], + "spans": [ + { + "bbox": [ + 116, + 187, + 304, + 278 + ], + "type": "image", + "image_path": "fe03487cf0983066d249faec0960558ec4d35cd0b8c40253ea78650b9c538dd3.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 280, + 504, + 324 + ], + "lines": [ + { + "bbox": [ + 104, + 280, + 504, + 324 + ], + "spans": [ + { + "bbox": [ + 104, + 280, + 504, + 324 + ], + "type": "text", + "content": "Figure 6: Response length (left) and reward (right) during training. Training with only GRPO yields the lowest response length and yet the highest final reward and best benchmark performance, indicating that response length, reward, and model performance are NOT necessarily related." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 304, + 187, + 492, + 279 + ], + "blocks": [ + { + "bbox": [ + 304, + 187, + 492, + 279 + ], + "lines": [ + { + "bbox": [ + 304, + 187, + 492, + 279 + ], + "spans": [ + { + "bbox": [ + 304, + 187, + 492, + 279 + ], + "type": "image", + "image_path": "3525a416ff60c0c03f616e180ccbfe5e048883553436ac72d26e02043a002f8b.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 333, + 506, + 422 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 333, + 506, + 422 + ], + "spans": [ + { + "bbox": [ + 104, + 333, + 506, + 422 + ], + "type": "text", + "content": "Response Length, Reward, and Model Performance are NOT Necessarily Related. Prior work in RL suggests that longer responses often correlate with better reasoning and higher RL rewards (Guo et al., 2025; Zhou et al., 2025; Chen et al., 2025b). 
However, our findings in Figure 6 reveal that response length and reward in GRPO are not reliable indicators of reasoning ability. For instance, the 10K SFT+GRPO model produces the longest responses but ends up with lower rewards than the GRPO-only model (" + }, + { + "bbox": [ + 104, + 333, + 506, + 422 + ], + "type": "inline_equation", + "content": "\\sim 0.35" + }, + { + "bbox": [ + 104, + 333, + 506, + 422 + ], + "type": "text", + "content": " vs. " + }, + { + "bbox": [ + 104, + 333, + 506, + 422 + ], + "type": "inline_equation", + "content": "\\sim 0.5" + }, + { + "bbox": [ + 104, + 333, + 506, + 422 + ], + "type": "text", + "content": ") after training. Similarly, the 5K SFT+GRPO variant shows moderate length and reward but still underperforms on downstream tasks." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 427, + 506, + 538 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 427, + 506, + 538 + ], + "spans": [ + { + "bbox": [ + 104, + 427, + 506, + 538 + ], + "type": "text", + "content": "Interestingly, both SFT-ed models start with higher initial rewards (e.g., " + }, + { + "bbox": [ + 104, + 427, + 506, + 538 + ], + "type": "inline_equation", + "content": "\\sim 0.20" + }, + { + "bbox": [ + 104, + 427, + 506, + 538 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 104, + 427, + 506, + 538 + ], + "type": "inline_equation", + "content": "10\\mathrm{K}" + }, + { + "bbox": [ + 104, + 427, + 506, + 538 + ], + "type": "text", + "content": " SFT+GRPO vs. " + }, + { + "bbox": [ + 104, + 427, + 506, + 538 + ], + "type": "inline_equation", + "content": "\\sim 0.05" + }, + { + "bbox": [ + 104, + 427, + 506, + 538 + ], + "type": "text", + "content": " for GRPO-only), which is likely due to their early learning experience with supervision since SFT and GRPO data share the same distribution. However, they exhibit limited reward improvement during training, whereas the GRPO-only model rapidly surpasses them. 
These trends further reveal that SFT merely provides a higher \"lower bound\" for RL training, yet it may lower the \"upper bound\" since the reasoning SFT data constrains the model's exploration paths. Therefore, reasoning is a natively emerging ability that is more likely to be developed through RL, not SFT. While SFT-ed models may appear to reason, their behavior is closer to pattern imitation, a form of pseudo-reasoning that lacks generalizable reasoning skills." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 559, + 293, + 574 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 559, + 293, + 574 + ], + "spans": [ + { + "bbox": [ + 104, + 559, + 293, + 574 + ], + "type": "text", + "content": "4.3 GRPO Training without SFT" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 584, + 506, + 629 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 584, + 506, + 629 + ], + "spans": [ + { + "bbox": [ + 104, + 584, + 506, + 629 + ], + "type": "text", + "content": "Following the findings in the previous section, we directly conduct GRPO training, which yields four models: VLAA-Thinker-Qwen2-VL-2B, VLAA-Thinker-Qwen2-VL-7B, VLAA-Thinker-Qwen2.5-VL-3B, VLAA-Thinker-Qwen2.5-VL-7B. We also train on a base model of Qwen2-VL-7B, and the resulting model is named VLAA-Thinker-Qwen2-7B-Zero." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 634, + 504, + 690 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 634, + 504, + 690 + ], + "spans": [ + { + "bbox": [ + 104, + 634, + 504, + 690 + ], + "type": "text", + "content": "We sample 4 times for each query with temperature 0.8. Rollout and training batch size are set to 512 and 256, respectively. 
We train our model for 1 episode (outer loop) and 1 epoch per episode (inner loop) on " + }, + { + "bbox": [ + 104, + 634, + 504, + 690 + ], + "type": "inline_equation", + "content": "8^{*}\\mathrm{H}100" + }, + { + "bbox": [ + 104, + 634, + 504, + 690 + ], + "type": "text", + "content": " GPUs with 49 steps. More details of training setup are in Appendix C.1. We follow the identical evaluation setup as described in Section 3.1. We present evaluation results in Table 4 and list our main findings below." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 709, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 709, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 709, + 506, + 733 + ], + "type": "text", + "content": "Direct GRPO Training Boosts Model Performance. Models trained directly with GRPO on the VL-Thinking RL consistently outperform their respective base models. For example," + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 759 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 759 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 759 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 110, + 80, + 503, + 256 + ], + "blocks": [ + { + "bbox": [ + 110, + 80, + 503, + 256 + ], + "lines": [ + { + "bbox": [ + 110, + 80, + 503, + 256 + ], + "spans": [ + { + "bbox": [ + 110, + 80, + 503, + 256 + ], + "type": "table", + "html": "
ModelMathVistaMathVisionMathVerse (vision-only)DynaMath (worst)WeMathLogicVistaAvg.
4B-scale LVLMs
Qwen2-VL-2B48.016.117.53.810.826.620.5
Qwen2.5-VL-3B61.221.931.213.222.940.331.8
VLM-R1-Math-030562.721.932.213.030.040.533.4
VLAA-Thinker-Qwen2-2B43.614.819.03.412.630.420.3
VLAA-Thinker-Qwen2.5-3B61.024.436.418.233.838.535.4
7B-scale LVLMs
LLaVA-OneVision-7B58.618.319.39.020.933.326.6
InternLM-XComposer2.564.017.816.28.214.134.725.8
InternVL2.5-8B64.517.022.89.423.536.028.9
InternVL2-8B58.320.020.49.220.233.626.9
Qwen2-VL-7B61.619.225.411.022.333.328.8
Qwen2.5-VL-7B68.125.441.121.836.247.940.1
VLAA-Thinker-Qwen2-7B-Zero59.318.233.511.423.236.230.7
VLAA-Thinker-Qwen2-7B59.619.833.915.230.536.032.5
VLAA-Thinker-Qwen2.5-7B68.026.448.222.441.548.542.5
", + "image_path": "c08d20935207087b106a4ad318993bac1745d63af6ef4cd6f9ba0f41a4bfcef2.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 263, + 506, + 288 + ], + "lines": [ + { + "bbox": [ + 104, + 263, + 506, + 288 + ], + "spans": [ + { + "bbox": [ + 104, + 263, + 506, + 288 + ], + "type": "text", + "content": "Table 4: Evaluation results of 6 math reasoning benchmarks on Open LMM Leaderboard. VLAA-Thinker models significantly outperform baselines and other models." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 104, + 309, + 506, + 377 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 309, + 506, + 377 + ], + "spans": [ + { + "bbox": [ + 104, + 309, + 506, + 377 + ], + "type": "text", + "content": "at the 7B scale, two models trained on VL-Thinking achieve an average score of " + }, + { + "bbox": [ + 104, + 309, + 506, + 377 + ], + "type": "inline_equation", + "content": "36.5\\%" + }, + { + "bbox": [ + 104, + 309, + 506, + 377 + ], + "type": "text", + "content": ", marking a " + }, + { + "bbox": [ + 104, + 309, + 506, + 377 + ], + "type": "inline_equation", + "content": "2.0\\%" + }, + { + "bbox": [ + 104, + 309, + 506, + 377 + ], + "type": "text", + "content": " improvement over their base model average of " + }, + { + "bbox": [ + 104, + 309, + 506, + 377 + ], + "type": "inline_equation", + "content": "34.5\\%" + }, + { + "bbox": [ + 104, + 309, + 506, + 377 + ], + "type": "text", + "content": ". Moreover, our best-performing 7B model consistently outperforms other similarly sized LVLMs (e.g., InternVL2.5-8B, LLaVA-OneVision-7B), while our 3B model surpasses the recent reasoning-focused model, VLM-R1-Math, by " + }, + { + "bbox": [ + 104, + 309, + 506, + 377 + ], + "type": "inline_equation", + "content": "1.1\\%" + }, + { + "bbox": [ + 104, + 309, + 506, + 377 + ], + "type": "text", + "content": " on average. 
These results once again demonstrate that GRPO significantly enhances reasoning capabilities, even without additional SFT." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 385, + 506, + 475 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 385, + 506, + 475 + ], + "spans": [ + { + "bbox": [ + 104, + 385, + 506, + 475 + ], + "type": "text", + "content": "Stronger Instruction Model Leads to Better Post-GRPO Reasoning. An interesting observation is that model with better instruction tuning generally performs better. The instruction-aligned Qwen2-7B model, after GRPO, outperforms its unaligned counterpart VLAA-Thinker-Qwen2-7B-Zero by " + }, + { + "bbox": [ + 104, + 385, + 506, + 475 + ], + "type": "inline_equation", + "content": "1.8\\%" + }, + { + "bbox": [ + 104, + 385, + 506, + 475 + ], + "type": "text", + "content": " on average " + }, + { + "bbox": [ + 104, + 385, + 506, + 475 + ], + "type": "inline_equation", + "content": "(31.3\\%" + }, + { + "bbox": [ + 104, + 385, + 506, + 475 + ], + "type": "text", + "content": " vs. " + }, + { + "bbox": [ + 104, + 385, + 506, + 475 + ], + "type": "inline_equation", + "content": "29.5\\%)" + }, + { + "bbox": [ + 104, + 385, + 506, + 475 + ], + "type": "text", + "content": ", with notable gains on harder tasks like DynaMath " + }, + { + "bbox": [ + 104, + 385, + 506, + 475 + ], + "type": "inline_equation", + "content": "(5.0\\%)" + }, + { + "bbox": [ + 104, + 385, + 506, + 475 + ], + "type": "text", + "content": " and WeMath " + }, + { + "bbox": [ + 104, + 385, + 506, + 475 + ], + "type": "inline_equation", + "content": "(3.1\\%)" + }, + { + "bbox": [ + 104, + 385, + 506, + 475 + ], + "type": "text", + "content": ". 
Moreover, using a stronger instruction-tuned model for GRPO further improves across both 3B and 7B scales — VLAA-Thinker-Qwen2.5 surpasses VLAA-Thinker-Qwen2 by " + }, + { + "bbox": [ + 104, + 385, + 506, + 475 + ], + "type": "inline_equation", + "content": "12.6\\%" + }, + { + "bbox": [ + 104, + 385, + 506, + 475 + ], + "type": "text", + "content": " on average, confirming that higher-quality instruction tuning leads to more effective post-RL reasoning." + } + ] + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 169, + 475, + 421, + 594 + ], + "blocks": [ + { + "bbox": [ + 169, + 475, + 421, + 594 + ], + "lines": [ + { + "bbox": [ + 169, + 475, + 421, + 594 + ], + "spans": [ + { + "bbox": [ + 169, + 475, + 421, + 594 + ], + "type": "image", + "image_path": "19f40235adb079c22c22a562dfb38d4a909739ff38f4bd7648ca83103cc54804.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 607, + 504, + 632 + ], + "lines": [ + { + "bbox": [ + 104, + 607, + 504, + 632 + ], + "spans": [ + { + "bbox": [ + 104, + 607, + 504, + 632 + ], + "type": "text", + "content": "Figure 7: Heatmap of different \"aha\" expressions generated by VLAA-Thinker models during training." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "content": "Emergence of Authentic Aha Moments. To show that our GRPO training can induce authentic self-reflection process, we plot the frequency of four aha expressions (\"alternatively\", \"double-check\", \"i should check\", \"wait\") for each VLAA-Thinker model in Figure 7. 
Since all models are trained using GRPO without being SFT-ed on distilled reasoning paths, all aha moments emerge from the GRPO process, demonstrating the model's self-developed reflective ability. Another finding is that the number of aha moments does not directly correlate with overall model performance, as more aha moments do not necessarily translate to higher reasoning scores." + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 80, + 189, + 94 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 80, + 189, + 94 + ], + "spans": [ + { + "bbox": [ + 105, + 80, + 189, + 94 + ], + "type": "text", + "content": "4.4 Ablations" + } + ] + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 111, + 117, + 504, + 228 + ], + "blocks": [ + { + "bbox": [ + 111, + 117, + 504, + 228 + ], + "lines": [ + { + "bbox": [ + 111, + 117, + 504, + 228 + ], + "spans": [ + { + "bbox": [ + 111, + 117, + 504, + 228 + ], + "type": "table", + "html": "
RowMethodDigitMathMCQIoUOpen-endedMViMVsWM
0Qwen2.5-VL-3B21.931.222.9
1w/o Digit23.534.628.8
2w/o Math21.432.727.0
3w/o MCQ21.533.918.4
4w/o IoU22.835.330.0
5All Rule-Based22.234.930.1
6Mixed Reward24.436.433.8

", + "image_path": "77f89a6f81276be573df3fa7e88e1d19bfb7df7e4db8a9fe4dfd25d29931bcc3.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 283, + 506, + 384 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 283, + 506, + 384 + ], + "spans": [ + { + "bbox": [ + 104, + 283, + 506, + 384 + ], + "type": "text", + "content": "Mixed Reward. To demonstrate the effectiveness of our mixed reward strategy, we perform an ablation study on Qwen2.5-VL-3B by selectively disabling individual reward components and evaluating performance across three math reasoning benchmarks, as shown in Table 5. The model trained with Mixed Reward achieves the best overall performance, with an average improvement of " + }, + { + "bbox": [ + 104, + 283, + 506, + 384 + ], + "type": "inline_equation", + "content": "6.2\\%" + }, + { + "bbox": [ + 104, + 283, + 506, + 384 + ], + "type": "text", + "content": " over the baseline, confirming the effectiveness of our reward design. Using only rule-based rewards (All Rule-Based) also yields consistent gains (e.g., " + }, + { + "bbox": [ + 104, + 283, + 506, + 384 + ], + "type": "inline_equation", + "content": "29.1\\%" + }, + { + "bbox": [ + 104, + 283, + 506, + 384 + ], + "type": "text", + "content": " vs. " + }, + { + "bbox": [ + 104, + 283, + 506, + 384 + ], + "type": "inline_equation", + "content": "25.3\\%" + }, + { + "bbox": [ + 104, + 283, + 506, + 384 + ], + "type": "text", + "content": " baseline), while removing specific components, especially MCQ (w/o MCQ), leads to substantial drops. These results highlight the critical role of rule-based rewards in GRPO for multimodal reasoning tasks." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "spans": [ + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "text", + "content": "Hyperparameters To search for better hyperparameters, we experiment with different learning rates (LR) and KL divergence settings on Qwen2.5-VL-3B. We start with a basic setting where LR anneals to zero following a cosine scheduler with no KL constraint. Results are shown in Table 6. LR1 uses a minimum learning rate of " + }, + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "inline_equation", + "content": "8e^{-7}" + }, + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "text", + "content": " with warmup ratio " + }, + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "inline_equation", + "content": "10\\%" + }, + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "text", + "content": ", whereas LR2 uses a minimum learning rate of " + }, + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "inline_equation", + "content": "5e^{-7}" + }, + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "text", + "content": " with warmup ratio " + }, + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "inline_equation", + "content": "3\\%" + }, + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "text", + "content": ". Since LR2 performs slightly better than LR1, we compare two KL settings on top of LR2. 
KL1 uses an initial KL of " + }, + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "inline_equation", + "content": "1e^{-2}" + }, + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "text", + "content": " and a target KL of " + }, + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "inline_equation", + "content": "5e^{-3}" + }, + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "text", + "content": ", whereas KL2 uses an initial KL coefficient of " + }, + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "inline_equation", + "content": "1e^{-3}" + }, + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "text", + "content": " and a target KL of " + }, + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "inline_equation", + "content": "5e^{-4}" + }, + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "text", + "content": ". We find that introducing KL constraints significantly improves the performance on MathVerse and DynaMath by " + }, + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "inline_equation", + "content": "1.1\\%" + }, + { + "bbox": [ + 104, + 406, + 355, + 567 + ], + "type": "text", + "content": " and" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 567, + 471, + 580 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 567, + 471, + 580 + ], + "spans": [ + { + "bbox": [ + 104, + 567, + 471, + 580 + ], + "type": "inline_equation", + "content": "3.2\\%" + }, + { + "bbox": [ + 104, + 567, + 471, + 580 + ], + "type": "text", + "content": ", respectively, and that using a smaller KL can encourage the model to evolve." 
+ } + ] + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 365, + 407, + 500, + 510 + ], + "blocks": [ + { + "bbox": [ + 104, + 235, + 504, + 258 + ], + "lines": [ + { + "bbox": [ + 104, + 235, + 504, + 258 + ], + "spans": [ + { + "bbox": [ + 104, + 235, + 504, + 258 + ], + "type": "text", + "content": "Table 5: Ablation of Mixed Reward on MVi: MathVision, MVs: MathVerse and WM: WeMath. A combination of rule-based and open-ended rewards yields significant boost in performance." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 365, + 407, + 500, + 510 + ], + "lines": [ + { + "bbox": [ + 365, + 407, + 500, + 510 + ], + "spans": [ + { + "bbox": [ + 365, + 407, + 500, + 510 + ], + "type": "table", + "html": "
SettingsMVsDMLV
Basic31.715.038.5
Learning Rate
+ LR133.016.038.1
+ LR233.515.638.3
KL Coef.
+ KL134.418.837.8
+ KL235.818.639.2
", + "image_path": "85e2de31cb56e15b9925e1ed8bad0a7db2adbbbbd97433446a6f6d123f2f4fb1.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_body" + }, + { + "bbox": [ + 359, + 517, + 506, + 550 + ], + "lines": [ + { + "bbox": [ + 359, + 517, + 506, + 550 + ], + "spans": [ + { + "bbox": [ + 359, + 517, + 506, + 550 + ], + "type": "text", + "content": "Table 6: Ablation on LR and KL Coef. on MVs: MathVerse, DM: DynaMath and LV: LogicVista." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_footnote" + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 603, + 197, + 618 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 603, + 197, + 618 + ], + "spans": [ + { + "bbox": [ + 105, + 603, + 197, + 618 + ], + "type": "text", + "content": "4.5 Case Study" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 632, + 506, + 711 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 632, + 506, + 711 + ], + "spans": [ + { + "bbox": [ + 104, + 632, + 506, + 711 + ], + "type": "text", + "content": "We provide an example showcasing the improvement of VLAA-Thinker over the original model in Appendix C.3. Qwen2.5VL-7B generates lengthy response with wrong reasoning traces. Although it outputs some self-reflective patterns like \"re-evaluate\", the final answer remains wrong. On the other hand, VLAA-Thinker-Qwen2.5VL-7B is able to reason on the right track, with only a minor mistake near the end of its thinking process. Nevertheless, the high-level idea and reasoning process is overall correct, demonstrating strong capability of solving complex reasoning tasks." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 79, + 220, + 94 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 79, + 220, + 94 + ], + "spans": [ + { + "bbox": [ + 105, + 79, + 220, + 94 + ], + "type": "text", + "content": "5 Related Work" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 114, + 506, + 293 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 114, + 506, + 293 + ], + "spans": [ + { + "bbox": [ + 104, + 114, + 506, + 293 + ], + "type": "text", + "content": "Vision-Language Reasoning Models. Recent advances in vision-language (VL) reasoning models build on the success of text-only reasoning systems like OpenAI's o1 (Jaech et al., 2024) and DeepSeek-R1 (Guo et al., 2025). Earlier VL methods, such as few-shot prompting and chain-of-thought (CoT), offered limited visual reasoning (Brown et al., 2020; Wei et al., 2022). Recently, LLaVA-CoT (Xu et al., 2024) adopts an SFT approach with 4-step structured outputs to enhance the model's reasoning, yet lacks flexibility due to its rigid output format. More recently, newer models incorporate more natural reasoning traces and reinforcement learning. VLM-R1 (Shen et al., 2025) and R1-V (Chen et al., 2025a) align multimodal LLMs using step-by-step reasoning and policy optimization. VisualThinker-R1-Zero (Zhou et al., 2025) goes further by training a 2B model via pure RL from scratch, achieving emergent inner reasoning. LMM-R1 (Peng et al., 2025) transfers CoT skills from language to vision through staged RL. 
Vision-R1 (Huang et al., 2025) combines reasoning trace supervision and RL with correctness and format rewards to train a strong 7B VL reasoner. Different from these concurrent works, we propose a high-quality multimodal reasoning dataset with R1-like reasoning traces for both SFT and RL, and provide a comprehensive study on training paradigms." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 297, + 506, + 441 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 297, + 506, + 441 + ], + "spans": [ + { + "bbox": [ + 104, + 297, + 506, + 441 + ], + "type": "text", + "content": "Reward Modeling in Reinforcement Learning. Reward design plays a central role in reasoning-oriented RL. While model-based rewards offer flexibility (Kwon et al., 2023; Wang et al., 2024a; Gao et al., 2024), they are prone to reward hacking (Eisenstein et al., 2023; Chen et al., 2024b; Fu et al., 2025), making them risky for reasoning tasks. Recent VL models prefer binary correctness rewards (Huang et al., 2025; Zhou et al., 2025) for math or QA tasks, directly reinforcing accurate outputs. Others apply rule-based rewards, enforcing structured formats or logic chains (Liu et al., 2025; Deng et al., 2025a). While recent studies deploy strong reward models for enhancing LVLM reasoning, they are limited to specific domains or simpler tasks (Muhtar et al., 2025; Tu et al., 2025). GRPO-style methods use relative ranking within output batches to guide optimization without value critics (Shao et al., 2024; Guo et al., 2025). Our Mix Reward objective combines model-based and rule-based rewards across four complex rewarding scenarios, yielding better performance than existing approaches."
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 488, + 204, + 503 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 488, + 204, + 503 + ], + "spans": [ + { + "bbox": [ + 105, + 488, + 204, + 503 + ], + "type": "text", + "content": "6 Conclusion" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 524, + 506, + 625 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 524, + 506, + 625 + ], + "spans": [ + { + "bbox": [ + 104, + 524, + 506, + 625 + ], + "type": "text", + "content": "This work provides a comparative analysis of the effectiveness of leveraging SFT or RL (more specifically, GRPO) to build LVLMs with strong reasoning ability. We show by extensive experiments that distilling reasoning data and performing SFT is a deficient way to transfer reasoning ability across modalities. We then extend our dataset to GRPO training with a proposed mixed reward objective, which yields substantial improvement over the baseline models. We present several findings regarding combining SFT and GRPO and the correlation between reward, response length, and final performance. These results indicate that reasoning is a natively emerging ability acquired from RL, rather than SFT, which merely equips the model with 'pseudo-reasoning' ability." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 672, + 233, + 689 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 672, + 233, + 689 + ], + "spans": [ + { + "bbox": [ + 105, + 672, + 233, + 689 + ], + "type": "text", + "content": "Acknowledgement" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 708, + 504, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 708, + 504, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 708, + 504, + 733 + ], + "type": "text", + "content": "We thank the Microsoft Accelerate Foundation Models Research Program for supporting our computing needs."
+ } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 106, + 79, + 180, + 94 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 79, + 180, + 94 + ], + "spans": [ + { + "bbox": [ + 106, + 79, + 180, + 94 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 102, + 506, + 732 + ], + "type": "list", + "angle": 0, + "index": 17, + "blocks": [ + { + "bbox": [ + 105, + 102, + 506, + 139 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 102, + 506, + 139 + ], + "spans": [ + { + "bbox": [ + 105, + 102, + 506, + 139 + ], + "type": "text", + "content": "Zachary Ankner, Cody Blakeney, Kartik Sreenivasan, Max Marion, Matthew L Leavitt, and Mansheej Paul. Perplexed by perplexity: Perplexity-based data pruning with small reference models. arXiv preprint arXiv:2405.20541, 2024." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 144, + 506, + 178 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 144, + 506, + 178 + ], + "spans": [ + { + "bbox": [ + 107, + 144, + 506, + 178 + ], + "type": "text", + "content": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 
Language models are few-shot learners. NeurIPS, 2020." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 184, + 506, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 184, + 506, + 228 + ], + "spans": [ + { + "bbox": [ + 105, + 184, + 506, + 228 + ], + "type": "text", + "content": "Guiming Hardy Chen, Shunian Chen, Ruifei Zhang, Junying Chen, Xiangbo Wu, Zhiyi Zhang, Zhihong Chen, Jianquan Li, Xiang Wan, and Benyou Wang. Allava: Harnessing gpt4v-synthesized data for lite vision-language models. arXiv preprint arXiv:2402.11684, 2024a." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 235, + 506, + 269 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 235, + 506, + 269 + ], + "spans": [ + { + "bbox": [ + 105, + 235, + 506, + 269 + ], + "type": "text", + "content": "Liang Chen, Lei Li, Haozhe Zhao, Yifan Song, and Vinci. R1-v: Reinforcing super generalization ability in vision-language models with less than $3. https://github.com/Deep-Agent/R1-V, 2025a. Accessed: 2025-02-02." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 275, + 504, + 309 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 275, + 504, + 309 + ], + "spans": [ + { + "bbox": [ + 105, + 275, + 504, + 309 + ], + "type": "text", + "content": "Lichang Chen, Chen Zhu, Davit Soselia, Jiuhai Chen, Tianyi Zhou, Tom Goldstein, Heng Huang, Mohammad Shoeybi, and Bryan Catanzaro. Odin: Disentangled reward mitigates hacking in rlhf. arXiv preprint arXiv:2402.07319, 2024b." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 316, + 506, + 350 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 316, + 506, + 350 + ], + "spans": [ + { + "bbox": [ + 105, + 316, + 506, + 350 + ], + "type": "text", + "content": "Zhipeng Chen, Yingqian Min, Beichen Zhang, Jie Chen, Jinhao Jiang, Daixuan Cheng, Wayne Xin Zhao, Zheng Liu, Xu Miao, Yang Lu, et al. 
An empirical study on eliciting and improving r1-like reasoning models. arXiv preprint arXiv:2503.04548, 2025b." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 355, + 506, + 390 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 355, + 506, + 390 + ], + "spans": [ + { + "bbox": [ + 105, + 355, + 506, + 390 + ], + "type": "text", + "content": "Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V Le, Sergey Levine, and Yi Ma. Sft memorizes, rl generalizes: A comparative study of foundation model post-training. arXiv preprint arXiv:2501.17161, 2025." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 396, + 504, + 430 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 396, + 504, + 430 + ], + "spans": [ + { + "bbox": [ + 105, + 396, + 504, + 430 + ], + "type": "text", + "content": "Huilin Deng, Ding Zou, Rui Ma, Hongchen Luo, Yang Cao, and Yu Kang. Boosting the generalization and reasoning of vision language models with curriculum reinforcement learning. arXiv preprint arXiv:2503.07065, 2025a." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 435, + 506, + 470 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 435, + 506, + 470 + ], + "spans": [ + { + "bbox": [ + 105, + 435, + 506, + 470 + ], + "type": "text", + "content": "Yihe Deng, Hritik Bansal, Fan Yin, Nanyun Peng, Wei Wang, and Kai-Wei Chang. Openthinker: An early exploration to complex vision-language reasoning via iterative self-improvement. arXiv preprint arXiv:2503.17352, 2025b." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 475, + 504, + 521 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 475, + 504, + 521 + ], + "spans": [ + { + "bbox": [ + 105, + 475, + 504, + 521 + ], + "type": "text", + "content": "Haodong Duan, Junming Yang, Yuxuan Qiao, Xinyu Fang, Lin Chen, Yuan Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Jiaqi Wang, et al. Vlmevalkit: An open-source toolkit for evaluating large multi-modality models. In Proceedings of the 32nd ACM International Conference on Multimedia, pp. 11198-11201, 2024." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 527, + 506, + 572 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 527, + 506, + 572 + ], + "spans": [ + { + "bbox": [ + 105, + 527, + 506, + 572 + ], + "type": "text", + "content": "Jacob Eisenstein, Chirag Nagpal, Alekh Agarwal, Ahmad Beirami, Alex D'Amour, DJ Dvi-jotham, Adam Fisch, Katherine Heller, Stephen Pfohl, Deepak Ramachandran, et al. Helping or herding? reward model ensembles mitigate but do not eliminate reward hacking. arXiv preprint arXiv:2312.09244, 2023." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 578, + 506, + 622 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 578, + 506, + 622 + ], + "spans": [ + { + "bbox": [ + 105, + 578, + 506, + 622 + ], + "type": "text", + "content": "Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, and Rongrong Ji. Mme: A comprehensive evaluation benchmark for multimodal large language models, 2024. URL https://arxiv.org/abs/2306.13394." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 628, + 504, + 653 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 628, + 504, + 653 + ], + "spans": [ + { + "bbox": [ + 105, + 628, + 504, + 653 + ], + "type": "text", + "content": "Jiayi Fu, Xuandong Zhao, Chengyuan Yao, Heng Wang, Qi Han, and Yanghua Xiao. Reward shaping to mitigate reward hacking in rlhf. arXiv preprint arXiv:2502.18770, 2025." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 105, + 658, + 504, + 693 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 658, + 504, + 693 + ], + "spans": [ + { + "bbox": [ + 105, + 658, + 504, + 693 + ], + "type": "text", + "content": "Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, Jianhua Han, Hang Xu, Zhenguo Li, et al. G-llava: Solving geometric problem with multi-modal large language model. arXiv preprint arXiv:2312.11370, 2023." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 105, + 698, + 506, + 732 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 698, + 506, + 732 + ], + "spans": [ + { + "bbox": [ + 105, + 698, + 506, + 732 + ], + "type": "text", + "content": "Jiaxuan Gao, Shusheng Xu, Wenjie Ye, Weilin Liu, Chuyi He, Wei Fu, Zhiyu Mei, Guangju Wang, and Yi Wu. On designing effective rl reward at training time for llm reasoning. arXiv preprint arXiv:2410.15115, 2024." + } + ] + } + ], + "index": 16 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 506, + 733 + ], + "type": "list", + "angle": 0, + "index": 16, + "blocks": [ + { + "bbox": [ + 107, + 81, + 506, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 81, + 506, + 117 + ], + "spans": [ + { + "bbox": [ + 107, + 81, + 506, + 117 + ], + "type": "text", + "content": "Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 124, + 506, + 170 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 124, + 506, + 170 + ], + "spans": [ + { + "bbox": [ + 105, + 124, + 506, + 170 + ], + "type": "text", + "content": "Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. Vizwiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3608-3617, 2018." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 177, + 506, + 212 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 177, + 506, + 212 + ], + "spans": [ + { + "bbox": [ + 105, + 177, + 506, + 212 + ], + "type": "text", + "content": "Jian Hu, Xibin Wu, Zilin Zhu, Xianyu, Weixun Wang, Dehao Zhang, and Yu Cao. Openrlhf: An easy-to-use, scalable and high-performance rlhf framework. 
arXiv preprint arXiv:2405.11143, 2024." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 219, + 504, + 255 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 219, + 504, + 255 + ], + "spans": [ + { + "bbox": [ + 105, + 219, + 504, + 255 + ], + "type": "text", + "content": "Wenxuan Huang, Bohan Jia, Zijie Zhai, Shaosheng Cao, Zheyu Ye, Fei Zhao, Yao Hu, and Shaohui Lin. Vision-r1: Incentivizing reasoning capability in multimodal large language models. arXiv preprint arXiv:2503.06749, 2025." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 262, + 506, + 297 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 262, + 506, + 297 + ], + "spans": [ + { + "bbox": [ + 105, + 262, + 506, + 297 + ], + "type": "text", + "content": "Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 304, + 506, + 340 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 304, + 506, + 340 + ], + "spans": [ + { + "bbox": [ + 105, + 304, + 506, + 340 + ], + "type": "text", + "content": "Afrar Jahin, Arif Hassan Zidan, Yu Bao, Shizhe Liang, Tianming Liu, and Wei Zhang. Unveiling the mathematical reasoning in deepseek models: A comparative study of large language models. arXiv preprint arXiv:2503.10573, 2025." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 347, + 506, + 393 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 347, + 506, + 393 + ], + "spans": [ + { + "bbox": [ + 105, + 347, + 506, + 393 + ], + "type": "text", + "content": "Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2901-2910, 2017." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 400, + 506, + 436 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 400, + 506, + 436 + ], + "spans": [ + { + "bbox": [ + 105, + 400, + 506, + 436 + ], + "type": "text", + "content": "Robert Kirk, Ishita Mediratta, Christoforos Nalmpantis, Jelena Luketina, Eric Hambro, Edward Grefenstette, and Roberta Raileanu. Understanding the effects of rlhf on llm generalisation and diversity. arXiv preprint arXiv:2310.06452, 2023." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 443, + 504, + 468 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 443, + 504, + 468 + ], + "spans": [ + { + "bbox": [ + 105, + 443, + 504, + 468 + ], + "type": "text", + "content": "Minae Kwon, Sang Michael Xie, Kalesha Bullard, and Dorsa Sadigh. Reward design with language models. arXiv preprint arXiv:2303.00001, 2023." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 475, + 506, + 510 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 475, + 506, + 510 + ], + "spans": [ + { + "bbox": [ + 105, + 475, + 506, + 510 + ], + "type": "text", + "content": "Lei Li, Yuqi Wang, Runxin Xu, Peiyi Wang, Xiachong Feng, Lingpeng Kong, and Qi Liu. Multimodal arxiv: A dataset for improving scientific comprehension of large vision-language models. arXiv preprint arXiv:2403.00231, 2024a." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 517, + 506, + 552 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 517, + 506, + 552 + ], + "spans": [ + { + "bbox": [ + 105, + 517, + 506, + 552 + ], + "type": "text", + "content": "Ming Li, Yong Zhang, Shwai He, Zhitao Li, Hongyu Zhao, Jianzong Wang, Ning Cheng, and Tianyi Zhou. Superfiltering: Weak-to-strong data filtering for fast instruction-tuning. 
arXiv preprint arXiv:2402.00530, 2024b." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 559, + 506, + 593 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 559, + 506, + 593 + ], + "spans": [ + { + "bbox": [ + 105, + 559, + 506, + 593 + ], + "type": "text", + "content": "Adam Dahlgren Lindström and Savitha Sam Abraham. Clevr-math: A dataset for compositional language, visual and mathematical reasoning. arXiv preprint arXiv:2208.05358, 2022." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 601, + 504, + 636 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 601, + 504, + 636 + ], + "spans": [ + { + "bbox": [ + 105, + 601, + 504, + 636 + ], + "type": "text", + "content": "Ziyu Liu, Zeyi Sun, Yuhang Zang, Xiaoyi Dong, Yuhang Cao, Haodong Duan, Dahua Lin, and Jiaqi Wang. Visual-rft: Visual reinforcement fine-tuning. arXiv preprint arXiv:2503.01785, 2025." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 644, + 506, + 689 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 644, + 506, + 689 + ], + "spans": [ + { + "bbox": [ + 105, + 644, + 506, + 689 + ], + "type": "text", + "content": "Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. In International Conference on Learning Representations (ICLR), 2024." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 105, + 697, + 506, + 733 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 697, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 105, + 697, + 506, + 733 + ], + "type": "text", + "content": "Minesh Mathew, Dimosthenis Karatzas, and CV Jawahar. Docvqa: A dataset for vqa on document images. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pp. 2200-2209, 2021." 
+ } + ] + } + ], + "index": 15 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 506, + 732 + ], + "type": "list", + "angle": 0, + "index": 17, + "blocks": [ + { + "bbox": [ + 107, + 81, + 505, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 81, + 505, + 117 + ], + "spans": [ + { + "bbox": [ + 107, + 81, + 505, + 117 + ], + "type": "text", + "content": "Dilxat Muhtar, Enzhuo Zhang, Zhenshi Li, Feng Gu, Yanglangxing He, Pengfeng Xiao, and Xueliang Zhang. Quality-driven curation of remote sensing vision-language data via learned scoring models. arXiv preprint arXiv:2503.00743, 2025." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 122, + 506, + 159 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 122, + 506, + 159 + ], + "spans": [ + { + "bbox": [ + 105, + 122, + 506, + 159 + ], + "type": "text", + "content": "Yingzhe Peng, Gongrui Zhang, Miaosen Zhang, Zhiyuan You, Jie Liu, Qipeng Zhu, Kai Yang, Xingzhong Xu, Xin Geng, and Xu Yang. Lmm-r1: Empowering 3b lmms with strong reasoning abilities through two-stage rule-based rl. arXiv preprint arXiv:2503.07536, 2025." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 165, + 504, + 209 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 165, + 504, + 209 + ], + "spans": [ + { + "bbox": [ + 105, + 165, + 504, + 209 + ], + "type": "text", + "content": "Runqi Qiao, Qiuna Tan, Guanting Dong, Minhui Wu, Chong Sun, Xiaoshuai Song, Zhuoma GongQue, Shanglin Lei, Zhe Wei, Miaoxuan Zhang, et al. We-math: Does your large multimodal model achieve human-like mathematical reasoning? arXiv preprint arXiv:2407.01284, 2024." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 107, + 217, + 504, + 242 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 217, + 504, + 242 + ], + "spans": [ + { + "bbox": [ + 107, + 217, + 504, + 242 + ], + "type": "text", + "content": "John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 107, + 249, + 505, + 283 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 249, + 505, + 283 + ], + "spans": [ + { + "bbox": [ + 107, + 249, + 505, + 283 + ], + "type": "text", + "content": "Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 107, + 291, + 505, + 325 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 291, + 505, + 325 + ], + "spans": [ + { + "bbox": [ + 107, + 291, + 505, + 325 + ], + "type": "text", + "content": "Haozhan Shen, Zilun Zhang, Kangjia Zhao, Qianqian Zhang, Ruochen Xu, and Tiancheng Zhao. Vlm-r1: A stable and generalizable r1-style large vision-language model. https://github.com/om-ai-lab/VLM-R1, 2025. Accessed: 2025-02-15." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 107, + 333, + 506, + 356 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 333, + 506, + 356 + ], + "spans": [ + { + "bbox": [ + 107, + 333, + 506, + 356 + ], + "type": "text", + "content": "Haoqin Tu, Weitao Feng, Hardy Chen, Hui Liu, Xianfeng Tang, and Cihang Xie. Vilbench: A suite for vision-language process reward modeling. arXiv preprint arXiv:2503.20271, 2025." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 364, + 506, + 398 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 364, + 506, + 398 + ], + "spans": [ + { + "bbox": [ + 107, + 364, + 506, + 398 + ], + "type": "text", + "content": "Binghai Wang, Rui Zheng, Lu Chen, Yan Liu, Shihan Dou, Caishuang Huang, Wei Shen, Senjie Jin, Enyu Zhou, Chenyu Shi, et al. Secrets of rlhf in large language models part ii: Reward modeling. arXiv preprint arXiv:2401.06080, 2024a." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 107, + 406, + 505, + 450 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 406, + 505, + 450 + ], + "spans": [ + { + "bbox": [ + 107, + 406, + 505, + 450 + ], + "type": "text", + "content": "Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Houxing Ren, Aojun Zhou, Mingjie Zhan, and Hongsheng Li. Measuring multimodal mathematical reasoning with math-vision dataset. In The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024b. URL https://openreview.net/forum?id=QWTCxMpPA." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 107, + 458, + 504, + 491 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 458, + 504, + 491 + ], + "spans": [ + { + "bbox": [ + 107, + 458, + 504, + 491 + ], + "type": "text", + "content": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 
Chain-of-thought prompting elicits reasoning in large language models. NeurIPS, 2022." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 107, + 500, + 504, + 523 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 500, + 504, + 523 + ], + "spans": [ + { + "bbox": [ + 107, + 500, + 504, + 523 + ], + "type": "text", + "content": "Yijia Xiao, Edward Sun, Tianyu Liu, and Wei Wang. Logicvista: Multimodal llm logical reasoning benchmark in visual contexts. arXiv preprint arXiv:2407.04973, 2024." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 107, + 531, + 504, + 554 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 531, + 504, + 554 + ], + "spans": [ + { + "bbox": [ + 107, + 531, + 504, + 554 + ], + "type": "text", + "content": "Guowei Xu, Peng Jin, Hao Li, Yibing Song, Lichao Sun, and Li Yuan. Llava-cot: Let vision language models reason step-by-step, 2024. URL https://arxiv.org/abs/2411.10440." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 107, + 562, + 504, + 596 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 562, + 504, + 596 + ], + "spans": [ + { + "bbox": [ + 107, + 562, + 504, + 596 + ], + "type": "text", + "content": "Haoyan Yang, Ting Hua, Shangqian Gao, Binfeng Xu, Zheng Tang, Jie Xu, Hongxia Jin, and Vijay Srinivasan. Dynamic noise preference optimization for llm self-improvement via synthetic data. arXiv preprint arXiv:2502.05400, 2025a." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 107, + 604, + 506, + 647 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 604, + 506, + 647 + ], + "spans": [ + { + "bbox": [ + 107, + 604, + 506, + 647 + ], + "type": "text", + "content": "Yi Yang, Xiaoxuan He, Hongkun Pan, Xiyan Jiang, Yan Deng, Xingtao Yang, Haoyu Lu, Dacheng Yin, Fengyun Rao, Minfeng Zhu, et al. R1-onevision: Advancing generalized multimodal reasoning through cross-modal formalization. 
arXiv preprint arXiv:2503.10615, 2025b." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 107, + 656, + 504, + 690 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 656, + 504, + 690 + ], + "spans": [ + { + "bbox": [ + 107, + 656, + 504, + 690 + ], + "type": "text", + "content": "Yuhang Zang, Xiaoyi Dong, Pan Zhang, Yuhang Cao, Ziyu Liu, Shengyuan Ding, Shenxi Wu, Yubo Ma, Haodong Duan, Wenwei Zhang, et al. Internlm-xcomposer2. 5-reward: A simple yet effective multi-modal reward model. arXiv preprint arXiv:2501.12368, 2025." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 107, + 698, + 504, + 732 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 698, + 504, + 732 + ], + "spans": [ + { + "bbox": [ + 107, + 698, + 504, + 732 + ], + "type": "text", + "content": "Weihao Zeng, Yuzhen Huang, Qian Liu, Wei Liu, Keqing He, Zejun Ma, and Junxian He. Simplerl-zoo: Investigating and taming zero reinforcement learning for open base models in the wild. arXiv preprint arXiv:2503.18892, 2025." + } + ] + } + ], + "index": 16 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 107, + 81, + 504, + 250 + ], + "type": "list", + "angle": 0, + "index": 5, + "blocks": [ + { + "bbox": [ + 107, + 81, + 504, + 128 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 81, + 504, + 128 + ], + "spans": [ + { + "bbox": [ + 107, + 81, + 504, + 128 + ], + "type": "text", + "content": "Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Yu Qiao, et al. Mathverse: Does your multi-modal llm truly see the diagrams in visual math problems? In European Conference on Computer Vision, pp. 169-186. Springer, 2024." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 107, + 133, + 504, + 167 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 133, + 504, + 167 + ], + "spans": [ + { + "bbox": [ + 107, + 133, + 504, + 167 + ], + "type": "text", + "content": "Hengguang Zhou, Xinui Li, Ruochen Wang, Minhao Cheng, Tianyi Zhou, and Cho-Jui Hsieh. R1-zero's\" aha moment\" in visual reasoning on a 2b non-sft model. arXiv preprint arXiv:2503.05132, 2025." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 175, + 504, + 208 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 175, + 504, + 208 + ], + "spans": [ + { + "bbox": [ + 107, + 175, + 504, + 208 + ], + "type": "text", + "content": "Wenwen Zhuang, Xin Huang, Xiantao Zhang, and Jin Zeng. Math-puma: Progressive upward multimodal alignment to enhance mathematical reasoning. arXiv preprint arXiv:2408.08640, 2024." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 107, + 215, + 504, + 250 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 215, + 504, + 250 + ], + "spans": [ + { + "bbox": [ + 107, + 215, + 504, + 250 + ], + "type": "text", + "content": "Chengke Zou, Xingang Guo, Rui Yang, Junyu Zhang, Bin Hu, and Huan Zhang. Dynamath: A dynamic visual benchmark for evaluating mathematical reasoning robustness of vision language models. arXiv preprint arXiv:2411.00836, 2024." + } + ] + } + ], + "index": 4 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 79, + 243, + 95 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 79, + 243, + 95 + ], + "spans": [ + { + "bbox": [ + 105, + 79, + 243, + 95 + ], + "type": "text", + "content": "A Data Generation" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 111, + 180, + 126 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 111, + 180, + 126 + ], + "spans": [ + { + "bbox": [ + 105, + 111, + 180, + 126 + ], + "type": "text", + "content": "A.1 Prompt" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 137, + 504, + 160 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 137, + 504, + 160 + ], + "spans": [ + { + "bbox": [ + 104, + 137, 
+ 504, + 160 + ], + "type": "text", + "content": "We show the prompts for captioning (Figure 8), R1 answer distillation (Figure 9), rewriting (Figure 10) and verification (Figure 11)." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 87, + 171, + 181, + 185 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 171, + 181, + 185 + ], + "spans": [ + { + "bbox": [ + 87, + 171, + 181, + 185 + ], + "type": "text", + "content": "Prompt for Captioning" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 82, + 191, + 525, + 293 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 82, + 191, + 427, + 202 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 191, + 427, + 202 + ], + "spans": [ + { + "bbox": [ + 82, + 191, + 427, + 202 + ], + "type": "text", + "content": "You are a vision-language model generating a highly detailed caption of an image." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 83, + 203, + 386, + 213 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 203, + 386, + 213 + ], + "spans": [ + { + "bbox": [ + 83, + 203, + 386, + 213 + ], + "type": "text", + "content": "Summarize the environment or setting (indoor/outdoor, surroundings)." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 83, + 213, + 421, + 222 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 213, + 421, + 222 + ], + "spans": [ + { + "bbox": [ + 83, + 213, + 421, + 222 + ], + "type": "text", + "content": "Describe visible objects, people, or structures (colors, shapes, textures, positions)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 83, + 222, + 525, + 232 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 222, + 525, + 232 + ], + "spans": [ + { + "bbox": [ + 83, + 222, + 525, + 232 + ], + "type": "text", + "content": "Transcribe all text verbatim. For equations, use LaTeX when appropriate but do not solve or interpret them." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 83, + 232, + 416, + 242 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 232, + 416, + 242 + ], + "spans": [ + { + "bbox": [ + 83, + 232, + 416, + 242 + ], + "type": "text", + "content": "If structured data (tables, charts) appears, use Markdown formatting for clarity." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 83, + 242, + 452, + 252 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 242, + 452, + 252 + ], + "spans": [ + { + "bbox": [ + 83, + 242, + 452, + 252 + ], + "type": "text", + "content": "Include labels, annotations, brand names, or logos, if any, otherwise don't mention them." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 83, + 252, + 407, + 262 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 252, + 407, + 262 + ], + "spans": [ + { + "bbox": [ + 83, + 252, + 407, + 262 + ], + "type": "text", + "content": "Note any visible expressions or emotional tone factually, without speculation." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 83, + 262, + 342, + 272 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 262, + 342, + 272 + ], + "spans": [ + { + "bbox": [ + 83, + 262, + 342, + 272 + ], + "type": "text", + "content": "## Maintain a logical order: from overall context to finer details." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 83, + 272, + 350, + 282 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 272, + 350, + 282 + ], + "spans": [ + { + "bbox": [ + 83, + 272, + 350, + 282 + ], + "type": "text", + "content": "## Provide only the caption without extra context or commentary." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 83, + 282, + 520, + 293 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 282, + 520, + 293 + ], + "spans": [ + { + "bbox": [ + 83, + 282, + 520, + 293 + ], + "type": "text", + "content": "## Be unbiased and faithful in your description, using natural language and Markdown only where relevant." + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 190, + 315, + 418, + 328 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 315, + 418, + 328 + ], + "spans": [ + { + "bbox": [ + 190, + 315, + 418, + 328 + ], + "type": "text", + "content": "Figure 8: Prompt for captioning with GPT-4-Turbo." + } + ] + } + ], + "index": 16, + "type": "text" + }, + { + "bbox": [ + 87, + 344, + 179, + 357 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 344, + 179, + 357 + ], + "spans": [ + { + "bbox": [ + 87, + 344, + 179, + 357 + ], + "type": "text", + "content": "Prompt for Distillation" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 82, + 364, + 527, + 396 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 364, + 527, + 396 + ], + "spans": [ + { + "bbox": [ + 82, + 364, + 527, + 396 + ], + "type": "text", + "content": "You have advanced visual perception abilities and can directly analyze images as if you are looking at them. You will be provided with detailed visual descriptions, but you should interpret them as if they represent your actual visual understanding rather than text-based captions." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 82, + 403, + 527, + 436 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 403, + 527, + 436 + ], + "spans": [ + { + "bbox": [ + 82, + 403, + 527, + 436 + ], + "type": "text", + "content": "Answer questions as if you are visually perceiving the scene, not reading a caption. 
Provide natural and confident responses about objects, relationships, and numerical or spatial reasoning. Use a descriptive, visually grounded tone, avoiding mention of text." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 82, + 443, + 527, + 476 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 443, + 527, + 476 + ], + "spans": [ + { + "bbox": [ + 82, + 443, + 527, + 476 + ], + "type": "text", + "content": "Never mention that you are reading text or captions. Infer spatial relationships, numerical properties, and logical conclusions based on the perceived \"image.\" If information is unclear, respond naturally as if there are visual limitations (e.g., 'It appears that...')." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 83, + 483, + 123, + 506 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 483, + 123, + 506 + ], + "spans": [ + { + "bbox": [ + 83, + 483, + 123, + 506 + ], + "type": "text", + "content": "Caption: {caption}" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 83, + 514, + 129, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 514, + 129, + 536 + ], + "spans": [ + { + "bbox": [ + 83, + 514, + 129, + 536 + ], + "type": "text", + "content": "Question: {question}" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 190, + 565, + 419, + 578 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 565, + 419, + 578 + ], + "spans": [ + { + "bbox": [ + 190, + 565, + 419, + 578 + ], + "type": "text", + "content": "Figure 9: Prompt for distillation with Deepseek-R1." 
+ } + ] + } + ], + "index": 23, + "type": "text" + }, + { + "bbox": [ + 105, + 603, + 263, + 618 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 603, + 263, + 618 + ], + "spans": [ + { + "bbox": [ + 105, + 603, + 263, + 618 + ], + "type": "text", + "content": "A.2 Aha-Moment Filtering" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 104, + 628, + 506, + 664 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 628, + 506, + 664 + ], + "spans": [ + { + "bbox": [ + 104, + 628, + 506, + 664 + ], + "type": "text", + "content": "We use the following list of keywords to identify aha moments: wait, again, double-check, hmm, mistake, alternatively, check, i should confirm. All answers are matched with the logic: has_aha = any([aha in text.lower() for aha in ahas])." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 105, + 684, + 419, + 697 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 684, + 419, + 697 + ], + "spans": [ + { + "bbox": [ + 105, + 684, + 419, + 697 + ], + "type": "text", + "content": "A.3 Sample Demonstration for VLAA-Thinking-SFT-126K" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 104, + 708, + 506, + 734 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 708, + 506, + 734 + ], + "spans": [ + { + "bbox": [ + 104, + 708, + 506, + 734 + ], + "type": "text", + "content": "We show several examples from VLAA-Thinking-SFT-126K in Figure 14, Figure 15, Figure 16, Figure 17 and Figure 18." + } + ] + } + ], + "index": 27 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "bbox": [ + 87, + 83, + 175, + 95 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 83, + 175, + 95 + ], + "spans": [ + { + "bbox": [ + 87, + 83, + 175, + 95 + ], + "type": "text", + "content": "Prompt for Rewriting" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 82, + 102, + 526, + 125 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 102, + 526, + 125 + ], + "spans": [ + { + "bbox": [ + 82, + 102, + 526, + 125 + ], + "type": "text", + "content": "You will receive a snippet of text that references a \"description\" or \"caption\" of an image. Your task is to produce a **nearly identical** version of that text with **minimal** changes, focusing on the following:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 81, + 132, + 526, + 184 + ], + "type": "list", + "angle": 0, + "index": 8, + "blocks": [ + { + "bbox": [ + 81, + 132, + 526, + 143 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 132, + 526, + 143 + ], + "spans": [ + { + "bbox": [ + 81, + 132, + 526, + 143 + ], + "type": "text", + "content": "1. 
**Replace references to \"description\", \"caption\" and \"rationale\"* with wording that references *** the image.\"**" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 81, + 143, + 383, + 153 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 143, + 383, + 153 + ], + "spans": [ + { + "bbox": [ + 81, + 143, + 383, + 153 + ], + "type": "text", + "content": "- For example, \"The description says...\" could become \"The image shows...\"" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 82, + 153, + 344, + 163 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 153, + 344, + 163 + ], + "spans": [ + { + "bbox": [ + 82, + 153, + 344, + 163 + ], + "type": "text", + "content": "- \"The caption suggests...\" could become \"The image suggests...\"" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 82, + 163, + 345, + 173 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 163, + 345, + 173 + ], + "spans": [ + { + "bbox": [ + 82, + 163, + 345, + 173 + ], + "type": "text", + "content": "- \"Based on the rationale...\" could become \"Based on the image...\"" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 82, + 173, + 449, + 184 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 173, + 449, + 184 + ], + "spans": [ + { + "bbox": [ + 82, + 173, + 449, + 184 + ], + "type": "text", + "content": "- Make sure the replacement sounds natural but does " + }, + { + "bbox": [ + 82, + 173, + 449, + 184 + ], + "type": "inline_equation", + "content": "^{**}" + }, + { + "bbox": [ + 82, + 173, + 449, + 184 + ], + "type": "text", + "content": "not\\*\\* otherwise change the meaning." 
+ } + ] + } + ], + "index": 7 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 81, + 191, + 526, + 233 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 81, + 191, + 526, + 213 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 191, + 526, + 213 + ], + "spans": [ + { + "bbox": [ + 81, + 191, + 526, + 213 + ], + "type": "text", + "content": "2. **Preserve all line breaks, punctuation, and spacing** as much as possible, and make **no additional edits** outside of these replacements." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 82, + 221, + 276, + 233 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 221, + 276, + 233 + ], + "spans": [ + { + "bbox": [ + 82, + 221, + 276, + 233 + ], + "type": "text", + "content": "3. You should only output the rewritten content." + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 83, + 242, + 154, + 264 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 242, + 154, + 264 + ], + "spans": [ + { + "bbox": [ + 83, + 242, + 154, + 264 + ], + "type": "text", + "content": "Here is the input: {input}" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 174, + 285, + 435, + 300 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 174, + 285, + 435, + 300 + ], + "spans": [ + { + "bbox": [ + 174, + 285, + 435, + 300 + ], + "type": "text", + "content": "Figure 10: Prompt for answer rewriting with GPT-4-Turbo." 
+ } + ] + } + ], + "index": 13, + "type": "text" + }, + { + "bbox": [ + 87, + 313, + 180, + 326 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 87, + 313, + 180, + 326 + ], + "spans": [ + { + "bbox": [ + 87, + 313, + 180, + 326 + ], + "type": "text", + "content": "Prompt for Verification" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 82, + 333, + 179, + 343 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 333, + 179, + 343 + ], + "spans": [ + { + "bbox": [ + 82, + 333, + 179, + 343 + ], + "type": "text", + "content": "You are a fair evaluator." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 82, + 343, + 330, + 354 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 343, + 330, + 354 + ], + "spans": [ + { + "bbox": [ + 82, + 343, + 330, + 354 + ], + "type": "text", + "content": "You will be given a groundtruth and an answer from a model." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 82, + 354, + 408, + 364 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 354, + 408, + 364 + ], + "spans": [ + { + "bbox": [ + 82, + 354, + 408, + 364 + ], + "type": "text", + "content": "If the answer aligns with the groundtruth, output \"Yes\". Otherwise, output \"No\"." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 82, + 364, + 255, + 374 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 364, + 255, + 374 + ], + "spans": [ + { + "bbox": [ + 82, + 364, + 255, + 374 + ], + "type": "text", + "content": "Your output should only be \"Yes\" or \"No\"." 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 83, + 383, + 137, + 404 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 383, + 137, + 404 + ], + "spans": [ + { + "bbox": [ + 83, + 383, + 137, + 404 + ], + "type": "text", + "content": "groundtruth: {gold}" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 83, + 415, + 116, + 434 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 415, + 116, + 434 + ], + "spans": [ + { + "bbox": [ + 83, + 415, + 116, + 434 + ], + "type": "text", + "content": "answer: {pred}" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 183, + 456, + 425, + 470 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 183, + 456, + 425, + 470 + ], + "spans": [ + { + "bbox": [ + 183, + 456, + 425, + 470 + ], + "type": "text", + "content": "Figure 11: Prompt for verification with GPT-3.5-Turbo." + } + ] + } + ], + "index": 21, + "type": "text" + }, + { + "bbox": [ + 105, + 488, + 312, + 505 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 488, + 312, + 505 + ], + "spans": [ + { + "bbox": [ + 105, + 488, + 312, + 505 + ], + "type": "text", + "content": "B Details of SFT Experiments" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 105, + 521, + 185, + 536 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 521, + 185, + 536 + ], + "spans": [ + { + "bbox": [ + 105, + 521, + 185, + 536 + ], + "type": "text", + "content": "B.1 Training" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 104, + 547, + 506, + 593 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 547, + 506, + 593 + ], + "spans": [ + { + "bbox": [ + 104, + 547, + 506, + 593 + ], + "type": "text", + "content": "To enhance the instruction following ability, we append task-specific instructions (i.e., MCQ, short answer) to questions. The system prompt shown in Figure 12 is used. We use a global batch size of 128. 
Models are trained for 190 steps on 25K samples and 985 steps on 126K samples. All experiments are run on " + }, + { + "bbox": [ + 104, + 547, + 506, + 593 + ], + "type": "inline_equation", + "content": "8^{*}\\mathrm{H}100" + }, + { + "bbox": [ + 104, + 547, + 506, + 593 + ], + "type": "text", + "content": " GPUs." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 104, + 597, + 505, + 632 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 597, + 505, + 632 + ], + "spans": [ + { + "bbox": [ + 104, + 597, + 505, + 632 + ], + "type": "text", + "content": "Interestingly, we observe loss spikes for 25K SFT training on Qwen2-VL-7B which causes model collapse. Therefore, we run the settings for multiple times until we obtain a normal loss curve, and use that checkpoint for evaluation." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 129, + 646, + 479, + 700 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 646, + 479, + 700 + ], + "spans": [ + { + "bbox": [ + 129, + 646, + 479, + 700 + ], + "type": "text", + "content": "You are VL-Thinking, a helpful assistant with excellent reasoning ability. A user asks you a question, and you should try to solve it. You should first think about the reasoning process in the mind and then provides the user with the answer. The reasoning process and answer are enclosed within and tags, respectively, i.e., reasoning process here answer here ." + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 172, + 709, + 436, + 723 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 172, + 709, + 436, + 723 + ], + "spans": [ + { + "bbox": [ + 172, + 709, + 436, + 723 + ], + "type": "text", + "content": "Figure 12: System Prompt used for training and evaluation." 
+ } + ] + } + ], + "index": 27, + "type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "18" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 80, + 196, + 93 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 80, + 196, + 93 + ], + "spans": [ + { + "bbox": [ + 105, + 80, + 196, + 93 + ], + "type": "text", + "content": "B.2 Evaluation" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 106, + 504, + 162 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 106, + 504, + 162 + ], + "spans": [ + { + "bbox": [ + 104, + 106, + 504, + 162 + ], + "type": "text", + "content": "We adopt VLMEvalKit (Duan et al., 2024) for all evaluation experiments. We set use(custom_prompt to False following the settings of most models in the toolkit. For higher efficiency, we set maxPixels to " + }, + { + "bbox": [ + 104, + 106, + 504, + 162 + ], + "type": "inline_equation", + "content": "256^{*}32^{*}32" + }, + { + "bbox": [ + 104, + 106, + 504, + 162 + ], + "type": "text", + "content": ", and max_new_tokens to 800. We also set system prompt as the one we used for training for a consistent training-test behavior. The other hyperparameters are default to the original toolkit." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 166, + 340, + 179 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 166, + 340, + 179 + ], + "spans": [ + { + "bbox": [ + 104, + 166, + 340, + 179 + ], + "type": "text", + "content": "We specify the split of datasets and metrics reported:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 127, + 188, + 463, + 275 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 128, + 188, + 447, + 200 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 128, + 188, + 447, + 200 + ], + "spans": [ + { + "bbox": [ + 128, + 188, + 447, + 200 + ], + "type": "text", + "content": "1. MathVista: The Test Mini split of MathVista dataset; overall accuracy." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 127, + 204, + 414, + 216 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 127, + 204, + 414, + 216 + ], + "spans": [ + { + "bbox": [ + 127, + 204, + 414, + 216 + ], + "type": "text", + "content": "2. MathVision: The Full test set of MathVision; overall accuracy." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 127, + 218, + 463, + 230 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 127, + 218, + 463, + 230 + ], + "spans": [ + { + "bbox": [ + 127, + 218, + 463, + 230 + ], + "type": "text", + "content": "3. MathVerse: The Test Mini split of MathVerse; accuracy of \"Vision Only\"." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 127, + 233, + 405, + 245 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 127, + 233, + 405, + 245 + ], + "spans": [ + { + "bbox": [ + 127, + 233, + 405, + 245 + ], + "type": "text", + "content": "4. DynaMath: The Full test set of DynaMath; overall accuracy." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 127, + 248, + 392, + 260 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 127, + 248, + 392, + 260 + ], + "spans": [ + { + "bbox": [ + 127, + 248, + 392, + 260 + ], + "type": "text", + "content": "5. WeMath: The Test Mini split of WeMath; \"Score (Strict)\"." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 127, + 263, + 403, + 275 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 127, + 263, + 403, + 275 + ], + "spans": [ + { + "bbox": [ + 127, + 263, + 403, + 275 + ], + "type": "text", + "content": "6. LogicVista: The Full test set of LogicVista; overall accuracy." + } + ] + } + ], + "index": 9 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 104, + 298, + 329, + 316 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 298, + 329, + 316 + ], + "spans": [ + { + "bbox": [ + 104, + 298, + 329, + 316 + ], + "type": "text", + "content": "C Details of GRPO Experiments" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 331, + 186, + 345 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 331, + 186, + 345 + ], + "spans": [ + { + "bbox": [ + 104, + 331, + 186, + 345 + ], + "type": "text", + "content": "C.1 Training" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 356, + 504, + 402 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 356, + 504, + 402 + ], + "spans": [ + { + "bbox": [ + 104, + 356, + 504, + 402 + ], + "type": "text", + "content": "We adapt our code from OpenRLHF framework (Hu et al., 2024). To suit for our need of deploying a reward model on the same machine, we offload the reward model to CPU and only move it to GPU when performing rollouts and scoring. This design saves valuable GPU memory which accelerates the training process." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 406, + 506, + 473 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 406, + 506, + 473 + ], + "spans": [ + { + "bbox": [ + 104, + 406, + 506, + 473 + ], + "type": "text", + "content": "We also perform dataset-specific inspection and find some issues for several datasets. For example, although ArxivQA contains only MCQ, the answer format includes \"A\", \"A)\", \"(a)\", etc. And in the synthesis subset of Math PUMA, we find that some solutions only contain the value of solved unknown variables when the questions ask to output the entire function expression. We fix these issues by rule-based filtering and GPT-assisted rewriting, aiming to improve the quality of the VL-Thinking dataset." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 104, + 495, + 197, + 507 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 495, + 197, + 507 + ], + "spans": [ + { + "bbox": [ + 104, + 495, + 197, + 507 + ], + "type": "text", + "content": "C.2 Evaluation" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 104, + 520, + 443, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 520, + 443, + 533 + ], + "spans": [ + { + "bbox": [ + 104, + 520, + 443, + 533 + ], + "type": "text", + "content": "We evaluate our models with an identical setting described in Appendix B.2." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 104, + 554, + 200, + 568 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 554, + 200, + 568 + ], + "spans": [ + { + "bbox": [ + 104, + 554, + 200, + 568 + ], + "type": "text", + "content": "C.3 Case Study" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 104, + 578, + 504, + 603 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 578, + 504, + 603 + ], + "spans": [ + { + "bbox": [ + 104, + 578, + 504, + 603 + ], + "type": "text", + "content": "We present a case demonstrating the improvement of VLAA-Thinker-Qwen2.5VL-7B over its backbone in Figure 13." + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "19" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 18 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 165, + 177, + 231, + 232 + ], + "blocks": [ + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "lines": [ + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "spans": [ + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "type": "text", + "content": "As shown in the figure, the angle " + }, + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "type": "inline_equation", + "content": "O" + }, + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "type": "text", + "content": " to circle " + }, + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "type": "inline_equation", + "content": 
"A" + }, + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "type": "text", + "content": " at the center of triangle " + }, + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "type": "inline_equation", + "content": "AB" + }, + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "type": "inline_equation", + "content": "AC" + }, + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "type": "text", + "content": ". \nChoices: A: " + }, + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "type": "inline_equation", + "content": "36^{\\circ}" + }, + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "type": "text", + "content": "; B: " + }, + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "type": "inline_equation", + "content": "54^{\\circ}" + }, + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "type": "text", + "content": "; C: " + }, + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "type": "inline_equation", + "content": "60^{\\circ}" + }, + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "type": "text", + "content": "; D: " + }, + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "type": "inline_equation", + "content": "27^{\\circ}" + }, + { + "bbox": [ + 153, + 159, + 237, + 173 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 165, + 177, + 231, + 232 + ], + "lines": [ + { + "bbox": [ + 165, + 177, + 231, + 232 + ], + "spans": [ + { + "bbox": [ + 165, + 177, + 231, + 232 + ], + "type": "image", + "image_path": "c2b0dc50ac4d0abcbe175f3c8f580538ff626eb4b3b8cd2705a27b16e943f8bd.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 632, + 504, + 656 + ], + "lines": [ + { + "bbox": [ + 104, + 632, + 504, + 656 + ], + "spans": [ + { + "bbox": [ + 104, + 632, + 504, + 656 + ], + "type": "text", + "content": "Figure 13: A case from MathVerse testmini (sample index 20). Markdowns are rendered for illustration purpose. Wrong reasoning paths are colored red." + } + ] + } + ], + "index": 43, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 339, + 172, + 405, + 182 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 339, + 172, + 405, + 182 + ], + "spans": [ + { + "bbox": [ + 339, + 172, + 405, + 182 + ], + "type": "text", + "content": "Question in Image" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 261, + 184, + 477, + 202 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 261, + 184, + 477, + 202 + ], + "spans": [ + { + "bbox": [ + 261, + 184, + 477, + 202 + ], + "type": "text", + "content": "As shown in the figure, if angle " + }, + { + "bbox": [ + 261, + 184, + 477, + 202 + ], + "type": "inline_equation", + "content": "A = 36.0" + }, + { + "bbox": [ + 261, + 184, + 477, + 202 + ], + "type": "text", + "content": ", then angle " + }, + { + "bbox": [ + 261, + 184, + 477, + 202 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 261, + 184, + 477, + 202 + ], + "type": "text", + "content": " is equal to () \nChoices: A: " + }, + { + "bbox": [ + 261, + 184, + 477, + 202 + ], + "type": "inline_equation", + "content": "36^{\\circ}" + }, + { + "bbox": [ + 261, + 
184, + 477, + 202 + ], + "type": "text", + "content": " B: " + }, + { + "bbox": [ + 261, + 184, + 477, + 202 + ], + "type": "inline_equation", + "content": "54^{\\circ}" + }, + { + "bbox": [ + 261, + 184, + 477, + 202 + ], + "type": "text", + "content": " C: " + }, + { + "bbox": [ + 261, + 184, + 477, + 202 + ], + "type": "inline_equation", + "content": "60^{\\circ}" + }, + { + "bbox": [ + 261, + 184, + 477, + 202 + ], + "type": "text", + "content": " D: " + }, + { + "bbox": [ + 261, + 184, + 477, + 202 + ], + "type": "inline_equation", + "content": "27^{\\circ}" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 278, + 233, + 332, + 242 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 278, + 233, + 332, + 242 + ], + "spans": [ + { + "bbox": [ + 278, + 233, + 332, + 242 + ], + "type": "text", + "content": "Qwen2.5VL-7B" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 133, + 242, + 358, + 251 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 242, + 358, + 251 + ], + "spans": [ + { + "bbox": [ + 133, + 242, + 358, + 251 + ], + "type": "text", + "content": "To solve the problem, we need to use some properties of circles and tangents." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 133, + 258, + 233, + 266 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 258, + 233, + 266 + ], + "spans": [ + { + "bbox": [ + 133, + 258, + 233, + 266 + ], + "type": "text", + "content": "1. Identify the given information:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 137, + 267, + 241, + 282 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 137, + 267, + 241, + 274 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 267, + 241, + 274 + ], + "spans": [ + { + "bbox": [ + 137, + 267, + 241, + 274 + ], + "type": "text", + "content": "- AB is tangent to circle O at point B." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 137, + 275, + 212, + 282 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 275, + 212, + 282 + ], + "spans": [ + { + "bbox": [ + 137, + 275, + 212, + 282 + ], + "type": "text", + "content": "- The angle " + }, + { + "bbox": [ + 137, + 275, + 212, + 282 + ], + "type": "inline_equation", + "content": "\\angle AOB = 36^{\\circ}" + } + ] + } + ], + "index": 9 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 133, + 283, + 225, + 291 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 283, + 225, + 291 + ], + "spans": [ + { + "bbox": [ + 133, + 283, + 225, + 291 + ], + "type": "text", + "content": "2. Understand the properties:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 137, + 291, + 473, + 307 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 137, + 291, + 473, + 299 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 291, + 473, + 299 + ], + "spans": [ + { + "bbox": [ + 137, + 291, + 473, + 299 + ], + "type": "text", + "content": "- Since " + }, + { + "bbox": [ + 137, + 291, + 473, + 299 + ], + "type": "inline_equation", + "content": "AB" + }, + { + "bbox": [ + 137, + 291, + 473, + 299 + ], + "type": "text", + "content": " is tangent to the circle at " + }, + { + "bbox": [ + 137, + 291, + 473, + 299 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 137, + 291, + 473, + 299 + ], + "type": "text", + "content": ", the radius " + }, + { + "bbox": [ + 137, + 291, + 473, + 299 + ], + "type": "inline_equation", + "content": "OB" + }, + { + "bbox": [ + 137, + 291, + 473, + 299 + ], + "type": "text", + "content": " is perpendicular to the tangent " + }, + { + "bbox": [ + 137, + 291, + 473, + 299 + ], + "type": "inline_equation", + "content": "AB" + }, + { + "bbox": [ + 137, + 291, + 473, + 299 + ], + "type": "text", + "content": ". 
Therefore, " + }, + { + "bbox": [ + 137, + 291, + 473, + 299 + ], + "type": "inline_equation", + "content": "\\angle OBA = 90" + }, + { + "bbox": [ + 137, + 291, + 473, + 299 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 137, + 300, + 347, + 307 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 300, + 347, + 307 + ], + "spans": [ + { + "bbox": [ + 137, + 300, + 347, + 307 + ], + "type": "text", + "content": "-In triangle " + }, + { + "bbox": [ + 137, + 300, + 347, + 307 + ], + "type": "inline_equation", + "content": "OAB" + }, + { + "bbox": [ + 137, + 300, + 347, + 307 + ], + "type": "text", + "content": " , the sum of the angles is " + }, + { + "bbox": [ + 137, + 300, + 347, + 307 + ], + "type": "inline_equation", + "content": "180^{\\circ}" + }, + { + "bbox": [ + 137, + 300, + 347, + 307 + ], + "type": "text", + "content": " . So, we can find " + }, + { + "bbox": [ + 137, + 300, + 347, + 307 + ], + "type": "inline_equation", + "content": "\\angle AOB" + }, + { + "bbox": [ + 137, + 300, + 347, + 307 + ], + "type": "text", + "content": " .." 
+ } + ] + } + ], + "index": 13 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 254, + 308, + 356, + 315 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 254, + 308, + 356, + 315 + ], + "spans": [ + { + "bbox": [ + 254, + 308, + 356, + 315 + ], + "type": "inline_equation", + "content": "\\angle AOB + \\angle OBA + \\angle OAB = 180^{\\circ}" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 264, + 316, + 345, + 322 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 264, + 316, + 345, + 322 + ], + "spans": [ + { + "bbox": [ + 264, + 316, + 345, + 322 + ], + "type": "inline_equation", + "content": "36^{\\circ} + 90^{\\circ} + \\angle OAB = 180^{\\circ}" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 284, + 323, + 326, + 330 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 323, + 326, + 330 + ], + "spans": [ + { + "bbox": [ + 284, + 323, + 326, + 330 + ], + "type": "inline_equation", + "content": "\\angle OAB = 54^{\\circ}" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 133, + 331, + 233, + 338 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 331, + 233, + 338 + ], + "spans": [ + { + "bbox": [ + 133, + 331, + 233, + 338 + ], + "type": "text", + "content": "3. 
Determine the measure of " + }, + { + "bbox": [ + 133, + 331, + 233, + 338 + ], + "type": "inline_equation", + "content": "\\angle C" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 133, + 339, + 454, + 348 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 339, + 454, + 348 + ], + "spans": [ + { + "bbox": [ + 133, + 339, + 454, + 348 + ], + "type": "text", + "content": "- Since " + }, + { + "bbox": [ + 133, + 339, + 454, + 348 + ], + "type": "inline_equation", + "content": "\\angle OAB" + }, + { + "bbox": [ + 133, + 339, + 454, + 348 + ], + "type": "text", + "content": " is an inscribed angle that intercepts the arc BC, the measure of the central angle " + }, + { + "bbox": [ + 133, + 339, + 454, + 348 + ], + "type": "inline_equation", + "content": "\\angle BOC" + }, + { + "bbox": [ + 133, + 339, + 454, + 348 + ], + "type": "text", + "content": " that" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 133, + 348, + 366, + 355 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 348, + 366, + 355 + ], + "spans": [ + { + "bbox": [ + 133, + 348, + 366, + 355 + ], + "type": "text", + "content": "intercepts the same arc BC is twice the measure of the inscribed angle " + }, + { + "bbox": [ + 133, + 348, + 366, + 355 + ], + "type": "inline_equation", + "content": "\\angle OAB" + }, + { + "bbox": [ + 133, + 348, + 366, + 355 + ], + "type": "text", + "content": " :" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 242, + 356, + 368, + 362 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 242, + 356, + 368, + 362 + ], + "spans": [ + { + "bbox": [ + 242, + 356, + 368, + 362 + ], + "type": "interline_equation", + "content": "\\angle B O C = 2 \\times \\angle O A B = 2 \\times 5 4 ^ {\\circ} = 1 0 8 ^ {\\circ}", + "image_path": "e8e1403eae989cbf4a0219529fa431718b17726f5ba7bd3629a233bacd5aeb95.jpg" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 133, + 364, + 350, + 371 + ], + 
"type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 364, + 350, + 371 + ], + "spans": [ + { + "bbox": [ + 133, + 364, + 350, + 371 + ], + "type": "text", + "content": "- Now, in triangle OBC, the sum of the angles is " + }, + { + "bbox": [ + 133, + 364, + 350, + 371 + ], + "type": "inline_equation", + "content": "180^{\\circ}" + }, + { + "bbox": [ + 133, + 364, + 350, + 371 + ], + "type": "text", + "content": ". So, we can find " + }, + { + "bbox": [ + 133, + 364, + 350, + 371 + ], + "type": "inline_equation", + "content": "\\angle C" + }, + { + "bbox": [ + 133, + 364, + 350, + 371 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 257, + 372, + 360, + 379 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 372, + 360, + 379 + ], + "spans": [ + { + "bbox": [ + 257, + 372, + 360, + 379 + ], + "type": "inline_equation", + "content": "\\angle OBC + \\angle OCB + \\angle BOC = 180^{\\circ}" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 263, + 380, + 346, + 386 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 263, + 380, + 346, + 386 + ], + "spans": [ + { + "bbox": [ + 263, + 380, + 346, + 386 + ], + "type": "inline_equation", + "content": "90^{\\circ} + \\angle OCB + 108^{\\circ} = 180^{\\circ}" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 263, + 388, + 347, + 394 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 263, + 388, + 347, + 394 + ], + "spans": [ + { + "bbox": [ + 263, + 388, + 347, + 394 + ], + "type": "inline_equation", + "content": "\\angle OCB = 180^{\\circ} - 198^{\\circ} = 18^{\\circ}" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 132, + 396, + 433, + 403 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 396, + 433, + 403 + ], + "spans": [ + { + "bbox": [ + 132, + 396, + 433, + 403 + ], + "type": "text", + "content": "However, since " + }, + { + "bbox": [ + 132, + 396, + 433, + 403 + ], + 
"type": "inline_equation", + "content": "\\angle OCB" + }, + { + "bbox": [ + 132, + 396, + 433, + 403 + ], + "type": "text", + "content": " is not directly given as an option, we need to re-evaluate the problem with the co" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 133, + 403, + 475, + 419 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 403, + 475, + 419 + ], + "spans": [ + { + "bbox": [ + 133, + 403, + 475, + 419 + ], + "type": "text", + "content": "interpretation of the angle " + }, + { + "bbox": [ + 133, + 403, + 475, + 419 + ], + "type": "inline_equation", + "content": "\\angle C" + }, + { + "bbox": [ + 133, + 403, + 475, + 419 + ], + "type": "text", + "content": " which is actually the angle subtended by the arc BC at the circumference, which is half the central angle:" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 252, + 419, + 364, + 432 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 419, + 364, + 432 + ], + "spans": [ + { + "bbox": [ + 252, + 419, + 364, + 432 + ], + "type": "interline_equation", + "content": "\\angle C = \\frac {1}{2} \\times \\angle B O C = \\frac {1}{2} \\times 1 0 8 ^ {\\circ} = 5 4 ^ {\\circ}", + "image_path": "6c35054fe7662d6a569d6d46a85b0f8f0c70def9f4551853bc47d457230725cb.jpg" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 133, + 430, + 244, + 438 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 430, + 244, + 438 + ], + "spans": [ + { + "bbox": [ + 133, + 430, + 244, + 438 + ], + "type": "text", + "content": "Therefore, the measure of " + }, + { + "bbox": [ + 133, + 430, + 244, + 438 + ], + "type": "inline_equation", + "content": "\\angle C" + }, + { + "bbox": [ + 133, + 430, + 244, + 438 + ], + "type": "text", + "content": " is " + }, + { + "bbox": [ + 133, + 430, + 244, + 438 + ], + "type": "inline_equation", + "content": "54^{\\circ}" + }, + { + "bbox": [ + 133, + 430, + 244, + 438 + ], + "type": "text", + 
"content": "." + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 133, + 439, + 182, + 446 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 439, + 182, + 446 + ], + "spans": [ + { + "bbox": [ + 133, + 439, + 182, + 446 + ], + "type": "text", + "content": "The answer is: B" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 252, + 459, + 357, + 468 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 459, + 357, + 468 + ], + "spans": [ + { + "bbox": [ + 252, + 459, + 357, + 468 + ], + "type": "text", + "content": "VLAA-Thinker-Qwen2.5VL-7B" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 139, + 474, + 164, + 479 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 139, + 474, + 164, + 479 + ], + "spans": [ + { + "bbox": [ + 139, + 474, + 164, + 479 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 138, + 481, + 470, + 498 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 481, + 470, + 498 + ], + "spans": [ + { + "bbox": [ + 138, + 481, + 470, + 498 + ], + "type": "text", + "content": "To solve this problem, we need to use the properties of tangents and circles. Since AB is tangent to circle O at point B, the radius OB is perpendicular to the tangent AB at point B. This means that angle OBA is " + }, + { + "bbox": [ + 138, + 481, + 470, + 498 + ], + "type": "inline_equation", + "content": "90^{\\circ}" + }, + { + "bbox": [ + 138, + 481, + 470, + 498 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 138, + 505, + 470, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 505, + 470, + 521 + ], + "spans": [ + { + "bbox": [ + 138, + 505, + 470, + 521 + ], + "type": "text", + "content": "that angle A is " + }, + { + "bbox": [ + 138, + 505, + 470, + 521 + ], + "type": "inline_equation", + "content": "36^{\\circ}" + }, + { + "bbox": [ + 138, + 505, + 470, + 521 + ], + "type": "text", + "content": ", we can find angle AOB by subtracting angle A from " + }, + { + "bbox": [ + 138, + 505, + 470, + 521 + ], + "type": "inline_equation", + "content": "180^{\\circ}" + }, + { + "bbox": [ + 138, + 505, + 470, + 521 + ], + "type": "text", + "content": " (since the sum of angles in triangle AOB must be " + }, + { + "bbox": [ + 138, + 505, + 470, + 521 + ], + "type": "inline_equation", + "content": "180^{\\circ}" + }, + { + "bbox": [ + 138, + 505, + 470, + 521 + ], + "type": "text", + "content": "):" + } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 264, + 521, + 350, + 529 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 264, + 521, + 350, + 529 + ], + "spans": [ + { + "bbox": [ + 264, + 521, + 350, + 529 + ], + "type": "interline_equation", + "content": "\\angle A O B = 1 8 0 ^ {\\circ} - 3 6 ^ {\\circ} - 9 0 ^ {\\circ} = 5 4 ^ {\\circ}", + "image_path": "97da3b5e32451d061d6ec7d700ce9e3f4085b4ad25cedfee4a67dc020cc4a9f7.jpg" + } + ] + } + ], + "index": 35 + }, + { + "bbox": [ + 138, + 529, + 459, + 545 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 529, + 459, + 545 + ], + "spans": [ + { + "bbox": [ + 138, + 529, + 459, + 545 + ], + "type": "text", + "content": "Since angle AOB is a central angle subtending arc BC, and angle C is an angle subtending the same arc BC, the measure of angle C is half the measure of angle AOB:" + } + ] + } + ], + "index": 36 + }, + { + "bbox": [ + 279, + 546, + 335, + 558 + ], + "type": 
"interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 279, + 546, + 335, + 558 + ], + "spans": [ + { + "bbox": [ + 279, + 546, + 335, + 558 + ], + "type": "interline_equation", + "content": "\\angle C = \\frac {1}{2} \\times 5 4 ^ {\\circ} = 2 7 ^ {\\circ}", + "image_path": "2c445a281743847d237a6e154962aaadda8a7ceed193d6a6525739f75619501f.jpg" + } + ] + } + ], + "index": 37 + }, + { + "bbox": [ + 139, + 557, + 252, + 564 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 139, + 557, + 252, + 564 + ], + "spans": [ + { + "bbox": [ + 139, + 557, + 252, + 564 + ], + "type": "text", + "content": "Therefore, the correct answer is D: " + }, + { + "bbox": [ + 139, + 557, + 252, + 564 + ], + "type": "inline_equation", + "content": "27^{\\circ}" + } + ] + } + ], + "index": 38 + }, + { + "bbox": [ + 139, + 566, + 166, + 572 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 139, + 566, + 166, + 572 + ], + "spans": [ + { + "bbox": [ + 139, + 566, + 166, + 572 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 39 + }, + { + "bbox": [ + 139, + 574, + 170, + 580 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 139, + 574, + 170, + 580 + ], + "spans": [ + { + "bbox": [ + 139, + 574, + 170, + 580 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 40 + }, + { + "bbox": [ + 140, + 582, + 159, + 588 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 140, + 582, + 159, + 588 + ], + "spans": [ + { + "bbox": [ + 140, + 582, + 159, + 588 + ], + "type": "text", + "content": "D:27°" + } + ] + } + ], + "index": 41 + }, + { + "bbox": [ + 140, + 590, + 173, + 597 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 140, + 590, + 173, + 597 + ], + "spans": [ + { + "bbox": [ + 140, + 590, + 173, + 597 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 42 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": 
"header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "20" + } + ] + } + ], + "index": 44 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 19 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 238, + 91, + 376, + 127 + ], + "blocks": [ + { + "bbox": [ + 292, + 82, + 314, + 90 + ], + "lines": [ + { + "bbox": [ + 292, + 82, + 314, + 90 + ], + "spans": [ + { + "bbox": [ + 292, + 82, + 314, + 90 + ], + "type": "text", + "content": "GeoQA" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 238, + 91, + 376, + 127 + ], + "lines": [ + { + "bbox": [ + 238, + 91, + 376, + 127 + ], + "spans": [ + { + "bbox": [ + 238, + 91, + 376, + 127 + ], + "type": "image", + "image_path": "03caa4e5e601bbd06afd1b13c53f542b2ed4f600500abbfbdc6f3c9df128a2bf.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 236, + 129, + 369, + 164 + ], + "lines": [ + { + "bbox": [ + 236, + 129, + 369, + 164 + ], + "spans": [ + { + "bbox": [ + 236, + 129, + 369, + 164 + ], + "type": "text", + "content": "Question: Given a semicircular figure with the diameter represented by AB and intersecting chords AD and BC at point P, where angle DPB measures 60.0 degrees, calculate the value of tan(angle ADC) under the condition that D denotes the midpoint of the arc BC. 
Choices: A " + }, + { + "bbox": [ + 236, + 129, + 369, + 164 + ], + "type": "inline_equation", + "content": "\\frac{1}{2}" + }, + { + "bbox": [ + 236, + 129, + 369, + 164 + ], + "type": "text", + "content": ", B " + }, + { + "bbox": [ + 236, + 129, + 369, + 164 + ], + "type": "inline_equation", + "content": "\\frac{1}{3}" + }, + { + "bbox": [ + 236, + 129, + 369, + 164 + ], + "type": "text", + "content": ", C " + }, + { + "bbox": [ + 236, + 129, + 369, + 164 + ], + "type": "inline_equation", + "content": "\\frac{\\sqrt{3}}{3}" + }, + { + "bbox": [ + 236, + 129, + 369, + 164 + ], + "type": "text", + "content": ", D " + }, + { + "bbox": [ + 236, + 129, + 369, + 164 + ], + "type": "inline_equation", + "content": "\\sqrt{3}" + }, + { + "bbox": [ + 236, + 129, + 369, + 164 + ], + "type": "text", + "content": ". \n" + }, + { + "bbox": [ + 236, + 129, + 369, + 164 + ], + "type": "inline_equation", + "content": "\\bullet" + }, + { + "bbox": [ + 236, + 129, + 369, + 164 + ], + "type": "text", + "content": " Original Answer: C" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 238, + 738, + 422, + 750 + ], + "lines": [ + { + "bbox": [ + 238, + 738, + 422, + 750 + ], + "spans": [ + { + "bbox": [ + 238, + 738, + 422, + 750 + ], + "type": "text", + "content": "A VL-Thinking sample from GeoQA170K." + } + ] + } + ], + "index": 34, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 238, + 167, + 266, + 175 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 167, + 266, + 175 + ], + "spans": [ + { + "bbox": [ + 238, + 167, + 266, + 175 + ], + "type": "text", + "content": "VL-Thinking" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 238, + 180, + 369, + 198 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 180, + 369, + 198 + ], + "spans": [ + { + "bbox": [ + 238, + 180, + 369, + 198 + ], + "type": "text", + "content": "Okay. 
So I'm trying to solve this geometry problem. Let me see if I can visualize it correctly. There's a semicircle with diameter AB. Points C and D are on the curved edge, and chords AC and BD intersect at point P. Below AB, there's the center O. The angle DPB is 60 degrees, and D is the midpoint of arc BC. I need to find the value of tan " + }, + { + "bbox": [ + 238, + 180, + 369, + 198 + ], + "type": "inline_equation", + "content": "(\\angle A.D.C)" + }, + { + "bbox": [ + 238, + 180, + 369, + 198 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "spans": [ + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "text", + "content": "First, I should probably draw this out to get a better sense. I imagine a semicircle with " + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "inline_equation", + "content": "AB" + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "text", + "content": " as the diameter. " + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "inline_equation", + "content": "O" + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "text", + "content": " is the center, so " + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "inline_equation", + "content": "AO = OB" + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "text", + "content": ". Points C and D are on the semicircle, with " + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "text", + "content": " being the midpoint of arc BC. That means " + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "inline_equation", + "content": "BD" + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "text", + "content": " is equal to arc DC. 
Since it's a semicircle, the total arc from " + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "text", + "content": " is 180 degrees, so each arc is equal to one angle. The sum of these two angles is " + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "inline_equation", + "content": "2\\pi" + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "text", + "content": ", so " + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "inline_equation", + "content": "AB" + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "text", + "content": " is equal to " + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "inline_equation", + "content": "CD" + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "text", + "content": ". This is the exact measure of the semicircle. So if BC is some arc, " + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 238, + 201, + 370, + 221 + ], + "type": "text", + "content": " is halfway along it. But it isn't sure about the exact measure yet." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 238, + 224, + 370, + 237 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 224, + 370, + 237 + ], + "spans": [ + { + "bbox": [ + 238, + 224, + 370, + 237 + ], + "type": "text", + "content": "Chords " + }, + { + "bbox": [ + 238, + 224, + 370, + 237 + ], + "type": "inline_equation", + "content": "\\mathrm{AC}" + }, + { + "bbox": [ + 238, + 224, + 370, + 237 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 238, + 224, + 370, + 237 + ], + "type": "inline_equation", + "content": "\\mathrm{DB}" + }, + { + "bbox": [ + 238, + 224, + 370, + 237 + ], + "type": "text", + "content": " intersect at " + }, + { + "bbox": [ + 238, + 224, + 370, + 237 + ], + "type": "inline_equation", + "content": "P" + }, + { + "bbox": [ + 238, + 224, + 370, + 237 + ], + "type": "text", + "content": ", and angle DPB is 60 degrees. I remember that when two chords intersect, the products of the segments are equal. So " + }, + { + "bbox": [ + 238, + 224, + 370, + 237 + ], + "type": "inline_equation", + "content": "A'P = PC = B'P" + }, + { + "bbox": [ + 238, + 224, + 370, + 237 + ], + "type": "text", + "content": ". But I'm not sure if that's directly useful." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 238, + 240, + 370, + 257 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 240, + 370, + 257 + ], + "spans": [ + { + "bbox": [ + 238, + 240, + 370, + 257 + ], + "type": "text", + "content": "Since " + }, + { + "bbox": [ + 238, + 240, + 370, + 257 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 238, + 240, + 370, + 257 + ], + "type": "text", + "content": " is the midpoint of arc BC, it might have some special properties. In a semicircle, the midpoint of an arc would be equidistant from B and C. Also, angles subtended by the same arc are equal. 
Maybe I can use some properties of cyclic quadrilaterals here, but since it's a semicircle, all points lie on the circumference, so ABCD is a cyclic quadrilateral." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 238, + 260, + 367, + 272 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 260, + 367, + 272 + ], + "spans": [ + { + "bbox": [ + 238, + 260, + 367, + 272 + ], + "type": "text", + "content": "Wait, but AICD is a quadrilateral inscribed in a semicircle, which makes it a cyclic quadrilateral. In such cases, opposite angles sum to 180 degrees. But I'm not sure if that's applicable here because we're dealing with a circle." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 238, + 275, + 369, + 292 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 275, + 369, + 292 + ], + "spans": [ + { + "bbox": [ + 238, + 275, + 369, + 292 + ], + "type": "text", + "content": "Let me think about the angles. Angle DPB is 60 degrees. Since " + }, + { + "bbox": [ + 238, + 275, + 369, + 292 + ], + "type": "inline_equation", + "content": "\\mathrm{P}" + }, + { + "bbox": [ + 238, + 275, + 369, + 292 + ], + "type": "text", + "content": " is the intersection of chords AC and BD, maybe I can use the property that the angle between two chords is equal to half the sum of the measures of the intercepted arcs. So angle DPB would be half the sum of arcs DB and AC. But I'm not sure about the exact arcs involved." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 238, + 295, + 370, + 308 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 295, + 370, + 308 + ], + "spans": [ + { + "bbox": [ + 238, + 295, + 370, + 308 + ], + "type": "text", + "content": "Alternatively, maybe I can use triangle properties. 
Since " + }, + { + "bbox": [ + 238, + 295, + 370, + 308 + ], + "type": "inline_equation", + "content": "\\mathbb{E}" + }, + { + "bbox": [ + 238, + 295, + 370, + 308 + ], + "type": "text", + "content": " is the midpoint of arc " + }, + { + "bbox": [ + 238, + 295, + 370, + 308 + ], + "type": "inline_equation", + "content": "BC" + }, + { + "bbox": [ + 238, + 295, + 370, + 308 + ], + "type": "text", + "content": ", triangle " + }, + { + "bbox": [ + 238, + 295, + 370, + 308 + ], + "type": "inline_equation", + "content": "DBC" + }, + { + "bbox": [ + 238, + 295, + 370, + 308 + ], + "type": "text", + "content": " might be a triangle in the sense that it has a right triangle and a left triangle in the sense of a semicircle as a right triangle. So angle " + }, + { + "bbox": [ + 238, + 295, + 370, + 308 + ], + "type": "inline_equation", + "content": "BDC" + }, + { + "bbox": [ + 238, + 295, + 370, + 308 + ], + "type": "text", + "content": " to degree " + }, + { + "bbox": [ + 238, + 295, + 370, + 308 + ], + "type": "inline_equation", + "content": "90^{\\circ}" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 238, + 311, + 365, + 319 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 311, + 365, + 319 + ], + "spans": [ + { + "bbox": [ + 238, + 311, + 365, + 319 + ], + "type": "text", + "content": "Wait, that's an important point. Since D is on the semicircle, triangle BDC is right-angled at D. So angle BDC is 90 degrees. That might help in finding angle ADC." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 238, + 323, + 370, + 332 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 323, + 370, + 332 + ], + "spans": [ + { + "bbox": [ + 238, + 323, + 370, + 332 + ], + "type": "text", + "content": "1. need to find " + }, + { + "bbox": [ + 238, + 323, + 370, + 332 + ], + "type": "inline_equation", + "content": "\\tan (\\angle DC)" + }, + { + "bbox": [ + 238, + 323, + 370, + 332 + ], + "type": "text", + "content": ". 
Let's denote angle ADC as " + }, + { + "bbox": [ + 238, + 323, + 370, + 332 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 238, + 323, + 370, + 332 + ], + "type": "text", + "content": ". So " + }, + { + "bbox": [ + 238, + 323, + 370, + 332 + ], + "type": "inline_equation", + "content": "\\tan (\\theta)^{\\circ}" + }, + { + "bbox": [ + 238, + 323, + 370, + 332 + ], + "type": "text", + "content": " opposite/adjacent. To find this, I might need to find the lengths of the sides of triangle ADC or use some trigonometric identities." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "spans": [ + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": "text", + "content": "Let me consider the coordinates approach. Let's place the semicircle on a coordinate system with center " + }, + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": "inline_equation", + "content": "O" + }, + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": "text", + "content": " at " + }, + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": "inline_equation", + "content": "(0,0)" + }, + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": "inline_equation", + "content": "AB" + }, + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": "text", + "content": " as the diameter along the " + }, + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": "text", + "content": "-axis from " + }, + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": "inline_equation", + "content": "(-r,0)" + }, + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": 
"inline_equation", + "content": "(r,0)" + }, + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": "text", + "content": " is the radius. Then point " + }, + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": "text", + "content": ", being the midpoint of arc BC, would be somewhere on the semicircle. But without knowing the exact position of " + }, + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 238, + 335, + 370, + 352 + ], + "type": "text", + "content": ", this might be tricky." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 238, + 355, + 370, + 367 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 355, + 370, + 367 + ], + "spans": [ + { + "bbox": [ + 238, + 355, + 370, + 367 + ], + "type": "text", + "content": "Alternatively, maybe I can use power of a point. 
Since " + }, + { + "bbox": [ + 238, + 355, + 370, + 367 + ], + "type": "inline_equation", + "content": "P" + }, + { + "bbox": [ + 238, + 355, + 370, + 367 + ], + "type": "text", + "content": " is the intersection of chords " + }, + { + "bbox": [ + 238, + 355, + 370, + 367 + ], + "type": "inline_equation", + "content": "\\mathrm{AC}" + }, + { + "bbox": [ + 238, + 355, + 370, + 367 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 238, + 355, + 370, + 367 + ], + "type": "inline_equation", + "content": "\\mathrm{BD}" + }, + { + "bbox": [ + 238, + 355, + 370, + 367 + ], + "type": "text", + "content": ", the power of point " + }, + { + "bbox": [ + 238, + 355, + 370, + 367 + ], + "type": "inline_equation", + "content": "P" + }, + { + "bbox": [ + 238, + 355, + 370, + 367 + ], + "type": "text", + "content": " with respect to the circle is equal for both chords. So " + }, + { + "bbox": [ + 238, + 355, + 370, + 367 + ], + "type": "inline_equation", + "content": "PA:PC = PB:PD" + }, + { + "bbox": [ + 238, + 355, + 370, + 367 + ], + "type": "text", + "content": ". But that without specific choice of " + }, + { + "bbox": [ + 238, + 355, + 370, + 367 + ], + "type": "inline_equation", + "content": "P" + }, + { + "bbox": [ + 238, + 355, + 370, + 367 + ], + "type": "text", + "content": " would be a contradiction." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "spans": [ + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "text", + "content": "Wait, maybe I can assign some variables. Let's let the radius of the semicircle be " + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "inline_equation", + "content": "\\mathrm{r}" + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "text", + "content": ". 
Then " + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "inline_equation", + "content": "\\mathrm{AB} = 2r" + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "text", + "content": ". Let me assume " + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "inline_equation", + "content": "r = 1" + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "text", + "content": " for simplicity, so " + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "inline_equation", + "content": "\\mathrm{AB} = 2" + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "text", + "content": ". Then " + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "inline_equation", + "content": "\\mathrm{O}" + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "text", + "content": " is at " + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "inline_equation", + "content": "(0,0)" + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "inline_equation", + "content": "\\mathrm{A}" + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "text", + "content": " is at " + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "inline_equation", + "content": "(-1,0)" + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "inline_equation", + "content": "\\mathrm{B}" + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "text", + "content": " is at " + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "inline_equation", + "content": "(1,0)" + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "text", + "content": ". 
Let me place point C somewhere on the semicircle, say at an angle " + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "text", + "content": " from the " + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "text", + "content": "-axis. Then point D, being the midpoint of arc BC, would be at an angle " + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "inline_equation", + "content": "\\frac{\\theta}{2} + 180^{\\circ}" + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "text", + "content": " from the " + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 238, + 371, + 370, + 389 + ], + "type": "text", + "content": "-axis, but I'm not sure." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 238, + 395, + 371, + 407 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 395, + 371, + 407 + ], + "spans": [ + { + "bbox": [ + 238, + 395, + 371, + 407 + ], + "type": "text", + "content": "Alternatively, since " + }, + { + "bbox": [ + 238, + 395, + 371, + 407 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 238, + 395, + 371, + 407 + ], + "type": "text", + "content": " is the midpoint of arc BC, arc BD equals arc DC. So if I consider the central angles, angle BOD equals angle DOC. Since the total arc BC is some measure, say 2σ, then arcs BD and DC each measure σ." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 238, + 411, + 371, + 426 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 411, + 371, + 426 + ], + "spans": [ + { + "bbox": [ + 238, + 411, + 371, + 426 + ], + "type": "text", + "content": "Wait, maybe I can use the fact that angle DPB is 60 degrees. 
Using the intersecting chords theorem, angle DPB is equal to half the sum of the measures of arcs DB and AC. So angle DPB = " + }, + { + "bbox": [ + 238, + 411, + 371, + 426 + ], + "type": "inline_equation", + "content": "\\frac{1}{2} (\\text{arc } DB + \\text{arc } AC) = 60" + }, + { + "bbox": [ + 238, + 411, + 371, + 426 + ], + "type": "text", + "content": " degrees. Therefore, " + }, + { + "bbox": [ + 238, + 411, + 371, + 426 + ], + "type": "inline_equation", + "content": "\\text{arc } DB + \\text{arc } AC = 120^{\\circ}" + }, + { + "bbox": [ + 238, + 411, + 371, + 426 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 238, + 430, + 367, + 442 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 430, + 367, + 442 + ], + "spans": [ + { + "bbox": [ + 238, + 430, + 367, + 442 + ], + "type": "text", + "content": "But I'm not sure about the measure of arc AC. Alternatively, maybe I can express arc AC in terms of other arcs. Since the total semicircle is 180 degrees, arc AB is 180 degrees. Arc AC is part of that, so arc AC = 180 degrees minus arc BC." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 238, + 446, + 371, + 459 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 446, + 371, + 459 + ], + "spans": [ + { + "bbox": [ + 238, + 446, + 371, + 459 + ], + "type": "text", + "content": "Alternatively, since D is the midpoint of arc BC, " + }, + { + "bbox": [ + 238, + 446, + 371, + 459 + ], + "type": "inline_equation", + "content": "\\mathrm{BD}" + }, + { + "bbox": [ + 238, + 446, + 371, + 459 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 238, + 446, + 371, + 459 + ], + "type": "inline_equation", + "content": "\\mathrm{DC}" + }, + { + "bbox": [ + 238, + 446, + 371, + 459 + ], + "type": "text", + "content": " are equal arcs. 
Let's denote arc " + }, + { + "bbox": [ + 238, + 446, + 371, + 459 + ], + "type": "inline_equation", + "content": "\\mathrm{BD} = \\text{arc } \\mathrm{DC} = x" + }, + { + "bbox": [ + 238, + 446, + 371, + 459 + ], + "type": "text", + "content": ". Then " + }, + { + "bbox": [ + 238, + 446, + 371, + 459 + ], + "type": "inline_equation", + "content": "\\text{arc } \\mathrm{BC} = 2x" + }, + { + "bbox": [ + 238, + 446, + 371, + 459 + ], + "type": "text", + "content": ". Since the total semicircle is 180 degrees, arc AB = 180 degrees, so arc AC = 180 - 2x degrees." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 238, + 462, + 361, + 481 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 462, + 361, + 481 + ], + "spans": [ + { + "bbox": [ + 238, + 462, + 361, + 481 + ], + "type": "text", + "content": "Now, from the intersecting chords theorem, angle " + }, + { + "bbox": [ + 238, + 462, + 361, + 481 + ], + "type": "inline_equation", + "content": "\\mathrm{DPB} = \\frac{1}{2} (\\text{arc } DB + \\text{arc } AC) = 60" + }, + { + "bbox": [ + 238, + 462, + 361, + 481 + ], + "type": "text", + "content": " degrees. So " + }, + { + "bbox": [ + 238, + 462, + 361, + 481 + ], + "type": "inline_equation", + "content": "\\frac{1}{2} (x + (180 - 2x)) = 60" + }, + { + "bbox": [ + 238, + 462, + 361, + 481 + ], + "type": "text", + "content": ". Simplifying, " + }, + { + "bbox": [ + 238, + 462, + 361, + 481 + ], + "type": "inline_equation", + "content": "\\frac{1}{2} (180 - x) = 60" + }, + { + "bbox": [ + 238, + 462, + 361, + 481 + ], + "type": "text", + "content": ". Multiply both sides by " + }, + { + "bbox": [ + 238, + 462, + 361, + 481 + ], + "type": "inline_equation", + "content": "2\\colon 180 - x = 120" + }, + { + "bbox": [ + 238, + 462, + 361, + 481 + ], + "type": "text", + "content": ". Therefore, " + }, + { + "bbox": [ + 238, + 462, + 361, + 481 + ], + "type": "inline_equation", + "content": "x = 60" + }, + { + "bbox": [ + 238, + 462, + 361, + 481 + ], + "type": "text", + "content": " degrees." 
+ } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 238, + 483, + 371, + 499 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 483, + 371, + 499 + ], + "spans": [ + { + "bbox": [ + 238, + 483, + 371, + 499 + ], + "type": "text", + "content": "So arc BD = arc DC = 60 degrees, and arc BC = 120 degrees. Then arc AC = 180 - 120 = 60 degrees. Now, I need to find tan(angle ADC). Let's consider triangle ADC. Since D is on the semicircle, angle ADC is an inscribed angle. The measure of angle ADC is half the measure of arc AC. Since arc AC is 60 degrees, angle ADC = 30 degrees." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 238, + 502, + 370, + 529 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 502, + 370, + 529 + ], + "spans": [ + { + "bbox": [ + 238, + 502, + 370, + 529 + ], + "type": "text", + "content": "Therefore, " + }, + { + "bbox": [ + 238, + 502, + 370, + 529 + ], + "type": "inline_equation", + "content": "\\tan (\\text{angle ADC}) = \\tan (30\\text{ degrees}) = \\frac{1}{\\sqrt{3}} = \\frac{\\sqrt{3}}{3}" + }, + { + "bbox": [ + 238, + 502, + 370, + 529 + ], + "type": "text", + "content": ". \nWait, but let me double-check. If angle ADC is 30 degrees, then " + }, + { + "bbox": [ + 238, + 502, + 370, + 529 + ], + "type": "inline_equation", + "content": "\\tan (30)" + }, + { + "bbox": [ + 238, + 502, + 370, + 529 + ], + "type": "text", + "content": " is indeed " + }, + { + "bbox": [ + 238, + 502, + 370, + 529 + ], + "type": "inline_equation", + "content": "\\frac{\\sqrt{3}}{3}" + }, + { + "bbox": [ + 238, + 502, + 370, + 529 + ], + "type": "text", + "content": ". So the answer should be option C: " + }, + { + "bbox": [ + 238, + 502, + 370, + 529 + ], + "type": "inline_equation", + "content": "\\frac{\\sqrt{3}}{3}" + }, + { + "bbox": [ + 238, + 502, + 370, + 529 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 238, + 534, + 285, + 542 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 534, + 285, + 542 + ], + "spans": [ + { + "bbox": [ + 238, + 534, + 285, + 542 + ], + "type": "text", + "content": "The value of tan(angle ADC) is " + }, + { + "bbox": [ + 238, + 534, + 285, + 542 + ], + "type": "inline_equation", + "content": "\\frac{\\sqrt{3}}{3}" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 238, + 546, + 275, + 551 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 546, + 275, + 551 + ], + "spans": [ + { + "bbox": [ + 238, + 546, + 275, + 551 + ], + "type": "text", + "content": "Step-by-Step Explanation:" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 238, + 554, + 280, + 559 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 554, + 280, + 559 + ], + "spans": [ + { + "bbox": [ + 238, + 554, + 280, + 559 + ], + "type": "text", + "content": "1. Understanding the Problem:" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 238, + 562, + 337, + 578 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 562, + 337, + 578 + ], + "spans": [ + { + "bbox": [ + 238, + 562, + 337, + 578 + ], + "type": "text", + "content": "- We have a semicircle with diameter AB and center O.\n- Points C and D are on the semicircle, with D being the midpoint of arc BC.\n- Chords AC and BD intersect at point P, forming an angle DPB of 60 degrees.\n- We need to find tan " + }, + { + "bbox": [ + 238, + 562, + 337, + 578 + ], + "type": "inline_equation", + "content": "(\\angle ADC)" + }, + { + "bbox": [ + 238, + 562, + 337, + 578 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 238, + 582, + 369, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 582, + 369, + 594 + ], + "spans": [ + { + "bbox": [ + 238, + 582, + 369, + 594 + ], + "type": "text", + "content": "2. 
Key Properties and Theorems: - Intersecting Chords Theorem: The measure of an angle formed by two intersecting chords is half the sum of the measures of the intercepted arcs." + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 238, + 594, + 362, + 601 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 594, + 362, + 601 + ], + "spans": [ + { + "bbox": [ + 238, + 594, + 362, + 601 + ], + "type": "text", + "content": "- Midpoint of Arc: Since D is the midpoint of arc BC, arcs BD and DC are equal. \n- Inscribed Angle Theorem: An angle inscribed in a circle is half the measure of its intercepted arc." + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 238, + 605, + 318, + 636 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 605, + 318, + 636 + ], + "spans": [ + { + "bbox": [ + 238, + 605, + 318, + 636 + ], + "type": "text", + "content": "3. Applying the Intersecting Chords Theorem: \n- Angle DPB is formed by chords BD and AC, so \\(\\angle DPB = \\frac{1}{2}(\\text{arc } DB + \\text{arc } AC)\\) \n- Given \\(\\angle DPB = 60^{\\circ}\\), we have \n\\[ 60^{\\circ} = \\frac{1}{2} (\\text{arc } DB + \\text{arc } AC) \\] \n\\(\\text{arc } DB + \\text{arc } AC = 120^{\\circ} \\)" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 238, + 640, + 336, + 673 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 640, + 336, + 673 + ], + "spans": [ + { + "bbox": [ + 238, + 640, + 336, + 673 + ], + "type": "text", + "content": "4. Determining Arc Measures: Since D is the midpoint of arc BC, arcs BD and DC are equal. Let each be x. Thus, arc BC = 2x. The total semicircle is " + }, + { + "bbox": [ + 238, + 640, + 336, + 673 + ], + "type": "inline_equation", + "content": "180^{\\circ}" + }, + { + "bbox": [ + 238, + 640, + 336, + 673 + ], + "type": "text", + "content": ", so: arc AC = 180° - arc BC = 180° - 2x. Substituting into the earlier equation: x + (180° - 2x) = 120°, so x = 60°. Therefore, arcs BD and DC are each 60° and arc AC is 60°." 
+ } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 238, + 677, + 309, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 677, + 309, + 696 + ], + "spans": [ + { + "bbox": [ + 238, + 677, + 309, + 696 + ], + "type": "text", + "content": "5. Finding Angle ADC: \n- Angle ADC is an inscribed angle intercepting arc AC. \n- By the Inscribed Angle Theorem: \n" + }, + { + "bbox": [ + 238, + 677, + 309, + 696 + ], + "type": "inline_equation", + "content": "\\angle ADC = \\frac{1}{2} \\times \\text{arc AC} = \\frac{1}{2} \\times 60^{\\circ} = 30^{\\circ}" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 238, + 700, + 274, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 700, + 274, + 723 + ], + "spans": [ + { + "bbox": [ + 238, + 700, + 274, + 723 + ], + "type": "text", + "content": "6. Calculating the Tangent: " + }, + { + "bbox": [ + 238, + 700, + 274, + 723 + ], + "type": "inline_equation", + "content": "\\tan (30^{\\circ}) = \\frac{\\sqrt{3}}{3}" + }, + { + "bbox": [ + 238, + 700, + 274, + 723 + ], + "type": "text", + "content": " Answer " + }, + { + "bbox": [ + 238, + 700, + 274, + 723 + ], + "type": "inline_equation", + "content": "\\frac{\\sqrt{3}}{3}" + }, + { + "bbox": [ + 238, + 700, + 274, + 723 + ], + "type": "text", + "content": " (Option C)" + } + ] + } + ], + "index": 33 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "21" + } + ] + } + ], + "index": 35 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 20 + }, + { + "para_blocks": [ + { + "bbox": [ + 279, + 105, + 326, + 117 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 279, + 105, + 326, + 117 + ], + "spans": [ + { + "bbox": [ + 279, + 105, + 326, + 117 + ], + "type": "text", + "content": "Synthesis" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 191, + 146, + 230, + 156 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 191, + 146, + 230, + 156 + ], + "spans": [ + { + "bbox": [ + 191, + 146, + 230, + 156 + ], + "type": "text", + "content": "Input Image" + } + ] + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 265, + 122, + 336, + 177 + ], + "blocks": [ + { + "bbox": [ + 265, + 122, + 336, + 177 + ], + "lines": [ + { + "bbox": [ + 265, + 122, + 336, + 177 + ], + "spans": [ + { + "bbox": [ + 265, + 122, + 336, + 177 + ], + "type": "image", + "image_path": "a64478cfc978bb899db5a954ad49cef18edbb6ecc305169a7883515a6c0c57af.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 145, + 694, + 463, + 708 + ], + "lines": [ + { + "bbox": [ + 145, + 694, + 463, + 708 + ], + "spans": [ + { + "bbox": [ + 145, + 694, + 463, + 708 + ], + "type": "text", + "content": "Figure 15: A VL-Thinking sample from Math PUMA (subset Synthesis)." 
+ } + ] + } + ], + "index": 42, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 187, + 185, + 291, + 193 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 187, + 185, + 291, + 193 + ], + "spans": [ + { + "bbox": [ + 187, + 185, + 291, + 193 + ], + "type": "text", + "content": "- Question: What is the function's equation?" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 187, + 193, + 429, + 234 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 187, + 193, + 429, + 234 + ], + "spans": [ + { + "bbox": [ + 187, + 193, + 429, + 234 + ], + "type": "text", + "content": "- Original Answer: Step 1: Substitute the point " + }, + { + "bbox": [ + 187, + 193, + 429, + 234 + ], + "type": "inline_equation", + "content": "(\\frac{\\pi}{10},5)" + }, + { + "bbox": [ + 187, + 193, + 429, + 234 + ], + "type": "text", + "content": " into the function: " + }, + { + "bbox": [ + 187, + 193, + 429, + 234 + ], + "type": "inline_equation", + "content": "5 = a\\cdot \\sin (-5\\cdot \\frac{\\pi}{10})" + }, + { + "bbox": [ + 187, + 193, + 429, + 234 + ], + "type": "text", + "content": " . Step 2: Simplify the equation: " + }, + { + "bbox": [ + 187, + 193, + 429, + 234 + ], + "type": "inline_equation", + "content": "5 = a\\cdot \\sin (\\frac{-\\pi}{2})" + }, + { + "bbox": [ + 187, + 193, + 429, + 234 + ], + "type": "text", + "content": " . Step 3: Since " + }, + { + "bbox": [ + 187, + 193, + 429, + 234 + ], + "type": "inline_equation", + "content": "\\sin (\\frac{-\\pi}{2}) = -1" + }, + { + "bbox": [ + 187, + 193, + 429, + 234 + ], + "type": "text", + "content": " , we have: " + }, + { + "bbox": [ + 187, + 193, + 429, + 234 + ], + "type": "inline_equation", + "content": "5 = -a" + }, + { + "bbox": [ + 187, + 193, + 429, + 234 + ], + "type": "text", + "content": " . 
Step 4: Solve for " + }, + { + "bbox": [ + 187, + 193, + 429, + 234 + ], + "type": "inline_equation", + "content": "a:a = -5" + }, + { + "bbox": [ + 187, + 193, + 429, + 234 + ], + "type": "text", + "content": " Step 5: Substitute the value of a back into the original function: " + }, + { + "bbox": [ + 187, + 193, + 429, + 234 + ], + "type": "inline_equation", + "content": "y = -5\\cdot \\sin (-5\\cdot x)" + }, + { + "bbox": [ + 187, + 193, + 429, + 234 + ], + "type": "text", + "content": " . Answer: " + }, + { + "bbox": [ + 187, + 193, + 429, + 234 + ], + "type": "inline_equation", + "content": "y = -5\\cdot \\sin (-5\\cdot x)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 192, + 241, + 239, + 250 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 192, + 241, + 239, + 250 + ], + "spans": [ + { + "bbox": [ + 192, + 241, + 239, + 250 + ], + "type": "text", + "content": "VL-Thinking" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 189, + 255, + 399, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 189, + 255, + 399, + 277 + ], + "spans": [ + { + "bbox": [ + 189, + 255, + 399, + 277 + ], + "type": "text", + "content": " First, I recognize that the function given is " + }, + { + "bbox": [ + 189, + 255, + 399, + 277 + ], + "type": "inline_equation", + "content": "y = a\\cdot \\sin (-5x)" + }, + { + "bbox": [ + 189, + 255, + 399, + 277 + ], + "type": "text", + "content": " , and it passes through the point " + }, + { + "bbox": [ + 189, + 255, + 399, + 277 + ], + "type": "inline_equation", + "content": "\\left(\\frac{\\pi}{10},5\\right)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 190, + 277, + 358, + 285 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 277, + 358, + 285 + ], + "spans": [ + { + "bbox": [ + 190, + 277, + 358, + 285 + ], + "type": "text", + "content": "To find the value of " + }, + { + "bbox": [ + 190, + 277, + 358, + 285 + ], + "type": "inline_equation", + "content": 
"a" + }, + { + "bbox": [ + 190, + 277, + 358, + 285 + ], + "type": "text", + "content": ", I'll substitute the coordinates of the point into the equation." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 190, + 290, + 304, + 312 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 290, + 304, + 312 + ], + "spans": [ + { + "bbox": [ + 190, + 290, + 304, + 312 + ], + "type": "text", + "content": "Substituting " + }, + { + "bbox": [ + 190, + 290, + 304, + 312 + ], + "type": "inline_equation", + "content": "x = \\frac{\\pi}{10}" + }, + { + "bbox": [ + 190, + 290, + 304, + 312 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 190, + 290, + 304, + 312 + ], + "type": "inline_equation", + "content": "y = 5" + }, + { + "bbox": [ + 190, + 290, + 304, + 312 + ], + "type": "text", + "content": ", the equation becomes: " + }, + { + "bbox": [ + 190, + 290, + 304, + 312 + ], + "type": "inline_equation", + "content": "5 = a \\cdot \\sin \\left(-5 \\cdot \\frac{\\pi}{10}\\right)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 190, + 319, + 286, + 327 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 319, + 286, + 327 + ], + "spans": [ + { + "bbox": [ + 190, + 319, + 286, + 327 + ], + "type": "text", + "content": "Simplifying the argument of the sine function:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 190, + 327, + 224, + 338 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 327, + 224, + 338 + ], + "spans": [ + { + "bbox": [ + 190, + 327, + 224, + 338 + ], + "type": "interline_equation", + "content": "- 5 \\cdot \\frac {\\pi}{1 0} = - \\frac {\\pi}{2}", + "image_path": "8fd549e979e11ed75a0743f4e7e9932ff3105f22c1a70ae7985141d2a4fca457.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 190, + 346, + 241, + 352 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 346, + 241, + 352 + ], + "spans": [ + { + "bbox": [ + 190, + 346, + 
241, + 352 + ], + "type": "text", + "content": "So, the equation now is:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 190, + 354, + 229, + 365 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 354, + 229, + 365 + ], + "spans": [ + { + "bbox": [ + 190, + 354, + 229, + 365 + ], + "type": "interline_equation", + "content": "5 = a \\cdot \\sin \\left(- \\frac {\\pi}{2}\\right)", + "image_path": "a82b4589b35ec28c0cd172ec5f2c60dd54325fe2f1ac6d0a605b6a3550ea17dc.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 190, + 373, + 310, + 388 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 373, + 310, + 388 + ], + "spans": [ + { + "bbox": [ + 190, + 373, + 310, + 388 + ], + "type": "text", + "content": "I know that " + }, + { + "bbox": [ + 190, + 373, + 310, + 388 + ], + "type": "inline_equation", + "content": "\\sin \\left(-\\frac{\\pi}{2}\\right) = -1" + }, + { + "bbox": [ + 190, + 373, + 310, + 388 + ], + "type": "text", + "content": ", so substituting this value in: " + }, + { + "bbox": [ + 190, + 373, + 310, + 388 + ], + "type": "inline_equation", + "content": "5 = a\\cdot (-1)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 190, + 395, + 219, + 401 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 395, + 219, + 401 + ], + "spans": [ + { + "bbox": [ + 190, + 395, + 219, + 401 + ], + "type": "text", + "content": "Solving for " + }, + { + "bbox": [ + 190, + 395, + 219, + 401 + ], + "type": "inline_equation", + "content": "a" + }, + { + "bbox": [ + 190, + 395, + 219, + 401 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 190, + 402, + 217, + 408 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 402, + 217, + 408 + ], + "spans": [ + { + "bbox": [ + 190, + 402, + 217, + 408 + ], + "type": "interline_equation", + "content": "5 = a \\cdot (- 1)", + "image_path": 
"0f4e09494c318c7359cf1332a20efdadb11939e69093d589d0bbff0f4cfe23dd.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 190, + 416, + 326, + 423 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 416, + 326, + 423 + ], + "spans": [ + { + "bbox": [ + 190, + 416, + 326, + 423 + ], + "type": "text", + "content": "Finally, substituting " + }, + { + "bbox": [ + 190, + 416, + 326, + 423 + ], + "type": "inline_equation", + "content": "a = -5" + }, + { + "bbox": [ + 190, + 416, + 326, + 423 + ], + "type": "text", + "content": " back into the original function gives:" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 190, + 423, + 230, + 429 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 423, + 230, + 429 + ], + "spans": [ + { + "bbox": [ + 190, + 423, + 230, + 429 + ], + "type": "interline_equation", + "content": "y = - 5 \\cdot \\sin (- 5 x)", + "image_path": "865fe11a6d170b6bdc7b87a78b3772ab11eaf74bd327c719dd571b6e291090a2.jpg" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 190, + 430, + 210, + 436 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 430, + 210, + 436 + ], + "spans": [ + { + "bbox": [ + 190, + 430, + 210, + 436 + ], + "type": "interline_equation", + "content": "</think>", + "image_path": "5503476800119a465e6e9370d9dd1e8bbc73b614402e2743dda0bbac66d59b33.jpg" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 190, + 442, + 416, + 456 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 442, + 416, + 456 + ], + "spans": [ + { + "bbox": [ + 190, + 442, + 416, + 456 + ], + "type": "text", + "content": "To determine the equation of the function " + }, + { + "bbox": [ + 190, + 442, + 416, + 456 + ], + "type": "inline_equation", + "content": "y = a \\cdot \\sin(-5x)" + }, + { + "bbox": [ + 190, + 442, + 416, + 456 + ], + "type": "text", + "content": " that passes through the point " + }, + { + "bbox": [ + 190, + 
442, + 416, + 456 + ], + "type": "inline_equation", + "content": "\\left( \\frac{\\pi}{10}, 5 \\right)" + }, + { + "bbox": [ + 190, + 442, + 416, + 456 + ], + "type": "text", + "content": ", follow these steps:" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 190, + 464, + 280, + 470 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 464, + 280, + 470 + ], + "spans": [ + { + "bbox": [ + 190, + 464, + 280, + 470 + ], + "type": "text", + "content": "1. Substitute the Point into the Equation:" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 194, + 470, + 281, + 478 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 470, + 281, + 478 + ], + "spans": [ + { + "bbox": [ + 194, + 470, + 281, + 478 + ], + "type": "text", + "content": "Plug " + }, + { + "bbox": [ + 194, + 470, + 281, + 478 + ], + "type": "inline_equation", + "content": "x = \\frac{\\pi}{10}" + }, + { + "bbox": [ + 194, + 470, + 281, + 478 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 194, + 470, + 281, + 478 + ], + "type": "inline_equation", + "content": "y = 5" + }, + { + "bbox": [ + 194, + 470, + 281, + 478 + ], + "type": "text", + "content": " into the equation:" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 194, + 479, + 242, + 491 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 479, + 242, + 491 + ], + "spans": [ + { + "bbox": [ + 194, + 479, + 242, + 491 + ], + "type": "interline_equation", + "content": "5 = a \\cdot \\sin \\left(- 5 \\cdot \\frac {\\pi}{1 0}\\right)", + "image_path": "dd59ae7b4185026c2c834d34c2cf03718420107fd48be491a7e1210a462f3cb7.jpg" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 190, + 499, + 294, + 506 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 499, + 294, + 506 + ], + "spans": [ + { + "bbox": [ + 190, + 499, + 294, + 506 + ], + "type": "text", + "content": "2. 
Simplify the Argument of the Sine Function:" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 194, + 506, + 228, + 517 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 506, + 228, + 517 + ], + "spans": [ + { + "bbox": [ + 194, + 506, + 228, + 517 + ], + "type": "interline_equation", + "content": "- 5 \\cdot \\frac {\\pi}{1 0} = - \\frac {\\pi}{2}", + "image_path": "3608349c31ea56b6157a1e78973700e1497cf8d0f0aedbd6f16e5b2f03790f07.jpg" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 194, + 517, + 249, + 524 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 517, + 249, + 524 + ], + "spans": [ + { + "bbox": [ + 194, + 517, + 249, + 524 + ], + "type": "text", + "content": "So, the equation becomes:" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 194, + 525, + 233, + 538 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 525, + 233, + 538 + ], + "spans": [ + { + "bbox": [ + 194, + 525, + 233, + 538 + ], + "type": "interline_equation", + "content": "5 = a \\cdot \\sin \\left(- \\frac {\\pi}{2}\\right)", + "image_path": "7f4781094f3dc898eb5b70b1c22e810b63b3bd3e9ed406195b5fbd4f8a682819.jpg" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 190, + 546, + 257, + 552 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 546, + 257, + 552 + ], + "spans": [ + { + "bbox": [ + 190, + 546, + 257, + 552 + ], + "type": "text", + "content": "3. 
Evaluate the Sine Function:" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 194, + 554, + 231, + 567 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 554, + 231, + 567 + ], + "spans": [ + { + "bbox": [ + 194, + 554, + 231, + 567 + ], + "type": "interline_equation", + "content": "\\sin \\left(- \\frac {\\pi}{2}\\right) = - 1", + "image_path": "f30db0d64e4d6ced1aa0a277fc0c8af60ae67502949f09a48fdfe89e2eef9340.jpg" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 194, + 568, + 276, + 574 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 568, + 276, + 574 + ], + "spans": [ + { + "bbox": [ + 194, + 568, + 276, + 574 + ], + "type": "text", + "content": "Substituting this back into the equation:" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 194, + 575, + 220, + 582 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 575, + 220, + 582 + ], + "spans": [ + { + "bbox": [ + 194, + 575, + 220, + 582 + ], + "type": "interline_equation", + "content": "5 = a \\cdot (- 1)", + "image_path": "c4de4059c8484b4b947ff0b61d8e8130da558a160cdccf18e7c38e18ad35751b.jpg" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 190, + 589, + 221, + 595 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 589, + 221, + 595 + ], + "spans": [ + { + "bbox": [ + 190, + 589, + 221, + 595 + ], + "type": "text", + "content": "4. 
Solve for " + }, + { + "bbox": [ + 190, + 589, + 221, + 595 + ], + "type": "inline_equation", + "content": "a" + }, + { + "bbox": [ + 190, + 589, + 221, + 595 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 194, + 597, + 212, + 601 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 597, + 212, + 601 + ], + "spans": [ + { + "bbox": [ + 194, + 597, + 212, + 601 + ], + "type": "interline_equation", + "content": "a = - 5", + "image_path": "cca602ec43b18134f3175daecd42d1ae16cc43133bac303a8f4fa0ebe55ecdc5.jpg" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 190, + 609, + 253, + 616 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 609, + 253, + 616 + ], + "spans": [ + { + "bbox": [ + 190, + 609, + 253, + 616 + ], + "type": "text", + "content": "5. Write the Final Equation:" + } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 194, + 616, + 299, + 623 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 616, + 299, + 623 + ], + "spans": [ + { + "bbox": [ + 194, + 616, + 299, + 623 + ], + "type": "text", + "content": "Substitute " + }, + { + "bbox": [ + 194, + 616, + 299, + 623 + ], + "type": "inline_equation", + "content": "a = -5" + }, + { + "bbox": [ + 194, + 616, + 299, + 623 + ], + "type": "text", + "content": " back into the original function:" + } + ] + } + ], + "index": 35 + }, + { + "bbox": [ + 194, + 624, + 234, + 630 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 624, + 234, + 630 + ], + "spans": [ + { + "bbox": [ + 194, + 624, + 234, + 630 + ], + "type": "interline_equation", + "content": "y = - 5 \\cdot \\sin (- 5 x)", + "image_path": "fbcd3c1d1d768a476f90da661ac38e3a5c8b1c548ddc87d42e4ac32f3fdba4d1.jpg" + } + ] + } + ], + "index": 36 + }, + { + "bbox": [ + 190, + 637, + 280, + 643 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 637, + 280, + 643 + ], + 
"spans": [ + { + "bbox": [ + 190, + 637, + 280, + 643 + ], + "type": "text", + "content": "6. Simplify the Negative Sign (Optional):" + } + ] + } + ], + "index": 37 + }, + { + "bbox": [ + 194, + 644, + 323, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 644, + 323, + 651 + ], + "spans": [ + { + "bbox": [ + 194, + 644, + 323, + 651 + ], + "type": "text", + "content": "Since " + }, + { + "bbox": [ + 194, + 644, + 323, + 651 + ], + "type": "inline_equation", + "content": "\\sin (-\\theta) = -\\sin (\\theta)" + }, + { + "bbox": [ + 194, + 644, + 323, + 651 + ], + "type": "text", + "content": ", the equation can also be written as:" + } + ] + } + ], + "index": 38 + }, + { + "bbox": [ + 194, + 651, + 226, + 657 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 651, + 226, + 657 + ], + "spans": [ + { + "bbox": [ + 194, + 651, + 226, + 657 + ], + "type": "interline_equation", + "content": "y = 5 \\cdot \\sin (5 x)", + "image_path": "e8530a9c6da5af3e988a414742cd8587bc85f843c30bd1bceca356014eb617c9.jpg" + } + ] + } + ], + "index": 39 + }, + { + "bbox": [ + 190, + 665, + 222, + 670 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 665, + 222, + 670 + ], + "spans": [ + { + "bbox": [ + 190, + 665, + 222, + 670 + ], + "type": "text", + "content": "Final Answer:" + } + ] + } + ], + "index": 40 + }, + { + "bbox": [ + 190, + 671, + 228, + 678 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 671, + 228, + 678 + ], + "spans": [ + { + "bbox": [ + 190, + 671, + 228, + 678 + ], + "type": "interline_equation", + "content": "y = - 5 \\sin (- 5 x)", + "image_path": "f2d83561694c053e82b4b21f084a0f4b302530cd97be792b59857fb009114238.jpg" + } + ] + } + ], + "index": 41 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "spans": [ + { + 
"bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "22" + } + ] + } + ], + "index": 43 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 21 + }, + { + "para_blocks": [ + { + "bbox": [ + 260, + 140, + 367, + 156 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 260, + 140, + 367, + 156 + ], + "spans": [ + { + "bbox": [ + 260, + 140, + 367, + 156 + ], + "type": "text", + "content": "CLEVR-MATH" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 124, + 212, + 188, + 225 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 124, + 212, + 188, + 225 + ], + "spans": [ + { + "bbox": [ + 124, + 212, + 188, + 225 + ], + "type": "text", + "content": "Input Image" + } + ] + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 249, + 169, + 391, + 265 + ], + "blocks": [ + { + "bbox": [ + 249, + 169, + 391, + 265 + ], + "lines": [ + { + "bbox": [ + 249, + 169, + 391, + 265 + ], + "spans": [ + { + "bbox": [ + 249, + 169, + 391, + 265 + ], + "type": "image", + "image_path": "340653a937dc0ead4560d11a4ab4f1710147c29cbf57843a3411075515c269a9.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 115, + 278, + 346, + 312 + ], + "type": "list", + "angle": 0, + "index": 6, + "blocks": [ + { + "bbox": [ + 115, + 278, + 346, + 292 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 278, + 346, + 292 + ], + "spans": [ + { + "bbox": [ + 115, + 278, + 346, + 292 + ], + "type": "text", + "content": "- Question: Subtract all balls. How many objects are left?" 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 115, + 300, + 205, + 312 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 300, + 205, + 312 + ], + "spans": [ + { + "bbox": [ + 115, + 300, + 205, + 312 + ], + "type": "text", + "content": "- Original Answer: 3" + } + ] + } + ], + "index": 5 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 124, + 326, + 205, + 342 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 124, + 326, + 205, + 342 + ], + "spans": [ + { + "bbox": [ + 124, + 326, + 205, + 342 + ], + "type": "text", + "content": "VL-Thinking" + } + ] + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 304, + 345, + 323, + 363 + ], + "blocks": [ + { + "bbox": [ + 304, + 345, + 323, + 363 + ], + "lines": [ + { + "bbox": [ + 304, + 345, + 323, + 363 + ], + "spans": [ + { + "bbox": [ + 304, + 345, + 323, + 363 + ], + "type": "image", + "image_path": "99b557fcb93935764a42d2dfd1acfee3d1750c7453441b571a838cd1874752cb.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "bbox": [ + 126, + 364, + 500, + 410 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 126, + 364, + 500, + 410 + ], + "spans": [ + { + "bbox": [ + 126, + 364, + 500, + 410 + ], + "type": "text", + "content": " Okay, let's see. The user is asking to subtract all balls and determine how many objects are left. First, I need to figure out which objects are considered \"balls\" in the image. The image shows a large blue metal sphere. A sphere is a three-dimensional circle, which is essentially a ball shape. So the blue metal sphere is definitely a ball." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 126, + 422, + 503, + 467 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 126, + 422, + 503, + 467 + ], + "spans": [ + { + "bbox": [ + 126, + 422, + 503, + 467 + ], + "type": "text", + "content": "Next, the surrounding objects are two small red metal cylinders and a single small brown metal cylinder in the foreground. Cylinders are not balls; they have a different shape—circular bases and straight sides. So those would not be subtracted. The question says to subtract all balls, so only the sphere (ball) needs to be removed." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 126, + 479, + 496, + 514 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 126, + 479, + 496, + 514 + ], + "spans": [ + { + "bbox": [ + 126, + 479, + 496, + 514 + ], + "type": "text", + "content": "Originally, there's the sphere (1 object), two red cylinders (2 objects), and one brown cylinder (1 object). Total objects before subtraction: " + }, + { + "bbox": [ + 126, + 479, + 496, + 514 + ], + "type": "inline_equation", + "content": "1 + 2 + 1 = 4" + }, + { + "bbox": [ + 126, + 479, + 496, + 514 + ], + "type": "text", + "content": ". Subtracting the one ball leaves 3 objects: the two red cylinders and the brown cylinder." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 126, + 526, + 495, + 559 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 126, + 526, + 495, + 559 + ], + "spans": [ + { + "bbox": [ + 126, + 526, + 495, + 559 + ], + "type": "text", + "content": "Wait, but I should make sure there are no other balls mentioned. The image only refers to the sphere as a ball-like object. The cylinders are clearly not balls. So subtracting the sphere leaves three cylinders. The answer should be 3." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 127, + 562, + 160, + 571 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 127, + 562, + 160, + 571 + ], + "spans": [ + { + "bbox": [ + 127, + 562, + 160, + 571 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 126, + 584, + 509, + 619 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 126, + 584, + 509, + 619 + ], + "spans": [ + { + "bbox": [ + 126, + 584, + 509, + 619 + ], + "type": "text", + "content": "In the scene, there is one large blue sphere (a ball-shaped object) centrally positioned. The remaining objects consist of two small red cylinders flanking it symmetrically and a single brown cylinder in the foreground. Removing the sphere leaves only the three cylindrical objects visible." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 126, + 631, + 269, + 642 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 126, + 631, + 269, + 642 + ], + "spans": [ + { + "bbox": [ + 126, + 631, + 269, + 642 + ], + "type": "text", + "content": "Answer: There are 3 objects remaining." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 184, + 662, + 424, + 675 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 184, + 662, + 424, + 675 + ], + "spans": [ + { + "bbox": [ + 184, + 662, + 424, + 675 + ], + "type": "text", + "content": "Figure 16: A VL-Thinking sample from CLEVR-Math." + } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "23" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 22 + }, + { + "para_blocks": [ + { + "bbox": [ + 280, + 106, + 350, + 124 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 280, + 106, + 350, + 124 + ], + "spans": [ + { + "bbox": [ + 280, + 106, + 350, + 124 + ], + "type": "text", + "content": "ArxivQA" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 123, + 178, + 188, + 193 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 123, + 178, + 188, + 193 + ], + "spans": [ + { + "bbox": [ + 123, + 178, + 188, + 193 + ], + "type": "text", + "content": "Input Image" + } + ] + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 237, + 132, + 309, + 182 + ], + "blocks": [ + { + "bbox": [ + 237, + 132, + 309, + 182 + ], + "lines": [ + { + "bbox": [ + 237, + 132, + 309, + 182 + ], + "spans": [ + { + "bbox": [ + 237, + 132, + 309, + 182 + ], + "type": "image", + "image_path": "ac27de6313219f66011f604cca1fbb37b6ebe2995f7f8f762b75a65cfd3adbd7.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 194, + 694, + 415, + 708 + ], + "lines": [ + { + "bbox": [ + 194, + 694, + 415, + 708 + ], + "spans": [ + { + "bbox": [ + 194, + 694, + 415, + 708 + ], + "type": "text", + "content": "Figure 17: A VL-Thinking sample from ArxivQA." 
+ } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 310, + 133, + 386, + 184 + ], + "blocks": [ + { + "bbox": [ + 310, + 133, + 386, + 184 + ], + "lines": [ + { + "bbox": [ + 310, + 133, + 386, + 184 + ], + "spans": [ + { + "bbox": [ + 310, + 133, + 386, + 184 + ], + "type": "image", + "image_path": "a080b9abdd9635b279c2f9958d9e719318132e3687db69f648d63d71dfbedcce.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 237, + 182, + 309, + 236 + ], + "blocks": [ + { + "bbox": [ + 237, + 182, + 309, + 236 + ], + "lines": [ + { + "bbox": [ + 237, + 182, + 309, + 236 + ], + "spans": [ + { + "bbox": [ + 237, + 182, + 309, + 236 + ], + "type": "image", + "image_path": "7fc5a1a9301c5fb6736f949201bff0949fb47c17a570ba2e917f554066c12df9.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 310, + 183, + 386, + 235 + ], + "blocks": [ + { + "bbox": [ + 310, + 183, + 386, + 235 + ], + "lines": [ + { + "bbox": [ + 310, + 183, + 386, + 235 + ], + "spans": [ + { + "bbox": [ + 310, + 183, + 386, + 235 + ], + "type": "image", + "image_path": "9e55ad6c0189d171fe14cb8f0afa04c5b85841a7ad13a105b4a7d1bd364279e7.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "bbox": [ + 113, + 245, + 512, + 327 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 113, + 245, + 512, + 309 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 113, + 245, + 512, + 309 + ], + "spans": [ + { + "bbox": [ + 113, + 245, + 512, + 309 + ], + "type": "text", + "content": "- Question: In figure (b), how does the repetition rate affect the ratio of 9th to 11th harmonic intensity for the " + }, + { + "bbox": [ + 113, + 245, + 512, + 309 + ], + "type": "inline_equation", + 
"content": "1030\\mathrm{nm}" + }, + { + "bbox": [ + 113, + 245, + 512, + 309 + ], + "type": "text", + "content": " wavelength? Choose from the following options A) The ratio increases as the repetition rate increases. B) The ratio decreases as the repetition rate increases. C) The ratio remains constant as the repetition rate increases. D) The ratio fluctuates unpredictably with changes in repetition rate." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 115, + 316, + 207, + 327 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 316, + 207, + 327 + ], + "spans": [ + { + "bbox": [ + 115, + 316, + 207, + 327 + ], + "type": "text", + "content": "- Original Answer: B" + } + ] + } + ], + "index": 8 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 123, + 342, + 205, + 358 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 123, + 342, + 205, + 358 + ], + "spans": [ + { + "bbox": [ + 123, + 342, + 205, + 358 + ], + "type": "text", + "content": "VL-Thinking" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 126, + 369, + 157, + 378 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 126, + 369, + 157, + 378 + ], + "spans": [ + { + "bbox": [ + 126, + 369, + 157, + 378 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 306, + 361, + 324, + 380 + ], + "blocks": [ + { + "bbox": [ + 306, + 361, + 324, + 380 + ], + "lines": [ + { + "bbox": [ + 306, + 361, + 324, + 380 + ], + "spans": [ + { + "bbox": [ + 306, + 361, + 324, + 380 + ], + "type": "image", + "image_path": "dfb1811e4962ba180b1a579130806e4d2ebb33f66e4e12967512ea3d7b4a7610.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + } + ], + "index": 12 + }, + { + "bbox": [ + 125, + 380, + 514, + 403 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 125, + 380, + 514, + 403 + ], + "spans": [ + { + "bbox": [ + 125, + 380, + 514, + 403 + ], + "type": "text", + 
"content": "Okay, so I'm looking at figure (b) here, and I need to figure out how the repetition rate affects the ratio of 9th to 11th harmonic intensity for the " + }, + { + "bbox": [ + 125, + 380, + 514, + 403 + ], + "type": "inline_equation", + "content": "1030\\mathrm{nm}" + }, + { + "bbox": [ + 125, + 380, + 514, + 403 + ], + "type": "text", + "content": " wavelength. Let me start by understanding what's in the image." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 124, + 415, + 512, + 462 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 124, + 415, + 512, + 462 + ], + "spans": [ + { + "bbox": [ + 124, + 415, + 512, + 462 + ], + "type": "text", + "content": "First, the image is a scatter plot with the x-axis being the repetition rate in kHz and the y-axis being the ratio of 9th to 7th and 11th harmonic intensity on a logarithmic scale. There are two sets of data points: red dots and blue squares. The red dots are probably representing the " + }, + { + "bbox": [ + 124, + 415, + 512, + 462 + ], + "type": "inline_equation", + "content": "1030\\mathrm{nm}" + }, + { + "bbox": [ + 124, + 415, + 512, + 462 + ], + "type": "text", + "content": " wavelength since in panel (a), the red curve was labeled as " + }, + { + "bbox": [ + 124, + 415, + 512, + 462 + ], + "type": "inline_equation", + "content": "1030\\mathrm{nm}" + }, + { + "bbox": [ + 124, + 415, + 512, + 462 + ], + "type": "text", + "content": ". The blue squares would then correspond to the " + }, + { + "bbox": [ + 124, + 415, + 512, + 462 + ], + "type": "inline_equation", + "content": "515\\mathrm{nm}" + }, + { + "bbox": [ + 124, + 415, + 512, + 462 + ], + "type": "text", + "content": " wavelength." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 124, + 474, + 510, + 509 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 124, + 474, + 510, + 509 + ], + "spans": [ + { + "bbox": [ + 124, + 474, + 510, + 509 + ], + "type": "text", + "content": "The question specifically asks about the " + }, + { + "bbox": [ + 124, + 474, + 510, + 509 + ], + "type": "inline_equation", + "content": "1030\\mathrm{nm}" + }, + { + "bbox": [ + 124, + 474, + 510, + 509 + ], + "type": "text", + "content": " wavelength, so I'll focus on the red dots. Looking at the scatter plot, as the repetition rate increases from left to right on the x-axis, I need to see what happens to the y-axis values for the red dots." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 124, + 521, + 516, + 568 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 124, + 521, + 516, + 568 + ], + "spans": [ + { + "bbox": [ + 124, + 521, + 516, + 568 + ], + "type": "text", + "content": "From what I can see, as the repetition rate goes up, the red dots trend downward. That means the ratio of 9th to 11th harmonic intensity is decreasing. There's no indication of an increase or a constant ratio; instead, it's a clear downward trend. The red dots don't seem to fluctuate unpredictably either; they follow a decreasing pattern as repetition rate increases." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 125, + 580, + 499, + 603 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 125, + 580, + 499, + 603 + ], + "spans": [ + { + "bbox": [ + 125, + 580, + 499, + 603 + ], + "type": "text", + "content": "So, putting it all together, the ratio decreases as the repetition rate increases for the " + }, + { + "bbox": [ + 125, + 580, + 499, + 603 + ], + "type": "inline_equation", + "content": "1030\\mathrm{nm}" + }, + { + "bbox": [ + 125, + 580, + 499, + 603 + ], + "type": "text", + "content": " wavelength. That would correspond to option B." 
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 126, + 604, + 160, + 614 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 126, + 604, + 160, + 614 + ], + "spans": [ + { + "bbox": [ + 126, + 604, + 160, + 614 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 124, + 627, + 504, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 124, + 627, + 504, + 651 + ], + "spans": [ + { + "bbox": [ + 124, + 627, + 504, + 651 + ], + "type": "text", + "content": "The image shows that the ratio of 9th to 11th harmonic intensity for the " + }, + { + "bbox": [ + 124, + 627, + 504, + 651 + ], + "type": "inline_equation", + "content": "1030\\mathrm{nm}" + }, + { + "bbox": [ + 124, + 627, + 504, + 651 + ], + "type": "text", + "content": " wavelength decreases as the repetition rate increases." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 125, + 662, + 366, + 673 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 125, + 662, + 366, + 673 + ], + "spans": [ + { + "bbox": [ + 125, + 662, + 366, + 673 + ], + "type": "text", + "content": "Answer: B) The ratio decreases as the repetition rate increases." + } + ] + } + ], + "index": 20 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "24" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 23 + }, + { + "para_blocks": [ + { + "bbox": [ + 244, + 105, + 354, + 121 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 244, + 105, + 354, + 121 + ], + "spans": [ + { + "bbox": [ + 244, + 105, + 354, + 121 + ], + "type": "text", + "content": "ALLaVA- LAION" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 124, + 171, + 183, + 184 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 124, + 171, + 183, + 184 + ], + "spans": [ + { + "bbox": [ + 124, + 171, + 183, + 184 + ], + "type": "text", + "content": "Input Image" + } + ] + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 228, + 130, + 369, + 222 + ], + "blocks": [ + { + "bbox": [ + 228, + 130, + 369, + 222 + ], + "lines": [ + { + "bbox": [ + 228, + 130, + 369, + 222 + ], + "spans": [ + { + "bbox": [ + 228, + 130, + 369, + 222 + ], + "type": "image", + "image_path": "88060b6adab1bfd2500dd401ffcb4c030d7316e98a9a3c6dcf73ab1d950d173b.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 177, + 694, + 432, + 708 + ], + "lines": [ + { + "bbox": [ + 177, + 694, + 432, + 708 + ], + "spans": [ + { + "bbox": [ + 177, + 694, + 432, + 708 + ], + "type": "text", + "content": "Figure 18: A VL-Thinking sample from ALLaVA-LAION." 
+ } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 116, + 232, + 497, + 334 + ], + "type": "list", + "angle": 0, + "index": 6, + "blocks": [ + { + "bbox": [ + 116, + 232, + 455, + 255 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 116, + 232, + 455, + 255 + ], + "spans": [ + { + "bbox": [ + 116, + 232, + 455, + 255 + ], + "type": "text", + "content": "- Question: Considering the proximity of Preston Store to other landmarks, what could be its significance in the local area?" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 116, + 259, + 497, + 334 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 116, + 259, + 497, + 334 + ], + "spans": [ + { + "bbox": [ + 116, + 259, + 497, + 334 + ], + "type": "text", + "content": "- Original Answer: The Preston Store's significance in the local area, as suggested by the map, could be quite central given its prominent placement and the fact that it has a dedicated Facebook page marker. It appears to be situated near major roads and commercial entities like KFC, Harvey Norman, and Dan Murphy's, which indicates it is likely a well-known establishment, possibly a retail store or a social venue, that serves as a local landmark or gathering place. The specific type of store cannot be determined from the map alone, but its central location among various retailers and close to a park suggests it may cater to a wide range of customers and possibly holds a key place in the local community." 
+ } + ] + } + ], + "index": 5 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 124, + 350, + 198, + 365 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 124, + 350, + 198, + 365 + ], + "spans": [ + { + "bbox": [ + 124, + 350, + 198, + 365 + ], + "type": "text", + "content": "VL-Thinking" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 375, + 159, + 384 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 375, + 159, + 384 + ], + "spans": [ + { + "bbox": [ + 130, + 375, + 159, + 384 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 290, + 369, + 308, + 385 + ], + "blocks": [ + { + "bbox": [ + 290, + 369, + 308, + 385 + ], + "lines": [ + { + "bbox": [ + 290, + 369, + 308, + 385 + ], + "spans": [ + { + "bbox": [ + 290, + 369, + 308, + 385 + ], + "type": "image", + "image_path": "e394db943aede4eb0b172aeb2ddd198d9ca1fc5f239d8884dd1b66af83aecad0.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 129, + 386, + 460, + 418 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 386, + 460, + 418 + ], + "spans": [ + { + "bbox": [ + 129, + 386, + 460, + 418 + ], + "type": "text", + "content": "Okay, let's see. The user is asking about the significance of Preston Store in the local area based on its proximity to other landmarks. First, I need to visualize the map shown. The Preston Store is labeled prominently with a red marker and a Facebook 'f' logo, so it's probably a key location." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 129, + 429, + 475, + 472 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 429, + 475, + 472 + ], + "spans": [ + { + "bbox": [ + 129, + 429, + 475, + 472 + ], + "type": "text", + "content": "Looking at the nearby landmarks, there's KFC, Harvey Norman, Dan Murphy's, and Max Kirwan Mazda. 
Those are all big-name retailers and services. Then there's Target, Myer Northland, and Anaconda Preston. These are major stores, which suggests the area is a commercial hub. The store being near these places means it's likely in a busy shopping district, making it a central spot for locals." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 129, + 482, + 471, + 556 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 482, + 471, + 556 + ], + "spans": [ + { + "bbox": [ + 129, + 482, + 471, + 556 + ], + "type": "text", + "content": "Also, there's TW Blake Park and Darebin Creek Forest Park nearby. Green areas mean that Preston Store might be a place people visit before or after outdoor activities. Maybe it's a convenience store or a community spot where people gather. The mix of retail, dining, and parks indicates it's a multifunctional area, so Preston Store could serve both daily shopping needs and act as a meeting point. The Facebook marker suggests they have an online presence, maybe promoting local events or services, enhancing community engagement. Since it's near major roads like Bell St and Murray Rd, accessibility is good, increasing its importance as a central landmark." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 130, + 558, + 161, + 566 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 558, + 161, + 566 + ], + "spans": [ + { + "bbox": [ + 130, + 558, + 161, + 566 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 129, + 578, + 483, + 674 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 578, + 483, + 674 + ], + "spans": [ + { + "bbox": [ + 129, + 578, + 483, + 674 + ], + "type": "text", + "content": "Preston Store appears centrally positioned within a bustling commercial and recreational hub, making it a likely focal point for local activity. 
Its prominent placement near major retailers like Target, Myer Northland, and Harvey Norman suggests it could serve as a practical anchor for shoppers—perhaps offering essentials or services that complement these larger stores. The proximity to fast food (KFC) and leisure destinations (Darebin Creek Forest Park, TW Blake Park) positions it as a potential pitstop for families or individuals transitioning between errands, dining, and outdoor activities. The red Facebook marker implies community engagement, possibly acting as an information node or local business with social media visibility. Its accessibility via intersecting roads like Bell St and Murray Rd further underscores its role as a convenient landmark in the area's daily flow." + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 212, + 38 + ], + "type": "text", + "content": "Preprint. Under review." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "25" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 24 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file