{ "pdf_info": [ { "para_blocks": [ { "bbox": [ 98, 72, 495, 106 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 98, 72, 495, 106 ], "spans": [ { "bbox": [ 98, 72, 495, 106 ], "type": "text", "content": "100-LongBench: Are de facto Long-Context Benchmarks Literally Evaluating Long-Context Ability?" } ] } ], "index": 0 }, { "bbox": [ 112, 115, 483, 142 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 112, 115, 483, 142 ], "spans": [ { "bbox": [ 112, 115, 483, 142 ], "type": "text", "content": "Wang Yang" }, { "bbox": [ 112, 115, 483, 142 ], "type": "inline_equation", "content": "^{1}" }, { "bbox": [ 112, 115, 483, 142 ], "type": "text", "content": ", Hongye Jin" }, { "bbox": [ 112, 115, 483, 142 ], "type": "inline_equation", "content": "^{2}" }, { "bbox": [ 112, 115, 483, 142 ], "type": "text", "content": ", Shaochen Zhong" }, { "bbox": [ 112, 115, 483, 142 ], "type": "inline_equation", "content": "^{3}" }, { "bbox": [ 112, 115, 483, 142 ], "type": "text", "content": ", Song Jiang" }, { "bbox": [ 112, 115, 483, 142 ], "type": "inline_equation", "content": "^{4}" }, { "bbox": [ 112, 115, 483, 142 ], "type": "text", "content": ", Qifan Wang" }, { "bbox": [ 112, 115, 483, 142 ], "type": "inline_equation", "content": "^{5}" }, { "bbox": [ 112, 115, 483, 142 ], "type": "text", "content": ", Vipin Chaudhary" }, { "bbox": [ 112, 115, 483, 142 ], "type": "inline_equation", "content": "^{1}" }, { "bbox": [ 112, 115, 483, 142 ], "type": "text", "content": ", Xiaotian Han" }, { "bbox": [ 112, 115, 483, 142 ], "type": "inline_equation", "content": "^{1}" } ] } ], "index": 1 }, { "bbox": [ 111, 143, 480, 171 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 111, 143, 480, 171 ], "spans": [ { "bbox": [ 111, 143, 480, 171 ], "type": "inline_equation", "content": "^{1}" }, { "bbox": [ 111, 143, 480, 171 ], "type": "text", "content": "Case Western Reserve University " }, { "bbox": [ 111, 143, 480, 171 ], "type": "inline_equation", "content": "^{2}" }, { "bbox": [ 111, 
143, 480, 171 ], "type": "text", "content": "Texas A&M University " }, { "bbox": [ 111, 143, 480, 171 ], "type": "inline_equation", "content": "^{3}" }, { "bbox": [ 111, 143, 480, 171 ], "type": "text", "content": "Rice University " }, { "bbox": [ 111, 143, 480, 171 ], "type": "inline_equation", "content": "^{4}" }, { "bbox": [ 111, 143, 480, 171 ], "type": "text", "content": "University of California, Los Angeles " }, { "bbox": [ 111, 143, 480, 171 ], "type": "inline_equation", "content": "^{5}" }, { "bbox": [ 111, 143, 480, 171 ], "type": "text", "content": "Meta" } ] } ], "index": 2 }, { "bbox": [ 110, 172, 477, 199 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 110, 172, 477, 199 ], "spans": [ { "bbox": [ 110, 172, 477, 199 ], "type": "text", "content": "{wxy320,vipin,xhan}@case.edu, jhy0410@amu.edu, hz88@rice.edu \nsongjiang@ucla.edu, wqfcr@meta.com" } ] } ], "index": 3 }, { "bbox": [ 155, 219, 202, 232 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 155, 219, 202, 232 ], "spans": [ { "bbox": [ 155, 219, 202, 232 ], "type": "text", "content": "Abstract" } ] } ], "index": 4 }, { "bbox": [ 86, 243, 274, 663 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 86, 243, 274, 663 ], "spans": [ { "bbox": [ 86, 243, 274, 663 ], "type": "text", "content": "Long-context capability is considered one of the most important abilities of LLMs, as a truly long context-capable LLM shall enable its users to effortlessly process many originally exhausting tasks — e.g., digesting a long-form document to find answers v.s., directly asking an LLM about it. However, existing real-task-based long-context evaluation benchmarks have a few major shortcomings. For instance, some Needle-in-a-Haystack-like benchmarks are too synthetic, and therefore do not represent the real world usage of LLMs. 
While some real-task-based benchmarks like LongBench avoid this problem, such benchmarks are often formed in a way where each data sample has a fixed sequence length, which not only makes them suitable only for models with a certain range of context windows, but also provides no proxy for knowing at what length the model/method-of-interest would fail. Lastly, most benchmarks tend not to provide proper metrics to separate long-context performance from the model's baseline ability, so when conducting a cross-model/recipe comparison, such conflation makes the user unable to understand how exactly one model or recipe excels at the long-context task in relation to its baseline ability. To address these issues, we introduce a length-controllable, real-life reflective benchmark with a novel metric that disentangles baseline knowledge from long-context capabilities. Experiments demonstrate the superiority of our datasets in effectively evaluating LLMs. All assets are available at https://github.com/uservan/100-LongBench.git." } ] } ], "index": 5 }, { "bbox": [ 68, 685, 154, 698 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 68, 685, 154, 698 ], "spans": [ { "bbox": [ 68, 685, 154, 698 ], "type": "text", "content": "1 Introduction" } ] } ], "index": 6 }, { "bbox": [ 67, 708, 291, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 708, 291, 775 ], "spans": [ { "bbox": [ 67, 708, 291, 775 ], "type": "text", "content": "The long-context capability has become one of the fundamental competencies (Gao et al., 2024; Liu et al., 2024b; Li et al., 2024; Agarwal et al., 2024) of contemporary large language models (LLMs) because it takes the average human critical" } ] } ], "index": 7 }, { "bbox": [ 302, 217, 526, 337 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 217, 526, 337 ], "spans": [ { "bbox": [ 302, 217, 526, 337 ], "type": "text", "content": "Table 1: Models' ranking on RULER (Hsieh et al., 2024) with different metrics. 
Base Ability represents the model's score within " }, { "bbox": [ 302, 217, 526, 337 ], "type": "inline_equation", "content": "4k" }, { "bbox": [ 302, 217, 526, 337 ], "type": "text", "content": " context. Old/Proposed Metric represents the average score across various lengths using the traditional metric/our proposed metric. " }, { "bbox": [ 302, 217, 526, 337 ], "type": "inline_equation", "content": "96.5_{(1)}" }, { "bbox": [ 302, 217, 526, 337 ], "type": "text", "content": " indicates a score of 96.5 with a rank of 1. More details are in Table 5. Comparing the rankings under the Old Metric and the Proposed Metric reveals that the rankings under the old metric are heavily influenced by the model's inherent abilities, and thus might not truly reflect long-context ability." } ] } ], "index": 8 }, { "type": "table", "bbox": [ 305, 338, 523, 400 ], "blocks": [ { "bbox": [ 305, 338, 523, 400 ], "lines": [ { "bbox": [ 305, 338, 523, 400 ], "spans": [ { "bbox": [ 305, 338, 523, 400 ], "type": "table", "html": "
<table><tr><td>Model (size, length)</td><td>Base Ability</td><td>Old Metric</td><td>Proposed Metric</td></tr>
<tr><td>Llama3.1 (70B, 128K)</td><td>96.5 (1)</td><td>88.2 (1)</td><td>-8.6 (2)</td></tr>
<tr><td>Yi (34B, 200K) (Young et al., 2024)</td><td>93.3 (2)</td><td>86.3 (2)</td><td>-7.5 (1)</td></tr>
<tr><td>Phi3-medium (14B, 128K)</td><td>93.3 (3)</td><td>79.1 (3)</td><td>-15.1 (4)</td></tr>
<tr><td>LWM (7B, 1M) (Liu et al., 2024a)</td><td>82.3 (4)</td><td>70.8 (4)</td><td>-13.9 (3)</td></tr></table>
", "image_path": "b71cf26f456f17b6ee3be17d2075dca4a301c42821c44c6c1fb6b4d879dc0efd.jpg" } ] } ], "index": 9, "angle": 0, "type": "table_body" } ], "index": 9 }, { "bbox": [ 302, 404, 526, 539 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 404, 526, 539 ], "spans": [ { "bbox": [ 302, 404, 526, 539 ], "type": "text", "content": "time and effort to digest long-form context, making a long-context-capable LLM beyond desirable. To assess the long-context capabilities of LLMs, various evaluation benchmarks and metrics have been proposed, including LongBench (Bai et al., 2023), L-Eval (An et al., 2023), NIAH (Needle in the Haystack), RULER (Hsieh et al., 2024), AdaLEval (Wang et al., 2024) and Loogle (Li et al., 2023a). However, these benchmarks often exhibit at least one of the following three shortcomings:" } ] } ], "index": 10 }, { "bbox": [ 302, 542, 526, 691 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 542, 526, 691 ], "spans": [ { "bbox": [ 302, 542, 526, 691 ], "type": "text", "content": "(1) They rely on purely synthetically-constructed content that is not real-life reflective. Synthetic benchmarks such as NIAH or Passkey Retrieval often demand the retrieval of a source (e.g., a string of digits or a phrase) that bears no semantic or task relevance to the padding content (e.g., unrelated blog posts). This kind of highly artificial task does not properly reflect how LLMs are utilized in typical real-world settings, where information of similar nature is often joined together for a reader to understand and digest." } ] } ], "index": 11 }, { "bbox": [ 302, 694, 525, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 694, 525, 775 ], "spans": [ { "bbox": [ 302, 694, 525, 775 ], "type": "text", "content": "(2) They adopt a fixed input length per data sample, making them suitable only for certain LLMs with compatible context windows. 
This is a major problem because context windows have grown significantly, thanks to the development of context extension techniques and post-training" } ] } ], "index": 12 } ], "discarded_blocks": [ { "bbox": [ 283, 780, 312, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 283, 780, 312, 791 ], "spans": [ { "bbox": [ 283, 780, 312, 791 ], "type": "text", "content": "17560" } ] } ], "index": 13 }, { "bbox": [ 131, 795, 463, 818 ], "type": "footer", "angle": 0, "lines": [ { "bbox": [ 131, 795, 463, 818 ], "spans": [ { "bbox": [ 131, 795, 463, 818 ], "type": "text", "content": "Findings of the Association for Computational Linguistics: ACL 2025, pages 17560-17576 July 27 - August 1, 2025 ©2025 Association for Computational Linguistics" } ] } ], "index": 14 } ], "page_size": [ 595, 841 ], "page_idx": 0 }, { "para_blocks": [ { "type": "image", "bbox": [ 71, 69, 221, 145 ], "blocks": [ { "bbox": [ 71, 69, 221, 145 ], "lines": [ { "bbox": [ 71, 69, 221, 145 ], "spans": [ { "bbox": [ 71, 69, 221, 145 ], "type": "image", "image_path": "7f32438245991c359b14b835c7669d673c0edde0f61a848744c7801155b0e058.jpg" } ] } ], "index": 0, "angle": 0, "type": "image_body" }, { "bbox": [ 67, 147, 526, 208 ], "lines": [ { "bbox": [ 67, 147, 526, 208 ], "spans": [ { "bbox": [ 67, 147, 526, 208 ], "type": "text", "content": "Figure 1: Illustration of LM-Infinite (Han et al., 2024), a long-context enhancement method's performances on three LongBench tasks. The colored dashed lines represent the average score of each model on the corresponding task. The size of the markers corresponds to the proportion of each text length within the entire dataset. The larger the marker, the higher the proportion. The results exhibit significant variation across tasks of different lengths within the same dataset. More results of other methods are in Appendix A.1." 
} ] } ], "index": 3, "angle": 0, "type": "image_caption" } ], "index": 0 }, { "type": "image", "bbox": [ 225, 69, 373, 145 ], "blocks": [ { "bbox": [ 225, 69, 373, 145 ], "lines": [ { "bbox": [ 225, 69, 373, 145 ], "spans": [ { "bbox": [ 225, 69, 373, 145 ], "type": "image", "image_path": "a85f8d3840bdeb41ba122e8fbcc5c652d78b9bc8e24a83e10fe3f5538ae75165.jpg" } ] } ], "index": 1, "angle": 0, "type": "image_body" } ], "index": 1 }, { "type": "image", "bbox": [ 376, 69, 523, 145 ], "blocks": [ { "bbox": [ 376, 69, 523, 145 ], "lines": [ { "bbox": [ 376, 69, 523, 145 ], "spans": [ { "bbox": [ 376, 69, 523, 145 ], "type": "image", "image_path": "d8074ac3aa71de5422257bf1181e83c08eba2fa2a644506ca746947a6be9c989.jpg" } ] } ], "index": 2, "angle": 0, "type": "image_body" } ], "index": 2 }, { "bbox": [ 69, 228, 291, 513 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 69, 228, 291, 513 ], "spans": [ { "bbox": [ 69, 228, 291, 513 ], "type": "text", "content": "recipes. With Llama 3.1 (Dubey et al., 2024) claiming a context window of " }, { "bbox": [ 69, 228, 291, 513 ], "type": "inline_equation", "content": "128\\mathrm{k}" }, { "bbox": [ 69, 228, 291, 513 ], "type": "text", "content": " (in contrast to the " }, { "bbox": [ 69, 228, 291, 513 ], "type": "inline_equation", "content": "4\\mathrm{k}" }, { "bbox": [ 69, 228, 291, 513 ], "type": "text", "content": " context of Llama 2), many once \"long-context\" datasets have already become outdated. It is therefore foreseeable that many evaluations we see today will no longer be reflective as time passes. Moreover, having different lengths per individual data sample makes the evaluation reading unintuitive in several ways: E.g., for model evaluation, it is hard to tell at what length it would fail or prevail, because we only get the aggregated reading upon questions of different lengths. 
For method evaluation, many constant-budget compression works like StreamingLLM (Xiao et al., 2023a) and InfLLM (Xiao et al., 2024) apply an arbitrarily set constant budget to all inputs, ignoring the fact that this budget may exceed the length of certain data samples. As a result, the reported \"compressed performance\" often turns into an unknown mixture of both compressed and uncompressed results, complicating the transparency of assessments." } ] } ], "index": 4 }, { "bbox": [ 67, 518, 291, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 518, 291, 775 ], "spans": [ { "bbox": [ 67, 518, 291, 775 ], "type": "text", "content": "(3) They do not address the conflation between base ability and long-context capability, as these benchmarks evaluate long-context capabilities solely based on long-context tasks without isolating the influence of a model's baseline abilities. Thus, some readings can be tricky to digest when factors cannot be perfectly ablated. For instance, suppose we have two different base models, each of which has undergone its own continuous-pretraining recipe for context extension (e.g., Llama and Qwen); which extension recipe is likely better? Applying both recipes to the same base model for direct comparison is often impractical due to compute and dataset resource limitations. Naturally, one avenue of evaluation is to measure long-context performance relative to short-context performance for an educated understanding, but such measurements are largely missing in most existing long-context benchmarks."
} ] } ], "index": 5 }, { "bbox": [ 302, 229, 526, 417 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 229, 526, 417 ], "spans": [ { "bbox": [ 302, 229, 526, 417 ], "type": "text", "content": "In this work, we attempt to alleviate such problems by providing a new benchmark involving a rich set of length-controllable real-life-reflective tasks — " }, { "bbox": [ 302, 229, 526, 417 ], "type": "inline_equation", "content": "100" }, { "bbox": [ 302, 229, 526, 417 ], "type": "text", "content": "-LongBench — and a new evaluation metric — LongScore — which leads to significant shifts in model rankings compared to traditional performance-based evaluations, as shown in Table 1. We first validate the reliability of the proposed " }, { "bbox": [ 302, 229, 526, 417 ], "type": "inline_equation", "content": "100" }, { "bbox": [ 302, 229, 526, 417 ], "type": "text", "content": "-LongBench and the effectiveness of LongScore. We then comprehensively benchmark various open-source models, providing fresh insights into long-context evaluation and offering a more accurate assessment that better reflects models' true abilities to handle extended contexts." } ] } ], "index": 6 }, { "bbox": [ 302, 428, 515, 455 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 428, 515, 455 ], "spans": [ { "bbox": [ 302, 428, 515, 455 ], "type": "text", "content": "2 Motivation: why do we need to refine long-context benchmarks?" } ] } ], "index": 7 }, { "bbox": [ 302, 464, 526, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 464, 526, 775 ], "spans": [ { "bbox": [ 302, 464, 526, 775 ], "type": "text", "content": "Performance variance across task lengths Evidenced by Figure 1, the performance of LM-Infinite exhibits significant variation across tasks of different lengths within the same dataset. Many longcontext datasets have uneven length distributions, introducing biases in evaluating a model's longcontext capability. 
To validate this hypothesis, we train models using five different long-context enhancement methods and evaluate their performances across varying lengths on the LongBench dataset. From Figure 1, we observe the following: (1) Performance Variation: All five models demonstrate performance differences across different text lengths within the same dataset. (2) Alignment with Dominant Lengths: A model's average performance aligns closely with its performance on the length range with the highest proportion of samples. For instance, on Multi-News dataset, each model's average performance is close to its performance on samples within the " }, { "bbox": [ 302, 464, 526, 775 ], "type": "inline_equation", "content": "0 - 4\\mathrm{k}" }, { "bbox": [ 302, 464, 526, 775 ], "type": "text", "content": " length range, which represents the largest share of the dataset. These findings highlight the need for length-aware evaluations of long-context capabilities. A more robust approach" } ] } ], "index": 8 } ], "discarded_blocks": [ { "bbox": [ 284, 780, 312, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 284, 780, 312, 791 ], "spans": [ { "bbox": [ 284, 780, 312, 791 ], "type": "text", "content": "17561" } ] } ], "index": 9 } ], "page_size": [ 595, 841 ], "page_idx": 1 }, { "para_blocks": [ { "type": "image", "bbox": [ 73, 71, 286, 181 ], "blocks": [ { "bbox": [ 73, 71, 286, 181 ], "lines": [ { "bbox": [ 73, 71, 286, 181 ], "spans": [ { "bbox": [ 73, 71, 286, 181 ], "type": "image", "image_path": "37a8e96aca490640f6dd2505ea46d0f5f798cf56c99f34b6740a429b7a166387.jpg" } ] } ], "index": 0, "angle": 0, "type": "image_body" }, { "bbox": [ 67, 191, 291, 277 ], "lines": [ { "bbox": [ 67, 191, 291, 277 ], "spans": [ { "bbox": [ 67, 191, 291, 277 ], "type": "text", "content": "Figure 2: Comparison between LLaMA 3.1-8B-Instruct and Qwen 2.5-7B-Instruct on the Counting Star task across varying text lengths. 
The dashed line represents the average score across all context lengths. LLaMA 3.1-8B-Instruct performs worse than Qwen 2.5-7B-Instruct on short texts but excels on extremely long texts, indicating its superior long-context extension capability." } ] } ], "index": 1, "angle": 0, "type": "image_caption" } ], "index": 0 }, { "bbox": [ 67, 298, 290, 353 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 298, 290, 353 ], "spans": [ { "bbox": [ 67, 298, 290, 353 ], "type": "text", "content": "involves testing model performance on " }, { "bbox": [ 67, 298, 290, 353 ], "type": "inline_equation", "content": "N" }, { "bbox": [ 67, 298, 290, 353 ], "type": "text", "content": " samples across diverse lengths to obtain a comprehensive assessment of its long-context capability. More results of other methods are in Appendix A.1." } ] } ], "index": 2 }, { "bbox": [ 67, 361, 291, 660 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 361, 291, 660 ], "spans": [ { "bbox": [ 67, 361, 291, 660 ], "type": "text", "content": "Ineffectiveness of current metrics for evaluating long-context capability. Evidenced by Figure 2, existing long-context metrics primarily rely on the average performance across the benchmark. However, this approach can be misleading as it conflates the model's inherent task-specific ability with its pure long-context capability. As illustrated in Figure 2, LLaMA 3.1-8B-Instruct performs worse than Qwen 2.5-7B-Instruct on short texts but excels on extremely long texts, such as " }, { "bbox": [ 67, 361, 291, 660 ], "type": "inline_equation", "content": "128k" }, { "bbox": [ 67, 361, 291, 660 ], "type": "text", "content": " and " }, { "bbox": [ 67, 361, 291, 660 ], "type": "inline_equation", "content": "255k" }, { "bbox": [ 67, 361, 291, 660 ], "type": "text", "content": ", indicating its superior long-context extension capability. In this task, the average performance suggests that Qwen 2.5-7B-Instruct is the better model. 
But a closer inspection reveals that LLaMA 3.1-8B-Instruct has a distinct advantage in handling extremely long texts, despite its weaker performance on shorter inputs. This discrepancy underscores the need to separate a model's base ability (on short texts) from its long-context capability. To address this, we propose a novel metric that isolates a model's true long-context capability from its Base Ability." } ] } ], "index": 3 }, { "bbox": [ 67, 671, 256, 699 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 67, 671, 256, 699 ], "spans": [ { "bbox": [ 67, 671, 256, 699 ], "type": "text", "content": "3 How to truly evaluate Language Models' long-context capability?" } ] } ], "index": 4 }, { "bbox": [ 67, 708, 291, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 708, 291, 775 ], "spans": [ { "bbox": [ 67, 708, 291, 775 ], "type": "text", "content": "To address the two identified problems, we 1) construct a length-controllable long-context benchmark to reduce performance variance across task lengths, and 2) introduce LongScore, a new metric designed to accurately evaluate long-context" } ] } ], "index": 5 }, { "bbox": [ 302, 71, 526, 206 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 71, 526, 206 ], "spans": [ { "bbox": [ 302, 71, 526, 206 ], "type": "text", "content": "capabilities by disentangling them from the model's baseline abilities. In detail, we restructure long-context datasets based on LongBench, L-Eval, and other benchmarks. We then design a new pipeline to generate controllable-length long contexts by combining different articles. Additionally, we introduce a filtering mechanism in QA-related tasks to mitigate prior knowledge. Subsequently, we propose a new metric to isolate a model's long-text capability from Base Ability (performance on short texts)."
} ] } ], "index": 6 }, { "bbox": [ 302, 216, 524, 229 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 216, 524, 229 ], "spans": [ { "bbox": [ 302, 216, 524, 229 ], "type": "text", "content": "3.1 Construct a new long-context benchmark" } ] } ], "index": 7 }, { "bbox": [ 301, 233, 526, 504 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 301, 233, 526, 504 ], "spans": [ { "bbox": [ 301, 233, 526, 504 ], "type": "text", "content": "We categorize tasks into four types, each consisting of two tasks with different levels of difficulty, resulting in a total of eight tasks. The types and their corresponding tasks are: Key Retrieval (including KV Retrieval and Counting Stars), Information Retrieval (including Passage Retrieval and Passage Count), Information Comprehension (including Single-doc QA and Multi-doc QA) and Information Summarization (including Single-doc Sum and Multi-doc Sum). Table 2 provides details for each task, including: Real Context Sources(the original context of the question used in the task), Noisy Context Sources(the source of additional context that may contain irrelevant or distracting information) and Evaluation Metric(the metric used to assess model performance for each task). All of these datasets are from other benchmarks like LongBench, etc. Detailed information on context construction, question setup, and evaluation metrics, are in Appendix A.2." } ] } ], "index": 8 }, { "bbox": [ 302, 505, 525, 707 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 505, 525, 707 ], "spans": [ { "bbox": [ 302, 505, 525, 707 ], "type": "text", "content": "How to generate a controllable-length context? In LongBench, the context for each task is controllable, such as generating a context of approximately " }, { "bbox": [ 302, 505, 525, 707 ], "type": "inline_equation", "content": "128k" }, { "bbox": [ 302, 505, 525, 707 ], "type": "text", "content": " tokens. 
To achieve this, we first randomly select one article from Real Context Sources as the ground truth article. Then, we randomly sample a number of articles from Noisy Context Sources as distractor articles. These distractor articles are combined with the ground truth article to construct the whole context, ensuring that the total context length is close to but less than " }, { "bbox": [ 302, 505, 525, 707 ], "type": "inline_equation", "content": "128k" }, { "bbox": [ 302, 505, 525, 707 ], "type": "text", "content": ". Finally, the order of all articles is shuffled to create the context. Figure 3 illustrates the data generation process for Single-Doc QA task, showing how questions, answers, and contexts are prepared." } ] } ], "index": 9 }, { "bbox": [ 302, 708, 525, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 708, 525, 775 ], "spans": [ { "bbox": [ 302, 708, 525, 775 ], "type": "text", "content": "QA Filtering Mechanism. For Multi-Doc QA and Single-Doc QA tasks, we introduce a filtering mechanism to eliminate the influence of the model's inherent prior knowledge. When evaluating a model's long-context capabilities, prior" } ] } ], "index": 10 } ], "discarded_blocks": [ { "bbox": [ 284, 780, 312, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 284, 780, 312, 791 ], "spans": [ { "bbox": [ 284, 780, 312, 791 ], "type": "text", "content": "17562" } ] } ], "index": 11 } ], "page_size": [ 595, 841 ], "page_idx": 2 }, { "para_blocks": [ { "bbox": [ 67, 69, 526, 152 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 69, 526, 152 ], "spans": [ { "bbox": [ 67, 69, 526, 152 ], "type": "text", "content": "Table 2: Details of dataset construction for each task. 
To generate a context of a specified length like " }, { "bbox": [ 67, 69, 526, 152 ], "type": "inline_equation", "content": "128k" }, { "bbox": [ 67, 69, 526, 152 ], "type": "text", "content": ", we randomly select multiple articles from the Noisy Context Source datasets as distractor articles. A single article is randomly chosen from Real Context Source datasets as the ground truth article. Distractor articles and the ground truth article are combined to form the whole context, ensuring that the whole context length is less than " }, { "bbox": [ 67, 69, 526, 152 ], "type": "inline_equation", "content": "128k" }, { "bbox": [ 67, 69, 526, 152 ], "type": "text", "content": " and the order of all articles is shuffled. The bottom of the table contains different datasets from other benchmarks. N/A indicates that the task does not require Context Sources because the questions are synthetic rather than derived from a dataset. More details about how to construct each task are in Appendix A.2." } ] } ], "index": 0 }, { "type": "table", "bbox": [ 82, 153, 512, 277 ], "blocks": [ { "bbox": [ 82, 153, 512, 277 ], "lines": [ { "bbox": [ 82, 153, 512, 277 ], "spans": [ { "bbox": [ 82, 153, 512, 277 ], "type": "table", "html": "
<table><tr><td>Task Name</td><td>Real Context Sources</td><td>Noisy Context Sources</td><td>Evaluation Metric</td></tr>
<tr><td>KV Retrieval</td><td>N/A</td><td>1 2 3 4 5 6 7 8 9</td><td>Accuracy</td></tr>
<tr><td>Counting Stars</td><td>N/A</td><td>1 2 3 4 5 6 7 8 9</td><td>Accuracy</td></tr>
<tr><td>Passage Retrieval</td><td>9 10 11 12 13 14 15</td><td>9 10 11 12 13 14 15</td><td>Accuracy</td></tr>
<tr><td>Passage Count</td><td>1 2 3 4 5 6 7 8 9</td><td>N/A</td><td>Accuracy</td></tr>
<tr><td>Single-doc QA</td><td>1 2 3 4 5 6 7 8</td><td>1 2 3 4 5 6 7 8</td><td>LLM-based Metric</td></tr>
<tr><td>Multi-doc QA</td><td>16 17 18 19</td><td>1 2 3 4 5 6 7 8</td><td>LLM-based Metric</td></tr>
<tr><td>Single-doc Sum</td><td>1 11 12 13 14 15</td><td>1 11 12 13 14 15</td><td>LLM-based Metric</td></tr>
<tr><td>Multi-doc Sum</td><td>20</td><td>1 11 12 13 14 15</td><td>LLM-based Metric</td></tr></table>
", "image_path": "f0396b35f1dfeca0d1f666f27c0dde40f47b6c5f7b4df72c0826a7f7ce48fd8d.jpg" } ] } ], "index": 1, "angle": 0, "type": "table_body" } ], "index": 1 }, { "bbox": [ 114, 280, 480, 320 ], "type": "list", "angle": 0, "index": 5, "blocks": [ { "bbox": [ 115, 280, 473, 293 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 115, 280, 473, 293 ], "spans": [ { "bbox": [ 115, 280, 473, 293 ], "type": "text", "content": "① qasper ② multifieldqa_en ③ narrativeqa ④ multidoc_qa ⑤ legal_contract_qa" } ] } ], "index": 2 }, { "bbox": [ 114, 294, 480, 306 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 114, 294, 480, 306 ], "spans": [ { "bbox": [ 114, 294, 480, 306 ], "type": "inline_equation", "content": "⑥" }, { "bbox": [ 114, 294, 480, 306 ], "type": "text", "content": " financial_qa " }, { "bbox": [ 114, 294, 480, 306 ], "type": "inline_equation", "content": "⑦" }, { "bbox": [ 114, 294, 480, 306 ], "type": "text", "content": " natural_question " }, { "bbox": [ 114, 294, 480, 306 ], "type": "inline_equation", "content": "⑧" }, { "bbox": [ 114, 294, 480, 306 ], "type": "text", "content": " scientific_qa " }, { "bbox": [ 114, 294, 480, 306 ], "type": "inline_equation", "content": "⑨" }, { "bbox": [ 114, 294, 480, 306 ], "type": "text", "content": " cnn_dailymail " }, { "bbox": [ 114, 294, 480, 306 ], "type": "inline_equation", "content": "⑩" }, { "bbox": [ 114, 294, 480, 306 ], "type": "text", "content": " gov_report" } ] } ], "index": 3 }, { "bbox": [ 115, 308, 478, 320 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 115, 308, 478, 320 ], "spans": [ { "bbox": [ 115, 308, 478, 320 ], "type": "inline_equation", "content": "①" }, { "bbox": [ 115, 308, 478, 320 ], "type": "text", "content": " qmsum " }, { "bbox": [ 115, 308, 478, 320 ], "type": "inline_equation", "content": "⑫" }, { "bbox": [ 115, 308, 478, 320 ], "type": "text", "content": " patent_summ " }, { "bbox": [ 115, 308, 478, 320 ], "type": "inline_equation", "content": "⑬" }, { "bbox": [ 115, 308, 
478, 320 ], "type": "text", "content": " tv_show_summ " }, { "bbox": [ 115, 308, 478, 320 ], "type": "inline_equation", "content": "⑭" }, { "bbox": [ 115, 308, 478, 320 ], "type": "text", "content": " review_summ " }, { "bbox": [ 115, 308, 478, 320 ], "type": "inline_equation", "content": "⑮" }, { "bbox": [ 115, 308, 478, 320 ], "type": "text", "content": " meeting_summ" } ] } ], "index": 4 } ], "sub_type": "text" }, { "bbox": [ 140, 322, 453, 333 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 140, 322, 453, 333 ], "spans": [ { "bbox": [ 140, 322, 453, 333 ], "type": "text", "content": "hotpotqa 172wikimqa 18musique 19rag-mini-bioasq 20 multi_news_e" } ] } ], "index": 6 }, { "type": "image", "bbox": [ 74, 358, 285, 438 ], "blocks": [ { "bbox": [ 74, 358, 285, 438 ], "lines": [ { "bbox": [ 74, 358, 285, 438 ], "spans": [ { "bbox": [ 74, 358, 285, 438 ], "type": "image", "image_path": "d9b644db399e693d46f2330cabaca890fef8dbe7012f16fa6fe522947770ca8e.jpg" } ] } ], "index": 7, "angle": 0, "type": "image_body" }, { "bbox": [ 67, 450, 289, 475 ], "lines": [ { "bbox": [ 67, 450, 289, 475 ], "spans": [ { "bbox": [ 67, 450, 289, 475 ], "type": "text", "content": "Figure 3: Illustration of the Data Generation Process for the Single-Doc QA Task" } ] } ], "index": 8, "angle": 0, "type": "image_caption" } ], "index": 7 }, { "bbox": [ 67, 489, 290, 705 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 489, 290, 705 ], "spans": [ { "bbox": [ 67, 489, 290, 705 ], "type": "text", "content": "knowledge is often overlooked. For instance, in question-answering (QA) tasks, the model might memorize the answers to certain questions during pretraining. As shown in Figure 4, the model accurately answer questions based on its prior knowledge even without any contexts. In such cases, the model's response is not derived from the provided context but from its memorized knowledge. 
This oversight can lead to inflated performance metrics, misrepresenting the model's actual ability to process and comprehend long contexts. To filter out the model's prior knowledge, we introduce a QA filtering mechanism. In a no-context scenario, if the model's response score exceeds a certain threshold, it indicates that the model is relying on prior knowledge, and the data sample should therefore be excluded." } ] } ], "index": 9 }, { "bbox": [ 67, 708, 290, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 708, 290, 775 ], "spans": [ { "bbox": [ 67, 708, 290, 775 ], "type": "text", "content": "Although our length-controlled datasets are synthetically constructed, they are carefully designed to better reflect real-world usage scenarios, a property we call real-life reflective. Specifically, each instance is constructed by selecting a task-relevant" } ] } ], "index": 10 }, { "type": "image", "bbox": [ 312, 361, 518, 468 ], "blocks": [ { "bbox": [ 312, 361, 518, 468 ], "lines": [ { "bbox": [ 312, 361, 518, 468 ], "spans": [ { "bbox": [ 312, 361, 518, 468 ], "type": "image", "image_path": "c36289c12a67639f5fd931fa6eb255f93f9ff00e30a0b625cba72c29852aaed5.jpg" } ] } ], "index": 11, "angle": 0, "type": "image_body" }, { "bbox": [ 302, 473, 524, 498 ], "lines": [ { "bbox": [ 302, 473, 524, 498 ], "spans": [ { "bbox": [ 302, 473, 524, 498 ], "type": "text", "content": "Figure 4: One sample in Question Answering where models provide accurate answers regardless of context" } ] } ], "index": 12, "angle": 0, "type": "image_caption" } ], "index": 11 }, { "bbox": [ 302, 518, 526, 641 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 518, 526, 641 ], "spans": [ { "bbox": [ 302, 518, 526, 641 ], "type": "text", "content": "example as the source (e.g., a summarization prompt and document), and padding it with additional samples that belong to the same domain or task type (e.g., other documents suitable for summarization). 
This construction ensures that all components of the input are contextually aligned and task-compatible, mimicking common usage patterns in long-context settings, such as concatenated inputs in retrieval-augmented generation pipelines." } ] } ], "index": 13 }, { "bbox": [ 302, 650, 508, 663 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 650, 508, 663 ], "spans": [ { "bbox": [ 302, 650, 508, 663 ], "type": "text", "content": "3.2 LongScore: a new long-context metric" } ] } ], "index": 14 }, { "bbox": [ 302, 666, 526, 761 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 666, 526, 761 ], "spans": [ { "bbox": [ 302, 666, 526, 761 ], "type": "text", "content": "As illustrated in Figure 2, directly using a model's scores across various text lengths to assess its long-context capability introduces inherent biases. To address this limitation, we propose a new metric that disentangles the model's base ability from its long-context capability, allowing for a more accurate and comprehensive evaluation." } ] } ], "index": 15 }, { "bbox": [ 314, 761, 524, 774 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 314, 761, 524, 774 ], "spans": [ { "bbox": [ 314, 761, 524, 774 ], "type": "text", "content": "Base Ability. 
It refers to the model's score when" } ] } ], "index": 16 } ], "discarded_blocks": [ { "bbox": [ 284, 780, 312, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 284, 780, 312, 791 ], "spans": [ { "bbox": [ 284, 780, 312, 791 ], "type": "text", "content": "17563" } ] } ], "index": 17 } ], "page_size": [ 595, 841 ], "page_idx": 3 }, { "para_blocks": [ { "type": "table", "bbox": [ 68, 165, 529, 275 ], "blocks": [ { "bbox": [ 67, 69, 526, 164 ], "lines": [ { "bbox": [ 67, 69, 526, 164 ], "spans": [ { "bbox": [ 67, 69, 526, 164 ], "type": "text", "content": "Table 3: Comparison of long-context benchmarks: Longbench (Bai et al., 2023), L-Eval (An et al., 2023), " }, { "bbox": [ 67, 69, 526, 164 ], "type": "inline_equation", "content": "\\infty" }, { "bbox": [ 67, 69, 526, 164 ], "type": "text", "content": "-Bench (Zhang et al., 2024), NIAH (Needle In A Haystack), RULER (Hsieh et al., 2024), Helmet (Yen et al., 2024), and our " }, { "bbox": [ 67, 69, 526, 164 ], "type": "inline_equation", "content": "100" }, { "bbox": [ 67, 69, 526, 164 ], "type": "text", "content": "-LongBench. L: input tokens. Controllable: The benchmark can generate contexts of specified lengths. Diverse Tasks: The tasks are varied and not limited to a single type. LLM-based Metric: Metrics in some tasks are designed based on large language models for more accurate evaluation. LC Distinction: Effectively separates the model's base ability from its long-text capability. QA Filter: Implements measures to remove the influence of the model's prior knowledge in QA tasks. The tasks in NIAH and RULER are synthetic, so they do not require LLM-based metrics or QA filtering." } ] } ], "index": 0, "angle": 0, "type": "table_caption" }, { "bbox": [ 68, 165, 529, 275 ], "lines": [ { "bbox": [ 68, 165, 529, 275 ], "spans": [ { "bbox": [ 68, 165, 529, 275 ], "type": "table", "html": "
<table>
<tr><th>Dataset</th><th>L>128k</th><th>Controllable</th><th>Diverse Tasks</th><th>LLM-based Metric</th><th>LC Distinction</th><th>QA Filter</th></tr>
<tr><td>LongBench</td><td>X</td><td>X</td><td>✓</td><td>X</td><td>X</td><td>X</td></tr>
<tr><td>L-Eval</td><td>X</td><td>X</td><td>✓</td><td>✓</td><td>X</td><td>X</td></tr>
<tr><td>∞-Bench</td><td>✓</td><td>X</td><td>✓</td><td>X</td><td>X</td><td>X</td></tr>
<tr><td>NIAH</td><td>✓</td><td>✓</td><td>X</td><td>–</td><td>X</td><td>–</td></tr>
<tr><td>RULER</td><td>✓</td><td>✓</td><td>✓</td><td>–</td><td>X</td><td>–</td></tr>
<tr><td>Helmet</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>X</td><td>X</td></tr>
<tr><td>100-LongBench</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr>
</table>
", "image_path": "4ead71bb03b5bc3ae2ac6e1bf63a8666590073266fe6e8bd2040b0af24ea88d3.jpg" } ] } ], "index": 1, "angle": 0, "type": "table_body" } ], "index": 1 }, { "bbox": [ 67, 296, 290, 363 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 296, 290, 363 ], "spans": [ { "bbox": [ 67, 296, 290, 363 ], "type": "text", "content": "conducting short-context tasks. To estimate Base Ability, we sample " }, { "bbox": [ 67, 296, 290, 363 ], "type": "inline_equation", "content": "N" }, { "bbox": [ 67, 296, 290, 363 ], "type": "text", "content": " instances from short text lengths (like " }, { "bbox": [ 67, 296, 290, 363 ], "type": "inline_equation", "content": "2k" }, { "bbox": [ 67, 296, 290, 363 ], "type": "text", "content": ", " }, { "bbox": [ 67, 296, 290, 363 ], "type": "inline_equation", "content": "4k" }, { "bbox": [ 67, 296, 290, 363 ], "type": "text", "content": ", " }, { "bbox": [ 67, 296, 290, 363 ], "type": "inline_equation", "content": "6k" }, { "bbox": [ 67, 296, 290, 363 ], "type": "text", "content": "). 
For each length, " }, { "bbox": [ 67, 296, 290, 363 ], "type": "inline_equation", "content": "N/3" }, { "bbox": [ 67, 296, 290, 363 ], "type": "text", "content": " samples are selected, and the model's average score across these lengths is computed:" } ] } ], "index": 2 }, { "bbox": [ 103, 373, 290, 399 ], "type": "interline_equation", "angle": 0, "lines": [ { "bbox": [ 103, 373, 290, 399 ], "spans": [ { "bbox": [ 103, 373, 290, 399 ], "type": "interline_equation", "content": "\\text{Base Ability} = \\frac{S_{2k} + S_{4k} + S_{6k}}{3} \\tag{1}", "image_path": "679fc16266d35ca4ef2d2f62872ea5f8b3a88e4be0a9c87fca66ecd85fcc772a.jpg" } ] } ], "index": 3 }, { "bbox": [ 67, 404, 289, 432 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 404, 289, 432 ], "spans": [ { "bbox": [ 67, 404, 289, 432 ], "type": "text", "content": "where " }, { "bbox": [ 67, 404, 289, 432 ], "type": "inline_equation", "content": "S_{*k}" }, { "bbox": [ 67, 404, 289, 432 ], "type": "text", "content": " represents the performance of the model at the " }, { "bbox": [ 67, 404, 289, 432 ], "type": "inline_equation", "content": "*k" }, { "bbox": [ 67, 404, 289, 432 ], "type": "text", "content": " context length." } ] } ], "index": 4 }, { "bbox": [ 67, 432, 290, 486 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 432, 290, 486 ], "spans": [ { "bbox": [ 67, 432, 290, 486 ], "type": "inline_equation", "content": "\\mathrm{LongScore}(\\mathrm{LC}_l)" }, { "bbox": [ 67, 432, 290, 486 ], "type": "text", "content": " is our proposed metric. For longer lengths (e.g., 8k, 16k, 32k), we calculate the score on " }, { "bbox": [ 67, 432, 290, 486 ], "type": "inline_equation", "content": "N" }, { "bbox": [ 67, 432, 290, 486 ], "type": "text", "content": " instances for each length. 
" }, { "bbox": [ 67, 432, 290, 486 ], "type": "inline_equation", "content": "\\mathrm{LC}_l" }, { "bbox": [ 67, 432, 290, 486 ], "type": "text", "content": " at a given length " }, { "bbox": [ 67, 432, 290, 486 ], "type": "inline_equation", "content": "l" }, { "bbox": [ 67, 432, 290, 486 ], "type": "text", "content": " is then defined as:" } ] } ], "index": 5 }, { "bbox": [ 120, 496, 290, 525 ], "type": "interline_equation", "angle": 0, "lines": [ { "bbox": [ 120, 496, 290, 525 ], "spans": [ { "bbox": [ 120, 496, 290, 525 ], "type": "interline_equation", "content": "\\mathrm{LC}_{l} = \\frac{S_{l} - \\text{Base Ability}}{\\text{Base Ability}} \\tag{2}", "image_path": "daf52eaa9862e80332aff47e56275aa1b9597341801ff1ea9a873a44b1b43455.jpg" } ] } ], "index": 6 }, { "bbox": [ 67, 529, 291, 637 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 529, 291, 637 ], "spans": [ { "bbox": [ 67, 529, 291, 637 ], "type": "text", "content": "LongScore separates the model's Base Ability from its Long-context Capability. Our metric focuses on the relative improvement or decline at longer lengths and provides a more precise assessment of long-context capabilities without being influenced by the model's Base Ability. It enables consistent and unbiased comparisons of long-context capabilities across different models and datasets." } ] } ], "index": 7 }, { "bbox": [ 67, 648, 239, 661 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 67, 648, 239, 661 ], "spans": [ { "bbox": [ 67, 648, 239, 661 ], "type": "text", "content": "3.3 Comparison to other benchmarks" } ] } ], "index": 8 }, { "bbox": [ 67, 666, 290, 719 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 666, 290, 719 ], "spans": [ { "bbox": [ 67, 666, 290, 719 ], "type": "text", "content": "This section compares other long-context benchmarks with 100-LongBench, highlighting their similarities and differences. The overall distinctions between benchmarks are presented in Table 3." 
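Base Ability (Eq. 1) and LC_l (Eq. 2) can be sketched in a few lines of Python. The `scores` dictionary and its string keys are illustrative assumptions, not the paper's released evaluation code.

```python
# Minimal sketch of Base Ability (Eq. 1) and LongScore LC_l (Eq. 2).
# `scores` maps a context-length label to the model's average task score.

def base_ability(scores):
    # Average score over the short lengths 2k, 4k, and 6k.
    return (scores["2k"] + scores["4k"] + scores["6k"]) / 3

def longscore(scores, length):
    # Relative change at a long length vs. the model's Base Ability.
    base = base_ability(scores)
    return (scores[length] - base) / base

scores = {"2k": 60.0, "4k": 58.0, "6k": 56.0, "32k": 40.5}
# base_ability(scores) = 58.0; longscore(scores, "32k") ≈ -0.302
```

Because LC_l is a ratio relative to the model's own short-context score, two models with very different raw scores can still be compared on how much they degrade at long lengths.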
} ] } ], "index": 9 }, { "bbox": [ 73, 721, 291, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 73, 721, 291, 775 ], "spans": [ { "bbox": [ 73, 721, 291, 775 ], "type": "text", "content": "- LongBench (Bai et al., 2023) is an early benchmark used to evaluate long-context capabilities. It was the first to use a variety of tasks for evaluation, but the context length is generally" } ] } ], "index": 10 }, { "bbox": [ 316, 296, 526, 349 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 316, 296, 526, 349 ], "spans": [ { "bbox": [ 316, 296, 526, 349 ], "type": "text", "content": "limited to around " }, { "bbox": [ 316, 296, 526, 349 ], "type": "inline_equation", "content": "8k" }, { "bbox": [ 316, 296, 526, 349 ], "type": "text", "content": ", and the length distribution is uneven. As many current LLMs support context lengths of 128k and beyond, these benchmarks are no longer suitable." } ] } ], "index": 11 }, { "bbox": [ 308, 356, 525, 698 ], "type": "list", "angle": 0, "index": 16, "blocks": [ { "bbox": [ 308, 356, 525, 437 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 308, 356, 525, 437 ], "spans": [ { "bbox": [ 308, 356, 525, 437 ], "type": "text", "content": "- " }, { "bbox": [ 308, 356, 525, 437 ], "type": "inline_equation", "content": "\\infty" }, { "bbox": [ 308, 356, 525, 437 ], "type": "text", "content": "-Bench (Zhang et al., 2024) and L-Eval (An et al., 2023) are improvements over benchmarks like LongBench, increasing the data length to over " }, { "bbox": [ 308, 356, 525, 437 ], "type": "inline_equation", "content": "128k" }, { "bbox": [ 308, 356, 525, 437 ], "type": "text", "content": ". However, the context length is not controllable, which limits their ability to comprehensively evaluate LLMs." 
} ] } ], "index": 12 }, { "bbox": [ 308, 443, 525, 523 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 308, 443, 525, 523 ], "spans": [ { "bbox": [ 308, 443, 525, 523 ], "type": "text", "content": "- NIAH and RULER (Hsieh et al., 2024) are designed with controllable context lengths and can control the position of the answer, specifically for evaluating long-context capabilities. These benchmarks are currently the leading tools to assess the long-context capabilities of LLMs." } ] } ], "index": 13 }, { "bbox": [ 308, 530, 525, 611 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 308, 530, 525, 611 ], "spans": [ { "bbox": [ 308, 530, 525, 611 ], "type": "text", "content": "- Helmet (Yen et al., 2024) is a newly proposed benchmark that not only allows for controllable context lengths but also offers a wide variety of tasks. It introduces the use of LLM-based metrics, providing a more refined way to evaluate long-context capabilities." } ] } ], "index": 14 }, { "bbox": [ 308, 618, 525, 698 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 308, 618, 525, 698 ], "spans": [ { "bbox": [ 308, 618, 525, 698 ], "type": "text", "content": "- 100-LongBench generates tasks with controllable context lengths. Additionally, it introduces a new metric to distinguish between a model's base ability and long-context capability, offering a more comprehensive and novel approach to evaluating long-context capabilities." 
} ] } ], "index": 15 } ], "sub_type": "text" }, { "bbox": [ 302, 711, 441, 725 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 711, 441, 725 ], "spans": [ { "bbox": [ 302, 711, 441, 725 ], "type": "text", "content": "4 Experimental Analysis" } ] } ], "index": 17 }, { "bbox": [ 302, 735, 526, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 735, 526, 775 ], "spans": [ { "bbox": [ 302, 735, 526, 775 ], "type": "text", "content": "In this section, we conduct comprehensive experiments to first validate the reliability of 100-LongBench and the effectiveness of the proposed" } ] } ], "index": 18 } ], "discarded_blocks": [ { "bbox": [ 284, 780, 312, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 284, 780, 312, 791 ], "spans": [ { "bbox": [ 284, 780, 312, 791 ], "type": "text", "content": "17564" } ] } ], "index": 19 } ], "page_size": [ 595, 841 ], "page_idx": 4 }, { "para_blocks": [ { "type": "image", "bbox": [ 70, 69, 220, 200 ], "blocks": [ { "bbox": [ 70, 69, 220, 200 ], "lines": [ { "bbox": [ 70, 69, 220, 200 ], "spans": [ { "bbox": [ 70, 69, 220, 200 ], "type": "image", "image_path": "6af97086899bf18a9a48222eaaf6ab0e2c7c49c8b2e4e348df93cd65dfb7e578.jpg" } ] } ], "index": 0, "angle": 0, "type": "image_body" }, { "bbox": [ 66, 202, 526, 239 ], "lines": [ { "bbox": [ 66, 202, 526, 239 ], "spans": [ { "bbox": [ 66, 202, 526, 239 ], "type": "text", "content": "Figure 5: Verification of the reliability of 100-LongBench: results of two models of different sizes from the same LM family tree, showcasing their average scores in different tasks. These findings confirm a well-established trend: within the same series, larger models generally outperform smaller ones, reinforcing the reliability of 100-LongBench." 
} ] } ], "index": 3, "angle": 0, "type": "image_caption" } ], "index": 0 }, { "type": "image", "bbox": [ 223, 69, 371, 200 ], "blocks": [ { "bbox": [ 223, 69, 371, 200 ], "lines": [ { "bbox": [ 223, 69, 371, 200 ], "spans": [ { "bbox": [ 223, 69, 371, 200 ], "type": "image", "image_path": "b29fb68a76763e5c747fdbf570b74b271825f88461a8e19bd77615f78ce3c5dc.jpg" } ] } ], "index": 1, "angle": 0, "type": "image_body" } ], "index": 1 }, { "type": "image", "bbox": [ 375, 69, 524, 200 ], "blocks": [ { "bbox": [ 375, 69, 524, 200 ], "lines": [ { "bbox": [ 375, 69, 524, 200 ], "spans": [ { "bbox": [ 375, 69, 524, 200 ], "type": "image", "image_path": "4639c59128a6a934de081ee55f1ce1c520bbf640ec1942853b7af58c63ea2f93.jpg" } ] } ], "index": 2, "angle": 0, "type": "image_body" } ], "index": 2 }, { "type": "table", "bbox": [ 70, 414, 289, 484 ], "blocks": [ { "bbox": [ 66, 258, 291, 413 ], "lines": [ { "bbox": [ 66, 258, 291, 413 ], "spans": [ { "bbox": [ 66, 258, 291, 413 ], "type": "text", "content": "Table 4: Results of the average performance of four models across all tasks on " }, { "bbox": [ 66, 258, 291, 413 ], "type": "inline_equation", "content": "\\underline{100}" }, { "bbox": [ 66, 258, 291, 413 ], "type": "text", "content": "-LongBench. Base Ability represents the model's score within lengths of " }, { "bbox": [ 66, 258, 291, 413 ], "type": "inline_equation", "content": "2k" }, { "bbox": [ 66, 258, 291, 413 ], "type": "text", "content": ", " }, { "bbox": [ 66, 258, 291, 413 ], "type": "inline_equation", "content": "4k" }, { "bbox": [ 66, 258, 291, 413 ], "type": "text", "content": " and " }, { "bbox": [ 66, 258, 291, 413 ], "type": "inline_equation", "content": "6k" }, { "bbox": [ 66, 258, 291, 413 ], "type": "text", "content": ". 
Avg score represents the average score across lengths including " }, { "bbox": [ 66, 258, 291, 413 ], "type": "inline_equation", "content": "8k" }, { "bbox": [ 66, 258, 291, 413 ], "type": "text", "content": ", " }, { "bbox": [ 66, 258, 291, 413 ], "type": "inline_equation", "content": "16k" }, { "bbox": [ 66, 258, 291, 413 ], "type": "text", "content": ", " }, { "bbox": [ 66, 258, 291, 413 ], "type": "inline_equation", "content": "32k" }, { "bbox": [ 66, 258, 291, 413 ], "type": "text", "content": ", " }, { "bbox": [ 66, 258, 291, 413 ], "type": "inline_equation", "content": "64k" }, { "bbox": [ 66, 258, 291, 413 ], "type": "text", "content": " and " }, { "bbox": [ 66, 258, 291, 413 ], "type": "inline_equation", "content": "128k" }, { "bbox": [ 66, 258, 291, 413 ], "type": "text", "content": ". Avg LC represents the average score under our proposed metric, LongScore. " }, { "bbox": [ 66, 258, 291, 413 ], "type": "inline_equation", "content": "59.1_{(1)}" }, { "bbox": [ 66, 258, 291, 413 ], "type": "text", "content": " indicates that the current model has a score of 59.1 at the given context length, with a ranking of 1. Claimed Length refers to the maximum context length that the model claims to support. Qwen 2.5-14B and Qwen 2.5-7B use YaRN to extend their context length to 128k. The original context length is specified in Claimed Length." } ] } ], "index": 4, "angle": 0, "type": "table_caption" }, { "bbox": [ 70, 414, 289, 484 ], "lines": [ { "bbox": [ 70, 414, 289, 484 ], "spans": [ { "bbox": [ 70, 414, 289, 484 ], "type": "table", "html": "
<table>
<tr><th>Model</th><th>Claimed Length</th><th>Base Ability</th><th>Avg score</th><th>Avg LC</th></tr>
<tr><td>Qwen2.5-14B-Instruct</td><td>32K</td><td>59.1(1)</td><td>40.7(1)</td><td>-31.1(4)</td></tr>
<tr><td>Qwen2.5-7B-Instruct</td><td>32K</td><td>57.4(2)</td><td>39.8(2)</td><td>-30.6(3)</td></tr>
<tr><td>Llama3.1-8B-Instruct</td><td>128K</td><td>44.0(3)</td><td>36.3(3)</td><td>-17.4(1)</td></tr>
<tr><td>Llama3.2-1B-Instruct</td><td>128K</td><td>28.7(4)</td><td>20.4(4)</td><td>-28.8(2)</td></tr>
</table>
", "image_path": "298f6f6761af0a1f15961b5f52c9acad4ddc7b812d12de535dad07e56f94e2ea.jpg" } ] } ], "index": 5, "angle": 0, "type": "table_body" } ], "index": 5 }, { "bbox": [ 67, 497, 291, 523 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 497, 291, 523 ], "spans": [ { "bbox": [ 67, 497, 291, 523 ], "type": "text", "content": "metric. They are then used to evaluate the long-context capabilities of several open-source models." } ] } ], "index": 6 }, { "bbox": [ 67, 538, 256, 564 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 67, 538, 256, 564 ], "spans": [ { "bbox": [ 67, 538, 256, 564 ], "type": "text", "content": "4.1 Verification of the reliability of the proposed benchmark" } ] } ], "index": 7 }, { "bbox": [ 67, 571, 291, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 571, 291, 775 ], "spans": [ { "bbox": [ 67, 571, 291, 775 ], "type": "text", "content": "To verify the reliability of 100-LongBench, we evaluate three model families (Llama 3.2, Llama 3.1, and Phi 3), selecting two different model sizes from each family. Since these are models of different sizes within the same series, the expected trend would be that, within the same series, larger models generally perform better on all tasks across different context lengths. As shown in Figure 5, this overall trend is observed, indicating that the dataset generation itself is reliable and can be used for evaluating long-context capabilities. For instance, compared to Llama 3.2-1B-Instruct, Llama 3.2-3B-Instruct achieves higher average scores in each task. For more detailed results of models across various context lengths, refer to Appendix A.4." 
} ] } ], "index": 8 }, { "type": "image", "bbox": [ 317, 260, 509, 408 ], "blocks": [ { "bbox": [ 317, 260, 509, 408 ], "lines": [ { "bbox": [ 317, 260, 509, 408 ], "spans": [ { "bbox": [ 317, 260, 509, 408 ], "type": "image", "image_path": "d78b4e9933ea56a782c4958eb0c5e205c2140a1f68525f7e70c39e270b09c3f9.jpg" } ] } ], "index": 9, "angle": 0, "type": "image_body" }, { "bbox": [ 302, 411, 525, 447 ], "lines": [ { "bbox": [ 302, 411, 525, 447 ], "spans": [ { "bbox": [ 302, 411, 525, 447 ], "type": "text", "content": "Figure 6: Results of four open-source models on all tasks in " }, { "bbox": [ 302, 411, 525, 447 ], "type": "inline_equation", "content": "100" }, { "bbox": [ 302, 411, 525, 447 ], "type": "text", "content": "-LongBench, showing their average scores of all eight tasks at different context lengths." } ] } ], "index": 10, "angle": 0, "type": "image_caption" } ], "index": 9 }, { "bbox": [ 302, 458, 504, 485 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 458, 504, 485 ], "spans": [ { "bbox": [ 302, 458, 504, 485 ], "type": "text", "content": "4.2 Verification of the effectiveness of the proposed metric" } ] } ], "index": 11 }, { "bbox": [ 301, 490, 526, 678 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 301, 490, 526, 678 ], "spans": [ { "bbox": [ 301, 490, 526, 678 ], "type": "text", "content": "Following the setting of Lu et al. (2024), we compare two long-context enhancement methods, NTK and PI, using LongBench and 100-LongBench. On 100-LongBench, we evaluate performances with two metrics: score and LongScore " }, { "bbox": [ 301, 490, 526, 678 ], "type": "inline_equation", "content": "(LC)" }, { "bbox": [ 301, 490, 526, 678 ], "type": "text", "content": ". We include three evaluations to further validate the discriminative power and practical value of our proposed LongScore metric. These comparisons were chosen to reflect real-world modeling choices and align with community intuition: (1) NTK vs. 
PI on long-context tasks, (2) performance of LLaMA3-8B-Instruct under different RoPE theta ratios, and (3) Gemini-1.5 model variants like Gemini-1.5-Flash and Gemini-1.5-Pro from the HELMET benchmark." } ] } ], "index": 12 }, { "bbox": [ 302, 681, 527, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 681, 527, 775 ], "spans": [ { "bbox": [ 302, 681, 527, 775 ], "type": "text", "content": "There are several reasons why we choose these three comparisons: (1) NTK and PI are chosen for comparison because it is well-established that NTK provides a more fine-grained extension than PI. (2) We choose LLaMA 3-8B-Instruct (8k claimed context length) with different RoPE theta ratios. Generally speaking, appropriately increasing the RoPE" } ] } ], "index": 13 } ], "discarded_blocks": [ { "bbox": [ 284, 780, 312, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 284, 780, 312, 791 ], "spans": [ { "bbox": [ 284, 780, 312, 791 ], "type": "text", "content": "17565" } ] } ], "index": 14 } ], "page_size": [ 595, 841 ], "page_idx": 5 }, { "para_blocks": [ { "type": "table", "bbox": [ 70, 117, 523, 172 ], "blocks": [ { "bbox": [ 67, 69, 526, 116 ], "lines": [ { "bbox": [ 67, 69, 526, 116 ], "spans": [ { "bbox": [ 67, 69, 526, 116 ], "type": "text", "content": "Table 5: Results of four models' rankings in Ruler (Hsieh et al., 2024) on different metrics. Base Ability represents the model's score with a 4k-length context. Avg represents the average of scores excluding the base score. " }, { "bbox": [ 67, 69, 526, 116 ], "type": "inline_equation", "content": "95.8_{(1)}" }, { "bbox": [ 67, 69, 526, 116 ], "type": "text", "content": " indicates that the current model has a score of 95.8 at the given context length, with a ranking of 1. LC represents the score under our proposed metric, LongScore." 
} ] } ], "index": 0, "angle": 0, "type": "table_caption" }, { "bbox": [ 70, 117, 523, 172 ], "lines": [ { "bbox": [ 70, 117, 523, 172 ], "spans": [ { "bbox": [ 70, 117, 523, 172 ], "type": "table", "html": "
<table>
<tr><th rowspan="2">Models</th><th rowspan="2">Claimed Length</th><th rowspan="2">Base Ability</th><th colspan="2">8k</th><th colspan="2">16k</th><th colspan="2">32k</th><th colspan="2">64k</th><th colspan="2">128k</th><th colspan="2">Avg</th></tr>
<tr><th>score</th><th>LC</th><th>score</th><th>LC</th><th>score</th><th>LC</th><th>score</th><th>LC</th><th>score</th><th>LC</th><th>score</th><th>LC</th></tr>
<tr><td>Llama3.1 (70B)</td><td>128K</td><td>96.5(1)</td><td>95.8(1)</td><td>-0.7(2)</td><td>95.4(1)</td><td>-1.1(1)</td><td>94.8(1)</td><td>-1.7(1)</td><td>88.4(1)</td><td>-8.3(1)</td><td>66.6(2)</td><td>-30.9(3)</td><td>88.2(1)</td><td>-8.6(2)</td></tr>
<tr><td>Yi (34B) (Young et al., 2024)</td><td>200K</td><td>93.3(2)</td><td>92.2(3)</td><td>-1.1(3)</td><td>91.3(2)</td><td>-2.1(2)</td><td>87.5(2)</td><td>-6.2(2)</td><td>83.2(2)</td><td>-10.8(2)</td><td>77.3(1)</td><td>-17.1(1)</td><td>86.3(2)</td><td>-7.5(1)</td></tr>
<tr><td>Phi-medium (14B)</td><td>128K</td><td>93.3(3)</td><td>93.2(2)</td><td>-0.1(1)</td><td>91.1(2)</td><td>-2.3(3)</td><td>86.8(3)</td><td>-6.9(3)</td><td>78.6(3)</td><td>-15.7(3)</td><td>46.1(4)</td><td>-50.5(4)</td><td>79.1(3)</td><td>-15.1(4)</td></tr>
<tr><td>LWM (7B) (Liu et al., 2024a)</td><td>1M</td><td>82.3(4)</td><td>78.4(4)</td><td>-4.70(4)</td><td>73.7(4)</td><td>-10.4(4)</td><td>69.1(4)</td><td>-16.0(4)</td><td>68.1(4)</td><td>-17.2(4)</td><td>65.0(3)</td><td>-21.0(2)</td><td>70.8(4)</td><td>-13.9(3)</td></tr>
</table>
", "image_path": "8e29bbffdc9536beb016c594444a85474bc7cf86e0ab5414783831553d271f1f.jpg" } ] } ], "index": 1, "angle": 0, "type": "table_body" } ], "index": 1 }, { "type": "table", "bbox": [ 70, 256, 523, 342 ], "blocks": [ { "bbox": [ 66, 183, 526, 255 ], "lines": [ { "bbox": [ 66, 183, 526, 255 ], "spans": [ { "bbox": [ 66, 183, 526, 255 ], "type": "text", "content": "Table 6: Comparison of models and methods under our proposed LongScore metric. We present three evaluations to validate the discriminative power of LongScore: (1) NTK vs. PI on 100-LongBench; (2) LLaMA3-8B with different RoPE theta ratios; (3) Gemini-1.5 variants from the HELMET benchmark. In all cases, LongScore reflects performance differences that align with common understanding (e.g., NTK > PI, Gemini-Pro > Gemini-Flash), while amplifying meaningful gaps that are not visible with raw accuracy. The results highlight the discriminative ability and effectiveness of our proposed benchmark and metric." } ] } ], "index": 2, "angle": 0, "type": "table_caption" }, { "bbox": [ 70, 256, 523, 342 ], "lines": [ { "bbox": [ 70, 256, 523, 342 ], "spans": [ { "bbox": [ 70, 256, 523, 342 ], "type": "table", "html": "
<table>
<tr><th>Benchmark</th><th>Model / Method</th><th>base</th><th>8k</th><th>16k</th><th>24k / 32k</th><th>48k / 64k</th><th>128k / 256k</th><th>avg (score)</th><th>avg (LongScore)</th></tr>
<tr><td rowspan="2">100-LongBench</td><td>PI</td><td>19.18</td><td>16.47</td><td>17.67</td><td>17.10</td><td>17.67</td><td>0.44</td><td>13.87</td><td>-27.68</td></tr>
<tr><td>NTK</td><td>19.39</td><td>15.72</td><td>16.53</td><td>16.70</td><td>17.17</td><td>12.88</td><td>15.83</td><td>-18.40</td></tr>
<tr><td rowspan="2">100-LongBench</td><td>LLaMA3-8B (ratio=1)</td><td>35.37</td><td>37.08</td><td>1.45</td><td>1.87</td><td>0.52</td><td>0.99</td><td>7.13</td><td>-79.84</td></tr>
<tr><td>LLaMA3-8B (ratio=64)</td><td>32.52</td><td>31.94</td><td>25.34</td><td>26.08</td><td>26.94</td><td>1.63</td><td>18.83</td><td>-42.12</td></tr>
<tr><td rowspan="2">HELMET</td><td>Gemini-1.5-Flash</td><td>59.6</td><td>-</td><td>60.2</td><td>58.1</td><td>55.0</td><td>50.7</td><td>56.00</td><td>-6.04</td></tr>
<tr><td>Gemini-1.5-Pro</td><td>59.5</td><td>-</td><td>60.1</td><td>59.9</td><td>57.0</td><td>54.1</td><td>57.77</td><td>-2.90</td></tr>
</table>
", "image_path": "e65cf30e28aba0665d6f00d621339dff5f9b857137d8e19be63fda58cc6d6d10.jpg" } ] } ], "index": 3, "angle": 0, "type": "table_body" } ], "index": 3 }, { "bbox": [ 67, 362, 290, 417 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 362, 290, 417 ], "spans": [ { "bbox": [ 67, 362, 290, 417 ], "type": "text", "content": "theta improves the model's long-context capability (within a reasonable extent). (3) We choose Gemini-1.5-Flash and Gemini-1.5-Pro because they have an obvious difference in long-context ability." } ] } ], "index": 4 }, { "bbox": [ 67, 417, 291, 593 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 417, 291, 593 ], "spans": [ { "bbox": [ 67, 417, 291, 593 ], "type": "text", "content": "On the LongBench tasks, both the NTK and PI methods perform similarly, failing to provide a clear distinction. However, as shown in Table 6, on 100-LongBench, the differences between NTK and PI become much more apparent across the selected tasks, effectively differentiating the two methods. Moreover, the differences between NTK and PI measured by LongScore are greater than those measured by the traditional metric, showing that LongScore highlights these differences better and is a more effective tool for distinguishing long-context capabilities." } ] } ], "index": 5 }, { "bbox": [ 67, 594, 291, 715 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 594, 291, 715 ], "spans": [ { "bbox": [ 67, 594, 291, 715 ], "type": "text", "content": "In the other pairwise comparisons, LongScore readings show a much more pronounced difference compared to the original scoring metrics of the datasets, while the win-loss order remains consistent with our general understanding of a model or method's long-context capability (NTK > PI, ratio = 64 > ratio = 1, Gemini-1.5-Pro > Gemini-1.5-Flash). These results highlight the discriminative power and effectiveness of our LongScore." 
} ] } ], "index": 6 }, { "bbox": [ 67, 716, 268, 741 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 67, 716, 268, 741 ], "spans": [ { "bbox": [ 67, 716, 268, 741 ], "type": "text", "content": "4.3 Experiments on frontier open-source LLMs" } ] } ], "index": 7 }, { "bbox": [ 67, 748, 291, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 748, 291, 775 ], "spans": [ { "bbox": [ 67, 748, 291, 775 ], "type": "text", "content": "This section introduces the experiments conducted using 100-LongBench and the proposed met" } ] } ], "index": 8 }, { "bbox": [ 302, 362, 525, 389 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 362, 525, 389 ], "spans": [ { "bbox": [ 302, 362, 525, 389 ], "type": "text", "content": "ric, aimed at evaluating the long-context capabilities of various popular open-source large models." } ] } ], "index": 9 }, { "bbox": [ 301, 391, 525, 527 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 301, 391, 525, 527 ], "spans": [ { "bbox": [ 301, 391, 525, 527 ], "type": "text", "content": "Due to GPU resource limitations, we select four models, all of which can generate outputs with a " }, { "bbox": [ 301, 391, 525, 527 ], "type": "inline_equation", "content": "256k" }, { "bbox": [ 301, 391, 525, 527 ], "type": "text", "content": " context length. 
For each of the eight tasks, we generate 100 samples at each context length " }, { "bbox": [ 301, 391, 525, 527 ], "type": "inline_equation", "content": "(8k, 16k, 32k, 64k, 128k)" }, { "bbox": [ 301, 391, 525, 527 ], "type": "text", "content": " to obtain the scores, using the performance at " }, { "bbox": [ 301, 391, 525, 527 ], "type": "inline_equation", "content": "2k" }, { "bbox": [ 301, 391, 525, 527 ], "type": "text", "content": ", " }, { "bbox": [ 301, 391, 525, 527 ], "type": "inline_equation", "content": "4k" }, { "bbox": [ 301, 391, 525, 527 ], "type": "text", "content": ", and " }, { "bbox": [ 301, 391, 525, 527 ], "type": "inline_equation", "content": "6k" }, { "bbox": [ 301, 391, 525, 527 ], "type": "text", "content": " as Base Ability. Finally, the average scores across all tasks are computed. Table 4 presents average results and the corresponding rankings. Figure 6 displays average scores at each context length." } ] } ], "index": 10 }, { "bbox": [ 302, 529, 526, 731 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 529, 526, 731 ], "spans": [ { "bbox": [ 302, 529, 526, 731 ], "type": "text", "content": "Here we explain our choice of context lengths (e.g., 2k, 4k, 6k) for measuring Base Ability. We evaluate eight models spanning the LLaMA 3.1, Phi-3, and Qwen 2.5 families. These models are typically pretrained with context lengths of 4K or 8K tokens before continued pretraining for long-context extension. Given this, we generalize that most models in our study have a pre-extension context window of either 4K or 8K. To probe their base reasoning ability, we evaluate performance under 2K, 4K, and 6K context lengths. These values are chosen to provide representative coverage of the model's original pretraining range without exceeding it, thereby offering a stable measure of Base Ability." 
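Under these choices, the per-model aggregation reported in Table 4 can be sketched as follows. This is a sketch under assumed names (per-length average scores already computed), not the released evaluation code.

```python
# Sketch of the Table 4 aggregation: Base Ability from the short lengths,
# average raw score over the long lengths, and the average LongScore.
# `scores` maps a context-length label to a model's average task score.

LONG_LENGTHS = ["8k", "16k", "32k", "64k", "128k"]

def aggregate(scores):
    base = (scores["2k"] + scores["4k"] + scores["6k"]) / 3
    avg_score = sum(scores[l] for l in LONG_LENGTHS) / len(LONG_LENGTHS)
    avg_lc = sum((scores[l] - base) / base for l in LONG_LENGTHS) / len(LONG_LENGTHS)
    return avg_score, avg_lc
```

Ranking models by `avg_score` tracks Base Ability closely, whereas ranking by `avg_lc` isolates how gracefully each model degrades at long context lengths.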
} ] } ], "index": 11 }, { "bbox": [ 302, 735, 525, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 735, 525, 775 ], "spans": [ { "bbox": [ 302, 735, 525, 775 ], "type": "text", "content": "Interestingly, as shown in Table 4, the rankings obtained by the traditional metric are almost identical to the rankings based on Base Ability." } ] } ], "index": 12 } ], "discarded_blocks": [ { "bbox": [ 284, 780, 312, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 284, 780, 312, 791 ], "spans": [ { "bbox": [ 284, 780, 312, 791 ], "type": "text", "content": "17566" } ] } ], "index": 13 } ], "page_size": [ 595, 841 ], "page_idx": 6 }, { "para_blocks": [ { "type": "image", "bbox": [ 70, 71, 183, 185 ], "blocks": [ { "bbox": [ 70, 71, 183, 185 ], "lines": [ { "bbox": [ 70, 71, 183, 185 ], "spans": [ { "bbox": [ 70, 71, 183, 185 ], "type": "image", "image_path": "bb313bc075a14be6ea89c195feca44e08c3292082c845f31a6b89864880e6b85.jpg" } ] } ], "index": 0, "angle": 0, "type": "image_body" }, { "bbox": [ 67, 187, 525, 223 ], "lines": [ { "bbox": [ 67, 187, 525, 223 ], "spans": [ { "bbox": [ 67, 187, 525, 223 ], "type": "text", "content": "Figure 7: Results of eight open-source models on eight tasks are presented, with their scores calculated using the LongScore metric. Each marker represents a single model. The darker the color of the line, the stronger the base ability of the model." 
} ] } ], "index": 4, "angle": 0, "type": "image_caption" } ], "index": 0 }, { "type": "image", "bbox": [ 189, 71, 297, 186 ], "blocks": [ { "bbox": [ 189, 71, 297, 186 ], "lines": [ { "bbox": [ 189, 71, 297, 186 ], "spans": [ { "bbox": [ 189, 71, 297, 186 ], "type": "image", "image_path": "c229f225f1a7503d19ddc8d65a8f95d55223e6e080251e92021130d79db44b4a.jpg" } ] } ], "index": 1, "angle": 0, "type": "image_body" } ], "index": 1 }, { "type": "image", "bbox": [ 298, 71, 411, 186 ], "blocks": [ { "bbox": [ 298, 71, 411, 186 ], "lines": [ { "bbox": [ 298, 71, 411, 186 ], "spans": [ { "bbox": [ 298, 71, 411, 186 ], "type": "image", "image_path": "26e42a10b07791d5a1785de1cb7db4ea0bc467321cd985822ef32f43adc271a2.jpg" } ] } ], "index": 2, "angle": 0, "type": "image_body" } ], "index": 2 }, { "type": "image", "bbox": [ 411, 71, 524, 186 ], "blocks": [ { "bbox": [ 411, 71, 524, 186 ], "lines": [ { "bbox": [ 411, 71, 524, 186 ], "spans": [ { "bbox": [ 411, 71, 524, 186 ], "type": "image", "image_path": "92f1964e4db8a407e1d006cf834a04dd1c36b64a35fac1cedb8208c580645511.jpg" } ] } ], "index": 3, "angle": 0, "type": "image_body" } ], "index": 3 }, { "type": "image", "bbox": [ 70, 234, 205, 407 ], "blocks": [ { "bbox": [ 70, 234, 205, 407 ], "lines": [ { "bbox": [ 70, 234, 205, 407 ], "spans": [ { "bbox": [ 70, 234, 205, 407 ], "type": "image", "image_path": "0fee1c0d5968e817e22ba8f435a2de7263e6bae1fa28182928d933c697dba8dc.jpg" } ] } ], "index": 5, "angle": 0, "type": "image_body" }, { "bbox": [ 67, 416, 525, 441 ], "lines": [ { "bbox": [ 67, 416, 525, 441 ], "spans": [ { "bbox": [ 67, 416, 525, 441 ], "type": "text", "content": "Figure 8: Results of eight models on " }, { "bbox": [ 67, 416, 525, 441 ], "type": "inline_equation", "content": "100" }, { "bbox": [ 67, 416, 525, 441 ], "type": "text", "content": "-LongBench using the LongScore metric. 
The gray shading indicates either anomalous models' scores or cases where the model is unable to generate outputs for " }, { "bbox": [ 67, 416, 525, 441 ], "type": "inline_equation", "content": "256k" }, { "bbox": [ 67, 416, 525, 441 ], "type": "text", "content": "-long contexts." } ] } ], "index": 9, "angle": 0, "type": "image_caption" } ], "index": 5 }, { "type": "image", "bbox": [ 205, 234, 295, 407 ], "blocks": [ { "bbox": [ 205, 234, 295, 407 ], "lines": [ { "bbox": [ 205, 234, 295, 407 ], "spans": [ { "bbox": [ 205, 234, 295, 407 ], "type": "image", "image_path": "098aebfaa00305a0ff2e213c2ea10b732e9dadc3c7d34a2221a453be932a1e1e.jpg" } ] } ], "index": 6, "angle": 0, "type": "image_body" } ], "index": 6 }, { "type": "image", "bbox": [ 297, 234, 400, 407 ], "blocks": [ { "bbox": [ 297, 234, 400, 407 ], "lines": [ { "bbox": [ 297, 234, 400, 407 ], "spans": [ { "bbox": [ 297, 234, 400, 407 ], "type": "image", "image_path": "27a44e1b3a1895302bf818b0fe598139d48b0894a248ff8c7920d7b622cacf66.jpg" } ] } ], "index": 7, "angle": 0, "type": "image_body" } ], "index": 7 }, { "type": "image", "bbox": [ 401, 234, 524, 407 ], "blocks": [ { "bbox": [ 401, 234, 524, 407 ], "lines": [ { "bbox": [ 401, 234, 524, 407 ], "spans": [ { "bbox": [ 401, 234, 524, 407 ], "type": "image", "image_path": "3eae947a0a306b9d5980742558e5b048b71d4da8ee66b16693e8e140ab911fef.jpg" } ] } ], "index": 8, "angle": 0, "type": "image_body" } ], "index": 8 }, { "bbox": [ 67, 462, 291, 678 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 462, 291, 678 ], "spans": [ { "bbox": [ 67, 462, 291, 678 ], "type": "text", "content": "However, rankings using LongScore metric show a significant difference from Base Ability rankings, as demonstrated by models like Qwen 2.5-14B-Instruct and Qwen 2.5-7B-Instruct. From Figure 6, it can be observed that while these two models have higher scores at shorter context lengths (e.g. 8k, 16k), their scores drop significantly at longer context lengths (128k, 256k). 
This indicates that current long-text evaluation metrics are heavily influenced by Base Ability, while LongScore (the metric proposed in this paper) separates base ability from long-context capability, providing a more accurate reflection of the model's long-context performance. For comparisons of more open-source models on " }, { "bbox": [ 67, 462, 291, 678 ], "type": "inline_equation", "content": "\underline{100}" }, { "bbox": [ 67, 462, 291, 678 ], "type": "text", "content": "-LongBench and their long-context capability evaluation, please refer to Appendix A.5." } ] } ], "index": 10 }, { "bbox": [ 67, 679, 291, 746 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 679, 291, 746 ], "spans": [ { "bbox": [ 67, 679, 291, 746 ], "type": "text", "content": "We also present the results of eight models from four LLM family trees (Llama 3.1, Llama 3.2, Qwen 2.5 and Phi 3) on LongBench. The evaluation uses the LongScore metric, and detailed results for each task are shown in Figure 7 and Figure 8." } ] } ], "index": 11 }, { "bbox": [ 67, 748, 291, 774 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 748, 291, 774 ], "spans": [ { "bbox": [ 67, 748, 291, 774 ], "type": "text", "content": "Long-context ability is important in certain specialized domains such as healthcare and law. To" } ] } ], "index": 12 }, { "bbox": [ 302, 462, 526, 556 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 462, 526, 556 ], "spans": [ { "bbox": [ 302, 462, 526, 556 ], "type": "text", "content": "this end, we additionally include several domain-specific long-context tasks, including Medical-Summary, MedOdyssey (Fan et al., 2024), and CaseSumm (Heddaya et al., 2024). We re-evaluate the performance of the LLaMA 3.2-1B-Instruct model with and without these datasets. The detailed results are shown in Appendix A.6." 
} ] } ], "index": 13 }, { "bbox": [ 302, 567, 501, 592 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 567, 501, 592 ], "spans": [ { "bbox": [ 302, 567, 501, 592 ], "type": "text", "content": "4.4 Experiments on Ruler with different metrics" } ] } ], "index": 14 }, { "bbox": [ 301, 599, 526, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 301, 599, 526, 775 ], "spans": [ { "bbox": [ 301, 599, 526, 775 ], "type": "text", "content": "We utilize data from Ruler (Hsieh et al., 2024), using a " }, { "bbox": [ 301, 599, 526, 775 ], "type": "inline_equation", "content": "4k" }, { "bbox": [ 301, 599, 526, 775 ], "type": "text", "content": "-length context to represent the model's base ability. The results are shown in Table 5, where we evaluate four models' performance at different context lengths using both LongScore and the traditional metric. Compared to LLaMA 3.1 (70B), Yi (34B) (Young et al., 2024) has a slightly lower overall score before reaching 128k context length, but at 128k, Yi (34B) performs significantly better. Similarly, compared to Phi3-medium (14B), LWM (7B) shows weaker base ability and short-context performance but clearly outperforms Phi3-medium at 128k. If ranking is based solely on scores," } ] } ], "index": 15 } ], "discarded_blocks": [ { "bbox": [ 284, 780, 312, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 284, 780, 312, 791 ], "spans": [ { "bbox": [ 284, 780, 312, 791 ], "type": "text", "content": "17567" } ] } ], "index": 16 } ], "page_size": [ 595, 841 ], "page_idx": 7 }, { "para_blocks": [ { "bbox": [ 67, 71, 290, 126 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 71, 290, 126 ], "spans": [ { "bbox": [ 67, 71, 290, 126 ], "type": "text", "content": "LLaMA 3.1 (70B) and Phi3-medium (14B) would be ranked higher than their counterparts, but this does not reflect their true long-context capabilities. By using LongScore, we correct this discrepancy." 
} ] } ], "index": 0 }, { "bbox": [ 67, 130, 166, 142 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 67, 130, 166, 142 ], "spans": [ { "bbox": [ 67, 130, 166, 142 ], "type": "text", "content": "5 Related Works" } ] } ], "index": 1 }, { "bbox": [ 67, 152, 290, 273 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 152, 290, 273 ], "spans": [ { "bbox": [ 67, 152, 290, 273 ], "type": "text", "content": "In this section, we review relevant prior research connected to our study. We summarize cutting-edge models known for their strong long-text processing capabilities, explore methods designed to enhance these abilities, and examine the benchmarks commonly used to assess long-text proficiency. Additionally, we discuss the limitations of existing benchmarks, which fail to disentangle Base Ability from true long-context capability." } ] } ], "index": 2 }, { "bbox": [ 69, 275, 291, 638 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 69, 275, 291, 638 ], "spans": [ { "bbox": [ 69, 275, 291, 638 ], "type": "text", "content": "Long-context language models. Both open-source and closed-source state-of-the-art models now support extended context lengths of up to 128K tokens or more, including GPT-4 (Achiam et al., 2023), Gemini (Team et al., 2024), Claude (Caruccio et al., 2024), LLaMA-3 (Dubey et al., 2024), and Phi-3 (Abdin et al., 2024). These models typically achieve long-context capabilities through a combination of improved pretraining and post-training techniques. For instance, many models adopt two-stage or continued pretraining pipelines, where an initial short context window (e.g., 4K or 8K) is later extended to longer lengths (e.g., 128K) using scalable attention mechanisms such as FlashAttention (Dao et al., 2022) and optimized positional encoding schemes (Li et al., 2021; Xiong et al., 2023; Hsu et al., 2024). 
This trend is well-documented in recent technical reports (Yang et al., 2024; Abdin et al., 2024; Dubey et al., 2024), which highlight how careful adjustments to training schedules, data distribution, and architecture design contribute to stable performance in extreme long-context settings. Nonetheless, despite these advancements, effectively evaluating and comparing the true reasoning ability of such models in long-context scenarios remains a significant challenge in practice." } ] } ], "index": 3 }, { "bbox": [ 67, 640, 291, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 640, 291, 775 ], "spans": [ { "bbox": [ 67, 640, 291, 775 ], "type": "text", "content": "Long context methods. Many studies have explored methods to extend the context window length of models during fine-tuning, with some approaches even achieving this without fine-tuning. Techniques such as Position interpolation (PI) (Chen et al., 2023a), NTK (Peng and Quesnelle, 2023), YaRN (Peng et al., 2023) and SelfExtend (Jin et al., 2024) manipulate RoPE (Rotary Position Embedding) (Su et al., 2024) to perform length extension. Other methods, including Retrievers (Xu" } ] } ], "index": 4 }, { "bbox": [ 302, 71, 526, 232 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 71, 526, 232 ], "spans": [ { "bbox": [ 302, 71, 526, 232 ], "type": "text", "content": "et al., 2023), StreamingLLM (Xiao et al., 2023b), LM-Infinite (Han et al., 2024), Longlora (Chen et al., 2023b), Inf-LLM (Xiao et al., 2024) and Landmark (Mohtashami and Jaggi, 2023), focus on designing new attention architectures or exploiting specific phenomena in attention mechanisms (Sun et al., 2024) to achieve length extension. Additionally, some works (Jiang et al., 2023; Li et al., 2023b) focus on reducing the length-extension problem to length compression via a summarization step, where long contexts are compressed or summarized before being processed by the model." 
} ] } ], "index": 5 }, { "bbox": [ 302, 236, 526, 452 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 236, 526, 452 ], "spans": [ { "bbox": [ 302, 236, 526, 452 ], "type": "text", "content": "Long-context benchmarks. LongBench (Bai et al., 2023) and L-Eval (An et al., 2023) are early benchmarks for evaluating long-context capabilities. Later benchmarks, such as " }, { "bbox": [ 302, 236, 526, 452 ], "type": "inline_equation", "content": "\infty" }, { "bbox": [ 302, 236, 526, 452 ], "type": "text", "content": "-Bench (Zhang et al., 2024), extended the context length of datasets further. Subsequently, synthetic task-related benchmarks like NIAH (Needle In A Haystack) and Ruler (Hsieh et al., 2024) emerged, focusing not only on evaluating contextual capabilities but also on examining models' sensitivity to the positional appearance of text. More recently, benchmarks such as HELMET (Yen et al., 2024) and LVEval (Yuan et al., 2024) introduced controllable context lengths and LLM-based metrics. Building on these, this work further accounts for prior model knowledge and introduces a novel metric." } ] } ], "index": 6 }, { "bbox": [ 302, 464, 381, 476 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 464, 381, 476 ], "spans": [ { "bbox": [ 302, 464, 381, 476 ], "type": "text", "content": "6 Conclusion" } ] } ], "index": 7 }, { "bbox": [ 302, 491, 526, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 491, 526, 775 ], "spans": [ { "bbox": [ 302, 491, 526, 775 ], "type": "text", "content": "Our benchmark and metric address key shortcomings in current evaluation methodologies, such as the inability to isolate long-context reasoning from baseline performance and reliance on insufficiently representative tasks. By incorporating real-world data, diverse task types and difficulties, and a novel metric (LongScore), 100-LongBench provides a robust platform to evaluate and compare LLMs across varying context lengths. 
This allows for a deeper understanding of how models handle extended contexts while minimizing the influence of prior knowledge or base abilities. As LLMs continue to evolve, the ability to rigorously assess their long-context reasoning will play a critical role in identifying bottlenecks and guiding the design of next-generation models. Our approach sets a new standard for assessing LLMs, paving the way for more robust innovations in long-context evaluation. Furthermore, it will provide actionable insights for optimizing model architectures and training strategies to enhance long-context capabilities." } ] } ], "index": 8 } ], "discarded_blocks": [ { "bbox": [ 284, 780, 312, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 284, 780, 312, 791 ], "spans": [ { "bbox": [ 284, 780, 312, 791 ], "type": "text", "content": "17568" } ] } ], "index": 9 } ], "page_size": [ 595, 841 ], "page_idx": 8 }, { "para_blocks": [ { "bbox": [ 68, 71, 131, 84 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 68, 71, 131, 84 ], "spans": [ { "bbox": [ 68, 71, 131, 84 ], "type": "text", "content": "Limitations" } ] } ], "index": 0 }, { "bbox": [ 66, 93, 293, 309 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 66, 93, 293, 309 ], "spans": [ { "bbox": [ 66, 93, 293, 309 ], "type": "text", "content": "The proposed metric requires models to demonstrate relatively strong base ability on the task. If a model's base ability is insufficient, subsequent evaluations of long-context capabilities may exhibit significant fluctuations, making it less effective for comparing models' long-context performance. In addition, when constructing the benchmark, it is necessary to select articles of varying lengths to assemble into noisy contexts. For shorter target lengths, such as 2k tokens, the selected articles should also have shorter lengths (preferably less than 1k tokens) to ensure the context can be formed with two or more documents. 
Therefore, it is essential to collect texts of diverse lengths, particularly shorter ones, to enable effective assembly of the desired contexts." } ] } ], "index": 1 }, { "bbox": [ 67, 320, 170, 334 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 67, 320, 170, 334 ], "spans": [ { "bbox": [ 67, 320, 170, 334 ], "type": "text", "content": "Acknowledgements" } ] } ], "index": 2 }, { "bbox": [ 66, 343, 291, 491 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 66, 343, 291, 491 ], "spans": [ { "bbox": [ 66, 343, 291, 491 ], "type": "text", "content": "This research was partially supported by NSF Awards OAC-2117439. Further, this work made use of the High Performance Computing Resource in the Core Facility for Advanced Research Computing at Case Western Reserve University (CWRU). We give special thanks to the CWRU HPC team for their prompt and professional help and maintenance. The views and conclusions in this paper are those of the authors and do not represent the views of any funding or supporting agencies." } ] } ], "index": 3 }, { "bbox": [ 68, 514, 127, 527 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 68, 514, 127, 527 ], "spans": [ { "bbox": [ 68, 514, 127, 527 ], "type": "text", "content": "References" } ] } ], "index": 4 }, { "bbox": [ 69, 534, 291, 774 ], "type": "list", "angle": 0, "index": 9, "blocks": [ { "bbox": [ 69, 534, 291, 602 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 534, 291, 602 ], "spans": [ { "bbox": [ 69, 534, 291, 602 ], "type": "text", "content": "Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, et al. 2024. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219." 
} ] } ], "index": 5 }, { "bbox": [ 69, 611, 291, 666 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 611, 291, 666 ], "spans": [ { "bbox": [ 69, 611, 291, 666 ], "type": "text", "content": "Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774." } ] } ], "index": 6 }, { "bbox": [ 69, 676, 291, 731 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 676, 291, 731 ], "spans": [ { "bbox": [ 69, 676, 291, 731 ], "type": "text", "content": "Rishabh Agarwal, Avi Singh, Lei M Zhang, Bernd Bohnet, Luis Rosias, Stephanie Chan, Biao Zhang, Ankesh Anand, Zaheer Abbas, Azade Nova, et al. 2024. Many-shot in-context learning. arXiv preprint arXiv:2404.11018." } ] } ], "index": 7 }, { "bbox": [ 69, 740, 290, 774 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 740, 290, 774 ], "spans": [ { "bbox": [ 69, 740, 290, 774 ], "type": "text", "content": "Chenxin An, Shansan Gong, Ming Zhong, Xingjian Zhao, Mukai Li, Jun Zhang, Lingpeng Kong, and Xipeng Qiu. 2023. L-eval: Instituting standardized" } ] } ], "index": 8 } ], "sub_type": "ref_text" }, { "bbox": [ 304, 72, 526, 773 ], "type": "list", "angle": 0, "index": 21, "blocks": [ { "bbox": [ 314, 72, 524, 95 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 314, 72, 524, 95 ], "spans": [ { "bbox": [ 314, 72, 524, 95 ], "type": "text", "content": "evaluation for long context language models. arXiv preprint arXiv:2307.11088." } ] } ], "index": 10 }, { "bbox": [ 305, 106, 526, 162 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 305, 106, 526, 162 ], "spans": [ { "bbox": [ 305, 106, 526, 162 ], "type": "text", "content": "Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, et al. 2023. 
Longbench: A bilingual, multitask benchmark for long context understanding. arXiv preprint arXiv:2308.14508." } ] } ], "index": 11 }, { "bbox": [ 304, 173, 526, 249 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 173, 526, 249 ], "spans": [ { "bbox": [ 304, 173, 526, 249 ], "type": "text", "content": "Loredana Caruccio, Stefano Cirillo, Giuseppe Polese, Giandomenico Solimando, Shanmugam Sundaramurthy, and Genoveffa Tortora. 2024. Claude 2.0 large language model: Tackling a real-world classification problem with a new iterative prompt engineering approach. Intelligent Systems with Applications, 21:200336." } ] } ], "index": 12 }, { "bbox": [ 304, 262, 525, 306 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 262, 525, 306 ], "spans": [ { "bbox": [ 304, 262, 525, 306 ], "type": "text", "content": "Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. 2023a. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595." } ] } ], "index": 13 }, { "bbox": [ 304, 317, 526, 362 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 317, 526, 362 ], "spans": [ { "bbox": [ 304, 317, 526, 362 ], "type": "text", "content": "Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, and Jiaya Jia. 2023b. Longlora: Efficient fine-tuning of long-context large language models. arXiv preprint arXiv:2309.12307." } ] } ], "index": 14 }, { "bbox": [ 304, 373, 525, 428 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 373, 525, 428 ], "spans": [ { "bbox": [ 304, 373, 525, 428 ], "type": "text", "content": "Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022. Flashattention: Fast and memory-efficient exact attention with io-awareness. Advances in Neural Information Processing Systems, 35:16344-16359." 
} ] } ], "index": 15 }, { "bbox": [ 304, 440, 525, 497 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 440, 525, 497 ], "spans": [ { "bbox": [ 304, 440, 525, 497 ], "type": "text", "content": "Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783." } ] } ], "index": 16 }, { "bbox": [ 304, 507, 526, 552 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 507, 526, 552 ], "spans": [ { "bbox": [ 304, 507, 526, 552 ], "type": "text", "content": "Yongqi Fan, Hongli Sun, Kui Xue, Xiaofan Zhang, Shaoting Zhang, and Tong Ruan. 2024. Medodyssey: A medical domain benchmark for long context evaluation up to 200k tokens. Preprint, arXiv:2406.15019." } ] } ], "index": 17 }, { "bbox": [ 304, 563, 525, 607 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 563, 525, 607 ], "spans": [ { "bbox": [ 304, 563, 525, 607 ], "type": "text", "content": "Tianyu Gao, Alexander Wettig, Howard Yen, and Danqi Chen. 2024. How to train long-context language models (effectively). arXiv preprint arXiv:2410.02660." } ] } ], "index": 18 }, { "bbox": [ 304, 618, 526, 708 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 618, 526, 708 ], "spans": [ { "bbox": [ 304, 618, 526, 708 ], "type": "text", "content": "Chi Han, Qifan Wang, Hao Peng, Wenhan Xiong, Yu Chen, Heng Ji, and Sinong Wang. 2024. Lm-infinite: Zero-shot extreme length generalization for large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 3991-4008." 
} ] } ], "index": 19 }, { "bbox": [ 304, 719, 526, 773 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 719, 526, 773 ], "spans": [ { "bbox": [ 304, 719, 526, 773 ], "type": "text", "content": "Mourad Heddaya, Kyle MacMillan, Anup Malani, Hongyuan Mei, and Chenhao Tan. 2024. Casesumm: A large-scale dataset for long-context summarization from u.s. supreme court opinions. Preprint, arXiv:2501.00097." } ] } ], "index": 20 } ], "sub_type": "ref_text" } ], "discarded_blocks": [ { "bbox": [ 284, 780, 312, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 284, 780, 312, 791 ], "spans": [ { "bbox": [ 284, 780, 312, 791 ], "type": "text", "content": "17569" } ] } ], "index": 22 } ], "page_size": [ 595, 841 ], "page_idx": 9 }, { "para_blocks": [ { "bbox": [ 69, 72, 289, 772 ], "type": "list", "angle": 0, "index": 12, "blocks": [ { "bbox": [ 69, 72, 289, 127 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 72, 289, 127 ], "spans": [ { "bbox": [ 69, 72, 289, 127 ], "type": "text", "content": "Cheng-Ping Hsieh, Simeng Sun, Samuel Kriman, Shantanu Acharya, Dima Rekesh, Fei Jia, Yang Zhang, and Boris Ginsburg. 2024. Ruler: What's the real context size of your long-context language models? arXiv preprint arXiv:2404.06654." } ] } ], "index": 0 }, { "bbox": [ 69, 138, 289, 193 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 138, 289, 193 ], "spans": [ { "bbox": [ 69, 138, 289, 193 ], "type": "text", "content": "Pin-Lun Hsu, Yun Dai, Vignesh Kothapalli, Qingquan Song, Shao Tang, Siyu Zhu, Steven Shimizu, Shivam Sahni, Haowen Ning, and Yanning Chen. 2024. Liger kernel: Efficient triton kernels for llm training. arXiv preprint arXiv:2410.10989." 
} ] } ], "index": 1 }, { "bbox": [ 69, 204, 289, 259 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 204, 289, 259 ], "spans": [ { "bbox": [ 69, 204, 289, 259 ], "type": "text", "content": "Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2023. Longllmlingua: Accelerating and enhancing llms in long context scenarios via prompt compression. arXiv preprint arXiv:2310.06839." } ] } ], "index": 2 }, { "bbox": [ 69, 270, 289, 323 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 270, 289, 323 ], "spans": [ { "bbox": [ 69, 270, 289, 323 ], "type": "text", "content": "Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, and Xia Hu. 2024. Llm maybe longlm: Self-extend llm context window without tuning. Preprint, arXiv:2401.01325." } ] } ], "index": 3 }, { "bbox": [ 69, 336, 289, 379 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 336, 289, 379 ], "spans": [ { "bbox": [ 69, 336, 289, 379 ], "type": "text", "content": "Jiaqi Li, Mengmeng Wang, Zilong Zheng, and Muhan Zhang. 2023a. Loogle: Can long-context language models understand long contexts? arXiv preprint arXiv:2311.04939." } ] } ], "index": 4 }, { "bbox": [ 69, 391, 289, 433 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 391, 289, 433 ], "spans": [ { "bbox": [ 69, 391, 289, 433 ], "type": "text", "content": "Shenggui Li, Fuzhao Xue, Chaitanya Baranwal, Yongbin Li, and Yang You. 2021. Sequence parallelism: Long sequence training from system perspective. arXiv preprint arXiv:2105.13120." } ] } ], "index": 5 }, { "bbox": [ 69, 444, 289, 488 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 444, 289, 488 ], "spans": [ { "bbox": [ 69, 444, 289, 488 ], "type": "text", "content": "Yucheng Li, Bo Dong, Chenghua Lin, and Frank Guerin. 2023b. Compressing context to enhance inference efficiency of large language models. arXiv preprint arXiv:2310.06201." 
} ] } ], "index": 6 }, { "bbox": [ 69, 500, 289, 543 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 500, 289, 543 ], "spans": [ { "bbox": [ 69, 500, 289, 543 ], "type": "text", "content": "Zhiyuan Li, Hong Liu, Denny Zhou, and Tengyu Ma. 2024. Chain of thought empowers transformers to solve inherently serial problems. arXiv preprint arXiv:2402.12875." } ] } ], "index": 7 }, { "bbox": [ 69, 555, 289, 587 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 555, 289, 587 ], "spans": [ { "bbox": [ 69, 555, 289, 587 ], "type": "text", "content": "Hao Liu, Wilson Yan, Matei Zaharia, and Pieter Abbeel. 2024a. World model on million-length video and language with blockwise ringattention. CoRR." } ] } ], "index": 8 }, { "bbox": [ 69, 598, 289, 653 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 598, 289, 653 ], "spans": [ { "bbox": [ 69, 598, 289, 653 ], "type": "text", "content": "Nelson F Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2024b. Lost in the middle: How language models use long contexts. Transactions of the Association for Computational Linguistics, 12:157-173." } ] } ], "index": 9 }, { "bbox": [ 69, 664, 289, 719 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 664, 289, 719 ], "spans": [ { "bbox": [ 69, 664, 289, 719 ], "type": "text", "content": "Yi Lu, Jing Nathan Yan, Songlin Yang, Justin T Chiu, Siyu Ren, Fei Yuan, Wenting Zhao, Zhiyong Wu, and Alexander M Rush. 2024. A controlled study on long context extension and generalization in llms. arXiv preprint arXiv:2409.12181." } ] } ], "index": 10 }, { "bbox": [ 69, 729, 289, 772 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 729, 289, 772 ], "spans": [ { "bbox": [ 69, 729, 289, 772 ], "type": "text", "content": "Amirkeivan Mohtashami and Martin Jaggi. 2023. Landmark attention: Random-access infinite context length for transformers. arXiv preprint arXiv:2305.16300." 
} ] } ], "index": 11 } ], "sub_type": "ref_text" }, { "bbox": [ 304, 72, 524, 774 ], "type": "list", "angle": 0, "index": 25, "blocks": [ { "bbox": [ 304, 72, 524, 116 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 72, 524, 116 ], "spans": [ { "bbox": [ 304, 72, 524, 116 ], "type": "text", "content": "Bowen Peng and Jeffrey Quesnelle. 2023. Ntk-aware scaled rope allows llama models to have extended " }, { "bbox": [ 304, 72, 524, 116 ], "type": "inline_equation", "content": "(8k+)" }, { "bbox": [ 304, 72, 524, 116 ], "type": "text", "content": " context size without any fine-tuning and minimal perplexity degradation." } ] } ], "index": 13 }, { "bbox": [ 304, 128, 524, 171 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 128, 524, 171 ], "spans": [ { "bbox": [ 304, 128, 524, 171 ], "type": "text", "content": "Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. 2023. Yarn: Efficient context window extension of large language models. arXiv preprint arXiv:2309.00071." } ] } ], "index": 14 }, { "bbox": [ 304, 185, 524, 228 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 185, 524, 228 ], "spans": [ { "bbox": [ 304, 185, 524, 228 ], "type": "text", "content": "Mingyang Song, Mao Zheng, and Xuan Luo. 2024. Counting-stars: A multi-evidence, position-aware, and scalable benchmark for evaluating long-context large language models. Preprint, arXiv:2403.11802." } ] } ], "index": 15 }, { "bbox": [ 304, 240, 524, 284 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 240, 524, 284 ], "spans": [ { "bbox": [ 304, 240, 524, 284 ], "type": "text", "content": "Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. 2024. Roformer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063." 
} ] } ], "index": 16 }, { "bbox": [ 304, 296, 524, 329 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 296, 524, 329 ], "spans": [ { "bbox": [ 304, 296, 524, 329 ], "type": "text", "content": "Mingjie Sun, Xinlei Chen, J Zico Kolter, and Zhuang Liu. 2024. Massive activations in large language models. arXiv preprint arXiv:2402.17762." } ] } ], "index": 17 }, { "bbox": [ 304, 341, 524, 407 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 341, 524, 407 ], "spans": [ { "bbox": [ 304, 341, 524, 407 ], "type": "text", "content": "Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530." } ] } ], "index": 18 }, { "bbox": [ 304, 418, 524, 462 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 418, 524, 462 ], "spans": [ { "bbox": [ 304, 418, 524, 462 ], "type": "text", "content": "Chonghua Wang, Haodong Duan, Songyang Zhang, Dahua Lin, and Kai Chen. 2024. Ada-leval: Evaluating long-context llms with length-adaptable benchmarks. Preprint, arXiv:2404.06480." } ] } ], "index": 19 }, { "bbox": [ 304, 474, 524, 539 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 474, 524, 539 ], "spans": [ { "bbox": [ 304, 474, 524, 539 ], "type": "text", "content": "Chaojun Xiao, Pengle Zhang, Xu Han, Guangxuan Xiao, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, and Maosong Sun. 2024. Infllm: Training-free long-context extrapolation for llms with an efficient context memory. In The Thirty-eighth Annual Conference on Neural Information Processing Systems." } ] } ], "index": 20 }, { "bbox": [ 304, 552, 524, 585 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 552, 524, 585 ], "spans": [ { "bbox": [ 304, 552, 524, 585 ], "type": "text", "content": "Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. 2023a. 
Efficient streaming language models with attention sinks. arXiv.

Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. 2023b. Efficient streaming language models with attention sinks. arXiv preprint arXiv:2309.17453.

Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, et al. 2023. Effective long-context scaling of foundation models. arXiv preprint arXiv:2309.16039.

Peng Xu, Wei Ping, Xianchao Wu, Lawrence McAfee, Chen Zhu, Zihan Liu, Sandeep Subramanian, Evelina Bakhturina, Mohammad Shoeybi, and Bryan Catanzaro. 2023. Retrieval meets long context large language models. arXiv preprint arXiv:2310.03025.

An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. 2024. Qwen2 technical report. arXiv preprint arXiv:2407.10671.

Howard Yen, Tianyu Gao, Minmin Hou, Ke Ding, Daniel Fleischer, Peter Izsak, Moshe Wasserblat, and Danqi Chen. 2024. HELMET: How to evaluate long-context language models effectively and thoroughly. arXiv preprint arXiv:2410.02694.

Alex Young, Bei Chen, Chao Li, Chengen Huang, Ge Zhang, Guanwei Zhang, Heng Li, Jiangcheng Zhu, Jianqun Chen, Jing Chang, et al. 2024. Yi: Open foundation models by 01.AI. arXiv preprint arXiv:2403.04652.

Tao Yuan, Xuefei Ning, Dong Zhou, Zhijie Yang, Shiyao Li, Minghui Zhuang, Zheyue Tan, Zhuyu Yao, Dahua Lin, Boxun Li, et al. 2024. LV-Eval: A balanced long-context benchmark with 5 length levels up to 256k. arXiv preprint arXiv:2402.05136.

Xinrong Zhang, Yingfa Chen, Shengding Hu, Zihang Xu, Junhao Chen, Moo Hao, Xu Han, Zhen Thai, Shuo Wang, Zhiyuan Liu, et al. 2024. ∞Bench: Extending long context evaluation beyond 100k tokens. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15262–15277.

A Appendix

A.1 Results of models' long-text enhancement methods on LongBench

This section presents the performance of four long-context enhancement methods on three LongBench tasks. The colored dashed lines represent the average score of each model on the corresponding task. The size of the markers corresponds to the proportion of each text length within the entire dataset: the larger the marker, the higher the proportion.
The results exhibit significant variation across tasks of different lengths within the same dataset. All results are in Appendix A.1.

A.2 Details about how to construct each task

KV Retrieval. This task primarily evaluates the model's ability to extract critical information while ignoring irrelevant content and noisy information. (1) Context Construction: Three key-value pairs (k1, v1; k2, v2; k3, v3) are generated using UUIDs. The value of each pair serves as the key of the subsequent pair (v1 = k2; v2 = k3). These key-value pairs are randomly inserted into different noisy contexts. The noise introduces irrelevant or distracting information, simulating real-world challenges. (2) Question Setup: The question asks the model to identify the value corresponding to a specific key. (3) Evaluation Metric: The task is evaluated using accuracy (Acc). If the model correctly identifies the value associated with the queried key, its accuracy score is incremented by one.
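The chained construction above (v1 = k2, v2 = k3, pairs scattered across noisy passages) can be sketched as follows; the helper name, the insertion sentence, and the choice of querying k1 are illustrative assumptions, not taken from our released code.

```python
import random
import uuid

def make_kv_sample(noisy_contexts):
    """Build one chained KV-retrieval sample: v1 = k2 and v2 = k3."""
    k1, v1, v2, v3 = (str(uuid.uuid4()) for _ in range(4))
    pairs = [(k1, v1), (v1, v2), (v2, v3)]  # the value of each pair keys the next

    # Scatter the pairs into randomly chosen noisy passages (hypothetical wording).
    contexts = list(noisy_contexts)
    for k, v in pairs:
        i = random.randrange(len(contexts))
        contexts[i] += f"\nThe value of the key {k} is {v}."

    question = f"What is the value of the key {k1}?"
    return "\n\n".join(contexts), question, v1  # v1 is the gold answer for k1

ctx, question, answer = make_kv_sample(
    ["Noisy passage A.", "Noisy passage B.", "Noisy passage C."]
)
```

Because the keys are fresh UUIDs, a correct answer can only come from the context, never from prior knowledge.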
Counting Stars. Following Song et al. (2024), this task assesses the model's ability to extract critical information across multiple documents, maintain the correct sequence when aggregating information, and resist distractions from misleading or altered options. (1) Context Construction: Four noisy context passages are selected from all noisy context passages, and each passage is appended with a sentence in the format "The little penguin counted N ★", where N represents a specific number of stars counted in that passage. (2) Question Setup: The model is tasked with identifying the sequence of star counts in the order of sentence appearance, e.g., [38, 10, 90, 42]. The task provides multiple-choice options, including the correct sequence and several distractors. Distractors are generated by swapping numbers, modifying values, or changing the order to increase difficulty. (3) Evaluation Metric: The task is evaluated using accuracy (Acc). If the model selects the correct sequence, its accuracy score is incremented by one.

Passage Retrieval.
By focusing on comprehension and recognition, this task challenges the model's ability to extract and correlate key information in a multi-document setting. (1) Context Construction: A single data sample comprises multiple articles, each sourced from a distinct domain. These articles are concatenated to form the context. (2) Question Setup: The model is provided with the summary of one specific article from the context. The task is to identify which article in the context corresponds to the given summary. (3) Evaluation Metric: The task is evaluated using accuracy (Acc). If the model correctly identifies the article corresponding to the summary, its accuracy score is incremented by one.

Passage Count. The task assesses a model's ability to understand and integrate global key information by determining the number of unique articles within a multi-article context. (1) Context Construction: Each data sample comprises multiple articles sourced from different domains. Some articles are repeated multiple times within the context to add redundancy and complexity. (2) Question Setup: The model is tasked with identifying the total number of unique (non-repeated) articles in the context. (3) Evaluation Metric: The task is evaluated using accuracy (Acc). If the model correctly identifies the count of unique articles, its accuracy score is incremented by one.

Single-Doc QA. The task evaluates a model's ability to answer questions specific to a single article within a multi-article context.
(1) Context Construction: Each data sample consists of multiple articles from different domains. A specific question is posed about one particular article within the context. (2) Evaluation Metric: The model's answers are assessed by another large language model (e.g., GPT-4o-mini). Evaluation is based on two dimensions: Fluency is scored on a 3-point scale (0, 1, 2), evaluating the coherence and readability of the answer; Correctness is scored on a 4-point scale (0, 1, 2, 3), assessing the factual accuracy of the response in relation to the context. The final score is calculated as the product of the Fluency and Correctness scores: Final Score = Fluency × Correctness. (3) Prior Knowledge Filtering: To filter out the model's prior knowledge, we introduce a filtering process. In a no-context scenario, if the model's response score exceeds a certain threshold, it indicates that the

Figure 9: Illustration of NTK's performance on three LongBench tasks.
Figure 10: Illustration of PI's performance on three LongBench tasks.

Figure 11: Illustration of YaRN's performance on three LongBench tasks.

Figure 12: Illustration of LongLoRA's performance on three LongBench tasks.

Figure 13: Verifying the reliability of LongBench: results of two models of different sizes from the same LM family tree, showing their scores on different tasks across various context lengths. Each color represents a specific task, with solid lines indicating the larger model and dashed lines the smaller model. The results of different LMs from the same LM family tree largely validate the general trend: the larger model tends to get a higher score, while the score decreases as the context length increases.
model is relying on prior knowledge. In such cases, the data is excluded from the statistical analysis.

Multi-Doc QA. The task evaluates a model's ability to integrate information from multiple articles and provide coherent, accurate answers to questions that require a global understanding of the context. (1) Context Construction: Each data sample contains multiple articles from different domains.
The question posed requires the model to synthesize information across multiple articles to generate the correct answer. (2) Evaluation Metric: As in the Single-Doc QA task, the model's answers are evaluated by another large language model along the same dimensions. (3) Prior Knowledge Filtering: The same filtering as in the Single-Doc QA task is applied.

Single-Doc Sum. The task evaluates a model's ability to generate concise and accurate summaries for a specific article within a multi-article context. (1) Context Construction: Each data sample consists of multiple articles from different domains. (2) Question Setup: The model is tasked with summarizing the content of one specific article from the context. (3) Evaluation Metric: The generated summary is assessed by another large language model. Two scoring dimensions are considered: Fluency evaluates the coherence and readability of the summary and is scored on a 2-point scale: 0 (poor fluency) or 1 (good fluency). Precision measures the relevance of the summary by comparing each sentence in the model's output to the reference summary, and is calculated as Precision = (number of relevant sentences) / (total number of sentences in the summary). The final score is the product of these two dimensions: Final Score = Fluency × Precision. By requiring accurate and readable summaries, this task emphasizes the model's capacity for effective information synthesis and integration.
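The two composite scores follow directly from the definitions above; the helper functions are our own illustration (the component scores themselves are assigned by the LLM judge, not computed here).

```python
def qa_final_score(fluency: int, correctness: int) -> int:
    # QA tasks: Fluency in {0, 1, 2}, Correctness in {0, 1, 2, 3},
    # both assigned by an LLM judge (e.g., GPT-4o-mini).
    assert fluency in (0, 1, 2) and correctness in (0, 1, 2, 3)
    return fluency * correctness

def sum_final_score(fluency: int, relevant: int, total: int) -> float:
    # Summarization tasks: Fluency in {0, 1};
    # Precision = relevant sentences / total sentences in the summary.
    assert fluency in (0, 1) and 0 <= relevant <= total and total > 0
    return fluency * (relevant / total)
```

Note that a fluency score of 0 zeroes out the final score in both cases, so unreadable answers cannot earn partial credit.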
Multi-Doc Sum. The task evaluates a model's ability to integrate information from multiple articles and produce a coherent and accurate summary of their shared content. (1) Context Construction: Each data sample consists of multiple articles from different domains. (2) Question Setup: The model is tasked with summarizing the relevant content from all provided articles. (3) Evaluation Metric: As in the Single-Doc Sum task, the model's answers are evaluated by another large language model along the same dimensions. By requiring effective summarization of multi-document content, this task highlights the model's ability to synthesize and generalize information across diverse sources.

A.3 Prompts used in each task

This section presents the prompts used in each task. Here, {context} represents the entire context constructed from articles in the noisy context sources and real context sources, {input} represents the question for the task, and {instruction} represents the model-specific instructions. For example, in Single-Doc QA, the instruction might be "Answer the question related to Passage 1", indicating that the question is specifically based on Passage 1.
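Filling the {context}, {input}, and {instruction} slots can be sketched as below; the constant name and exact whitespace are our assumptions, shown here with the KV Retrieval template.

```python
# One of the templates from this section (KV Retrieval); braces mark the slots.
KV_PROMPT = (
    "There are some passages below sourced from many different fields.\n\n"
    "{context}\n\n"
    "Given several key-value pairs in these passages, you need to find the value "
    "of the key. Read the question related to these key-value pairs and give the "
    "correct answer. {input}"
)

def fill(template: str, **fields: str) -> str:
    # str.format substitutes every {placeholder}; a missing field raises
    # KeyError, which guards against sending a prompt with an unfilled slot.
    return template.format(**fields)

prompt = fill(
    KV_PROMPT,
    context="<concatenated noisy and real passages>",
    input="What is the value of the key ...?",
)
```

Templates that also take an {instruction} slot (the QA tasks) are filled the same way with one extra keyword argument.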
KV Retrieval. There are some passages below sourced from many different fields. \n\n {context} \n\n Given several key-value pairs in these passages, you need to find the value of the key. Read the question related to these key-value pairs and give the correct answer. {input}

Counting Stars. There are some passages below sourced from many different fields.\n\nOnly output the results without any explanation. Read the following question and give the correct answer: {input} The final answer is:

Passage Retrieval. Here are some passages from many different fields, along with a summarization. Please determine which passage the summarization is from. \n\n {context} \n\n The following is a summarization.
\n\n {input} \n\n Please enter the number of the passage that the summarization is from. The answer format must be like "Passage 1", "Passage 2", etc. \n\n The answer is Passage

Passage Count. There are some paragraphs below sourced from many different fields. Some of them may be duplicates. Please carefully read these paragraphs and determine how many unique paragraphs there are after removing duplicates. In other words, how many non-repeating paragraphs are there in total? \n\n {context} \n\n Please enter the final count of unique paragraphs after removing duplicates. The output format should only contain the number, such as 1, 2, 3, and so on.
\n\n The final answer is:

Single-Doc QA. Answer the question based on the given passages. Only give me the answer and do not output any other words. \n\n The following are given passages and these passages are from many different fields. \n\n {context} \n\n Answer the question based on the given passages following the instruction: \n {instruction} \n\n Question: {input} \n Only give me the answer and do not output any other words.
Answer: \n

Multi-Doc QA. Answer the question based on the given passages. Only give me the answer and do not output any other words. \n\n The following are given passages and these passages are from many different fields. \n {context} \n\n Answer the question based on the given passages following the instruction: \n {instruction} \n\n Question: {input} \n
Only give me the answer and do not output any other words. Answer: \n

Single-Doc Sum. You are given several passages as follows, but not all of them need to be summarized. \n\n {context} \n\n Please follow these instructions: \n 1. {input} \n 2. Ignore and do not summarize any passages not listed above. \n 3. For the selected passages, the summary should include: the main arguments or conclusions of each article, the key evidence or supporting data presented, and any unique or innovative points made in the passages. \n 4. The summary should be concise, focusing only on the most important information from the passages.
Now, please generate the summary for the specified passage, following the instructions carefully. \n Summary: \n

Multi-Doc Sum. You are given several passages as follows, but not all of them need to be summarized. \n\n Instructions: \n 1. {input} \n 2. Ignore and do not summarize any passages not listed above. \n 3. All the selected passages should be summarized into a few short sentences; do not summarize each selected passage separately. The summary should include: the main arguments or conclusions of each article, the key evidence or supporting data presented, and any unique or innovative points made in the passages. \n 4. The summary should be concise, focusing only on the most important information from the passages.
Now, please combine and summarize the main ideas from the selected relevant passages into one cohesive summary, following the instructions carefully. \n\n Summary: \n

A.4 Further verification of the reliability of the proposed benchmark

To further verify the reliability of the generated dataset, we evaluate three model families (Llama 3.2, Llama 3.1, and Phi 3), selecting two different model sizes from each family. Given that these models are from the same series but vary in size, the expected trends on the dataset are as follows: (1) Model Size Effect: Larger models should generally achieve higher scores than smaller models within the same series. (2) Text Length

Table 7: Results of the average performance of five models across all tasks on 100-LongBench.
Base Ability represents the model's score within lengths of " }, { "bbox": [ 302, 69, 526, 225 ], "type": "inline_equation", "content": "2k" }, { "bbox": [ 302, 69, 526, 225 ], "type": "text", "content": ", " }, { "bbox": [ 302, 69, 526, 225 ], "type": "inline_equation", "content": "4k" }, { "bbox": [ 302, 69, 526, 225 ], "type": "text", "content": " and " }, { "bbox": [ 302, 69, 526, 225 ], "type": "inline_equation", "content": "6k" }, { "bbox": [ 302, 69, 526, 225 ], "type": "text", "content": ". Avg score represents the average score across the lengths " }, { "bbox": [ 302, 69, 526, 225 ], "type": "inline_equation", "content": "8k" }, { "bbox": [ 302, 69, 526, 225 ], "type": "text", "content": ", " }, { "bbox": [ 302, 69, 526, 225 ], "type": "inline_equation", "content": "16k" }, { "bbox": [ 302, 69, 526, 225 ], "type": "text", "content": ", " }, { "bbox": [ 302, 69, 526, 225 ], "type": "inline_equation", "content": "32k" }, { "bbox": [ 302, 69, 526, 225 ], "type": "text", "content": ", " }, { "bbox": [ 302, 69, 526, 225 ], "type": "inline_equation", "content": "64k" }, { "bbox": [ 302, 69, 526, 225 ], "type": "text", "content": " and " }, { "bbox": [ 302, 69, 526, 225 ], "type": "inline_equation", "content": "128k" }, { "bbox": [ 302, 69, 526, 225 ], "type": "text", "content": ". Avg LC represents the average score computed using our proposed metric. " }, { "bbox": [ 302, 69, 526, 225 ], "type": "inline_equation", "content": "57.4_{(1)}" }, { "bbox": [ 302, 69, 526, 225 ], "type": "text", "content": " indicates that the current model has a score of 57.4 at the given context length, with a ranking of 1. Claimed Length refers to the maximum context length that the model claims to support. Qwen 2.5-14B and Qwen 2.5-7B use YaRN to extend their context length to 128k, so the original context length is specified in Claimed Length." 
} ] } ], "index": 5 }, { "type": "table", "bbox": [ 305, 225, 523, 327 ], "blocks": [ { "bbox": [ 305, 225, 523, 327 ], "lines": [ { "bbox": [ 305, 225, 523, 327 ], "spans": [ { "bbox": [ 305, 225, 523, 327 ], "type": "table", "html": "
<table><thead><tr><th>Model</th><th>Claimed Length</th><th>Base Ability</th><th>Avg score</th><th>Avg LC</th></tr></thead><tbody>
<tr><td>Llama-3.1-70B-Instruct</td><td>128K</td><td>67.5 (1)</td><td>52.55 (1)</td><td>-22.18 (2)</td></tr>
<tr><td>Qwen2.5-14B-Instruct</td><td>32K</td><td>59.1 (2)</td><td>40.77 (3)</td><td>-31.12 (7)</td></tr>
<tr><td>Phi-3-128k-medium</td><td>128K</td><td>57.4 (3)</td><td>43.28 (2)</td><td>-24.65 (4)</td></tr>
<tr><td>Qwen2.5-7B-Instruct</td><td>32K</td><td>57.4 (4)</td><td>39.80 (4)</td><td>-30.69 (6)</td></tr>
<tr><td>Llama-3.2-3B-Instruct</td><td>128K</td><td>51.2 (5)</td><td>34.81 (7)</td><td>-32.06 (8)</td></tr>
<tr><td>Phi-3-128k-mini</td><td>128K</td><td>48.2 (6)</td><td>36.78 (5)</td><td>-23.85 (3)</td></tr>
<tr><td>Llama-3.1-8B-Instruct</td><td>128K</td><td>44.0 (7)</td><td>36.37 (6)</td><td>-17.46 (1)</td></tr>
<tr><td>Llama-3.2-1B-Instruct</td><td>128K</td><td>28.7 (8)</td><td>20.45 (8)</td><td>-28.88 (5)</td></tr>
</tbody></table>
", "image_path": "ec7c1e10264ed99cd160e5388ec2e0b5ba0e2933b5535e7cfb70eb1c9779657c.jpg" } ] } ], "index": 6, "angle": 0, "type": "table_body" } ], "index": 6 }, { "type": "image", "bbox": [ 307, 343, 521, 365 ], "blocks": [ { "bbox": [ 307, 343, 521, 365 ], "lines": [ { "bbox": [ 307, 343, 521, 365 ], "spans": [ { "bbox": [ 307, 343, 521, 365 ], "type": "image", "image_path": "66ff1f928d7b25fc8e9a6b9d239ec6e244e3fb48b7a2bea89078440ca34ef210.jpg" } ] } ], "index": 7, "angle": 0, "type": "image_body" } ], "index": 7 }, { "type": "image", "bbox": [ 314, 371, 506, 468 ], "blocks": [ { "bbox": [ 314, 371, 506, 468 ], "lines": [ { "bbox": [ 314, 371, 506, 468 ], "spans": [ { "bbox": [ 314, 371, 506, 468 ], "type": "image", "image_path": "29516794af79382e41ac2a52dd0c9d0929cee92c287b2124f5e4f665ea392015.jpg" } ] } ], "index": 8, "angle": 0, "type": "image_body" }, { "bbox": [ 302, 478, 525, 515 ], "lines": [ { "bbox": [ 302, 478, 525, 515 ], "spans": [ { "bbox": [ 302, 478, 525, 515 ], "type": "text", "content": "Figure 14: Results of eight open-source models on all tasks in 100-LongBench, showing their scores at different context lengths." } ] } ], "index": 9, "angle": 0, "type": "image_caption" } ], "index": 8 }, { "bbox": [ 302, 538, 526, 633 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 538, 526, 633 ], "spans": [ { "bbox": [ 302, 538, 526, 633 ], "type": "text", "content": "Effect: As the text length increases, the performance scores should decrease across all models. As shown in Figure 13, the results basically follow these expected trends: larger models tend to score higher, and performance decreases as text length increases. This consistent pattern indicates that the dataset generation process is accurate and reliably." 
} ] } ], "index": 10 }, { "bbox": [ 302, 645, 520, 672 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 645, 520, 672 ], "spans": [ { "bbox": [ 302, 645, 520, 672 ], "type": "text", "content": "A.5 Results of different Open-source models on our proposed benchmark" } ] } ], "index": 11 }, { "bbox": [ 302, 679, 526, 733 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 679, 526, 733 ], "spans": [ { "bbox": [ 302, 679, 526, 733 ], "type": "text", "content": "This section first introduces the experiments conducted using 100-LongBench and the proposed metric, aimed at evaluating the long-context capabilities of various popular open-source large models." } ] } ], "index": 12 }, { "bbox": [ 302, 735, 525, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 735, 525, 775 ], "spans": [ { "bbox": [ 302, 735, 525, 775 ], "type": "text", "content": "We select eight open-source models. For each of the eight tasks, we generated 100 samples at each context length (8k, 16k, 32k, 64k and 128k)" } ] } ], "index": 13 } ], "discarded_blocks": [ { "bbox": [ 284, 780, 312, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 284, 780, 312, 791 ], "spans": [ { "bbox": [ 284, 780, 312, 791 ], "type": "text", "content": "17575" } ] } ], "index": 14 } ], "page_size": [ 595, 841 ], "page_idx": 15 }, { "para_blocks": [ { "type": "table", "bbox": [ 71, 106, 522, 152 ], "blocks": [ { "bbox": [ 67, 69, 524, 105 ], "lines": [ { "bbox": [ 67, 69, 524, 105 ], "spans": [ { "bbox": [ 67, 69, 524, 105 ], "type": "text", "content": "Table 8: Performance of LLaMA 3.2-1B-Instruct with and without domain-specific tasks. We report scores across different context lengths and two average metrics: overall average and average on long contexts (32k+). Adding healthcare and law tasks leads to a slight drop in average long-context performance." 
} ] } ], "index": 0, "angle": 0, "type": "table_caption" }, { "bbox": [ 71, 106, 522, 152 ], "lines": [ { "bbox": [ 71, 106, 522, 152 ], "spans": [ { "bbox": [ 71, 106, 522, 152 ], "type": "table", "html": "
<table><thead><tr><th>Benchmark</th><th>base</th><th>8k</th><th>16k</th><th>32k</th><th>64k</th><th>128k</th><th>avg(score)</th><th>avg(LongScore)</th></tr></thead><tbody>
<tr><td>original</td><td>24.41</td><td>22.42</td><td>20.55</td><td>18.54</td><td>17.92</td><td>15.44</td><td>18.97</td><td>-22.27</td></tr>
<tr><td>original + healthcare &amp; law</td><td>24.58</td><td>21.97</td><td>18.49</td><td>15.77</td><td>16.64</td><td>12.83</td><td>17.14</td><td>-30.27</td></tr>
</tbody></table>
", "image_path": "9d8b6e0380ef3e983279306682f16184e40bd51933756a53edef0b8acc7cd209.jpg" } ] } ], "index": 1, "angle": 0, "type": "table_body" } ], "index": 1 }, { "bbox": [ 67, 173, 291, 280 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 173, 291, 280 ], "spans": [ { "bbox": [ 67, 173, 291, 280 ], "type": "text", "content": "to obtain the scores. The model's Long-context Capability was then calculated, using the performance at " }, { "bbox": [ 67, 173, 291, 280 ], "type": "inline_equation", "content": "2k" }, { "bbox": [ 67, 173, 291, 280 ], "type": "text", "content": ", " }, { "bbox": [ 67, 173, 291, 280 ], "type": "inline_equation", "content": "4k" }, { "bbox": [ 67, 173, 291, 280 ], "type": "text", "content": ", and " }, { "bbox": [ 67, 173, 291, 280 ], "type": "inline_equation", "content": "6k" }, { "bbox": [ 67, 173, 291, 280 ], "type": "text", "content": " as the base ability. Finally, the average scores across all tasks for the five models are computed. Table 7 presents the final average results and the corresponding rankings of the five models. Figure 14 displays the average scores for all tasks at each context length for the five models." } ] } ], "index": 2 }, { "bbox": [ 67, 290, 262, 317 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 67, 290, 262, 317 ], "spans": [ { "bbox": [ 67, 290, 262, 317 ], "type": "text", "content": "A.6 Results of models with and without domain-specific tasks" } ] } ], "index": 3 }, { "bbox": [ 67, 321, 291, 388 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 321, 291, 388 ], "spans": [ { "bbox": [ 67, 321, 291, 388 ], "type": "text", "content": "We have added long text datasets from the recommended domains (law and healthcare) to enhance the comprehensiveness of our benchmark. Evaluating the capability of LLMs to handle such domain-specific scenarios is indeed a crucial need." 
} ] } ], "index": 4 }, { "bbox": [ 67, 389, 291, 455 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 389, 291, 455 ], "spans": [ { "bbox": [ 67, 389, 291, 455 ], "type": "text", "content": "Specifically, we mix up CaseSumm, MedOdyssey, and Medical Summary into our original dataet. We reevaluate the performance of the LLaMA 3.2 1B-Instruct model with and without such datasets." } ] } ], "index": 5 }, { "bbox": [ 67, 456, 291, 578 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 456, 291, 578 ], "spans": [ { "bbox": [ 67, 456, 291, 578 ], "type": "text", "content": "As is shown in Table 8, incorporating healthcare and law-focused domain-specific data leads to a slight performance decline in long text scenarios, likely because the model lacks comprehensive knowledge in these specialized fields. However, the overall trend is steady. We plan to incorporate this additional evaluation to our updated manuscript and add more discussion regarding domain-specific long context evaluations." } ] } ], "index": 6 } ], "discarded_blocks": [ { "bbox": [ 284, 780, 313, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 284, 780, 313, 791 ], "spans": [ { "bbox": [ 284, 780, 313, 791 ], "type": "text", "content": "17576" } ] } ], "index": 7 } ], "page_size": [ 595, 841 ], "page_idx": 16 } ], "_backend": "vlm", "_version_name": "2.6.4" }